Tuesday, October 1, 2019

Analytic Confidence And The New England Patriots: A Hypothesis

"Don't try to stop me!  I'm having a thought!"  (Image Source)
I was driving to work this morning, thinking about analytic confidence (as one does), and I had a thought.  An interesting thought.  Before I tell you what it was, you need to take the one-question survey at the link below to see if my thought has any merit (I will post the results as a follow-up to this post):

Which statement seems more clear to you?

Did you take the survey?  If not, go back and take it!

And now?  

OK!  Thanks!

People are often confused by the difference between an estimate and confidence in that estimate.  This confusion is driven, in very large part, by the way the terms are often (mis)used in formal analytic writing.  It is not uncommon to see someone talk about their confidence when they are really making an estimate or, less commonly, use estimative language to convey confidence.  

The two concepts, however, are very different.  The estimate communicates what you think is likely (or unlikely) to happen in the future.  Confidence speaks to the likelihood that something is mucking up the process used to establish that estimate.  

This is where the New England Patriots come in.  For example, I think it is very likely that the New England Patriots will win their next game (Note:  I am using the term "very likely" here the same way the DNI does).  I watch football but am by no means an expert.  I don't even know who the Patriots are playing next week.  I just know that they are usually a good team and that they usually win a lot of games.  So, while I still think it is very likely that the Patriots will win, my confidence in that estimate is low.  The process I used to derive that estimate was so weak that I won't be surprised to find out they have a bye next week.

On the other hand, it is easy to imagine a forecaster who is steeped in football lore.  This hypothetical forecaster has an excellent track record derived in large part from a highly structured and efficient process for determining the odds of a victory.  This forecaster might say exactly the same thing I did--the Patriots are very likely to win their next game--but, because of a superior process, this forecaster has high confidence in their estimate.

It is important to convey both--the estimate itself and analytic confidence--when communicating the results of analysis to a decisionmaker.  To do anything less runs the risk of the decisionmaker misinterpreting the findings or assuming things about the process that are not true.  

It is also important to note that the "analytic confidence" mentioned here differs significantly from the far more commonly discussed notion of psychological confidence.  Psychological confidence is a statement about how one "feels" and can often be caused by cognitive bias or environmental factors.  There is no reliable relationship between forecasting accuracy and psychological confidence.  

Analytic confidence, on the other hand, is based on legitimate reasons why the analysis is more likely to be correct.  For example, analysis derived from facts presented by reliable sources is more likely to be correct than analysis derived from sketchy or disreputable sources.  In fact, there are a number of legitimate reasons for more rather than less analytic confidence (you can read about them here).

It is, of course, possible for analytic and psychological notions of confidence to be consistent, at least in the context of an individual forecast.  I, for example, "feel" that I have no reason to be confident in my estimate about the Patriots.  I also know, as I go down the list of elements responsible for legitimate analytic confidence, that very few are present.  In this case, "low" applies to both the psychological and analytic variants of my confidence.

That kind of alignment is not the norm, however.  Overconfidence bias typically causes feelings of confidence to outpace a more rational assessment of the quality of the analytic process.  Underconfidence, on the other hand, is typically caused by overthinking a problem and is more common among experts than you might think.

Now to my thought.  Finally.

One of the big problems with analytic confidence is communicating it to decisionmakers in an intuitive way.  Part of this problem occurs, no doubt, because of the different meanings the word "confidence" can have.  Most people, when they hear the word "confidence" used in casual conversation, assume you mean the psychological kind.  Adding the word "analytic" in front of "confidence" doesn't seem to help much, as most people don't really have a notion of what analytic confidence is or how it differs from the more commonly used, psychological type of confidence (They don't want to know, either.  They have enough to remember).

The classic solution has been to ignore analytic confidence completely.  This is wrong for all the reasons discussed above.  Occasionally, however, analysts elect to include a statement of analytic confidence, typically at the end of the analysis.  Part of this is due to the "Bottom Line Up Front (BLUF)" style of writing that is common to analysis.  The logic here is that the most important thing is the estimate.  That becomes the bottom line and, therefore, the first thing mentioned in the paper or briefing.

What if we flip that on its head?  What if we go, at least in casual conversation, with the analytic confidence first?  

That is how I arrived at my two formulations:
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
These two statements say exactly the same thing in terms of content.  However, I think the form of the first statement better communicates what the analyst actually intends.  In other words, I think the first statement establishes a slightly different context.  Furthermore, I think this context will likely help the listener interpret my use of the word "confidence" correctly.  That is, the first statement is better than the second at suggesting that I am using confidence as a way to highlight the process I used to derive the estimate and not just how I feel about it.  
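To make the separation concrete, here is a minimal Python sketch (my own illustration, not anything drawn from an official style guide) that treats a forecast as two distinct pieces of data: the estimate, expressed as a DNI-style likelihood term, and the analytic confidence in the process behind it.  The formatter leads with confidence, mirroring the first formulation above; the "roughly 80-95%" gloss in the comment is an approximation, not an authoritative reference.

    # A minimal, illustrative sketch of a forecast that carries both an
    # estimate and a separate statement of analytic confidence.
    from dataclasses import dataclass

    @dataclass
    class Forecast:
        statement: str    # what is being forecast
        likelihood: str   # estimative term, e.g. "very likely" (roughly 80-95% in DNI-style usage)
        confidence: str   # analytic confidence in the process: "low", "moderate", or "high"

        def to_sentence(self) -> str:
            # Lead with confidence so the listener hears it as a statement about
            # the process used to reach the estimate, not about how the analyst "feels."
            return (f"It's a {self.confidence} confidence estimate, but "
                    f"{self.statement} is {self.likelihood}.")

    patriots = Forecast("a Patriots win this week", "very likely", "low")
    print(patriots.to_sentence())
    # -> It's a low confidence estimate, but a Patriots win this week is very likely.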

Another reason I think the second statement is inferior is that it sounds confusing to the casual listener.  It is theoretically better (the bottom line is definitely up front) but, unless you are steeped in the arcana of analytic writing, it is not easily interpreted and could lead to confusion.

That's the reason for the quick poll.  I just wanted to see what you thought--to see, in the words of Gertrude Stein, if there was any there there.  

Thanks and I will post what I found (and my inevitably shocked reaction to it) in a later post.

Monday, September 9, 2019

What Is A "Gray Rhino" And How Do I Tackle One? (+ That Time I Died For 7 Seconds)

A perfectly ordinary gray rhino.  You still wouldn't want to be surprised by it.  (Photo: Krish Dulal, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=12888627)
I am taking a break today from my series on How To Think About The Future to talk about a new term I just heard:  The Gray Rhino.

A Gray Rhino is basically the opposite of a Black Swan.  It is a high impact, high probability event that not enough people are paying attention to.  

A good example of this may be the recent advances in the biological sciences.  When I began my current job, I asked 20 of the best thinkers I know, "What is the most under-hyped, under-rated technology or trend?"  I wanted to understand what I might be missing, what I should be examining more carefully.

I was surprised at the number of people who came back and said, in one form or another, "Biology."  Whether it is the prospects (and horrors) of gene editing, immunotherapies, mycorrhizal networks, bacterial manipulation, our understanding of the brain, or our ability to create whole new brains from scratch (!), advances in the biological sciences do seem poised to revolutionize our lives, yet they do not get as much attention as other trends like artificial intelligence.  This is a Gray Rhino:  something that is almost certain to happen, that will have a massive impact when it does, but that is not getting the attention it deserves.

Not everything is either a Black Swan or a Gray Rhino, however.  A good example may be Hurricane Dorian, which recently leveled the Bahamas before causing all sorts of havoc up the east coast of the US.  The forecasting models did a good job of estimating where the hurricane would go and when it would get there.  Likewise, the sheer size of the thing communicated just how devastating it was going to be.  While there are always people who cannot afford to leave the path of a hurricane (or have nowhere to go), or those foolish few who choose to ride it out for the hell of it, most people gave the storm the attention it deserved and did what they could to take appropriate precautions.

As I think about the problem of how to deal with true Gray Rhinos, though, it seems to me that this is not primarily a problem of collection or analysis.  Researchers have enough info in these situations, and they understand it well enough, at least, to raise the issue(s).

It appears to me to be, instead, a problem of production or, more accurately, communication.  Specifically, I think it is related to the Confidence Heuristic.  A heuristic is a fancy word for a rule of thumb, but a rule of thumb with a slight difference:  a rule of thumb is often learned (see the video below for an example). 



A heuristic, on the other hand, has developed over evolutionary time scales and is hardwired into the architecture of the brain.  The Confidence Heuristic says that, all other things being equal, we tend to accept the logic, reasoning, and forecasts of people who are confident in that logic, reasoning, and those forecasts.  We are biologically predisposed to believe those who are confident in their own beliefs.  More important, studies have shown that this is not necessarily a bad rule.  People who are genuinely confident are often right.  

For example, I remember the afternoon I died for seven seconds (it was less dramatic than that sounds...).  Fortunately, I was in one of the best possible places to die for a brief period of time--a hospital.  I had suffered several dizzy spells the day before, had been admitted for observation, and had been hooked up to a portable EKG.  When my heart stopped due to sick sinus syndrome, the docs were able to see exactly what had happened.  Shortly after I came around, a cardiac surgeon (whom I had never met) walked in with the readout, showed it to me, and said, "This buys you a pacemaker."

As they wheeled me to the OR, I remember asking the doctor, "How many of these have you done?"  She said, with absolute confidence, "Hundreds," and then she looked me dead in the eye and told me, "This is a piece of cake."

Her confidence in her skills was infectious.  I believed her, and because I did, I went into surgery with no worries and came out of it successfully.  She was correct to be confident as well.  She had, in fact, done hundreds of these surgeries, and for the last five years, this little piece of biotech (with its eight-year battery!) has kept me alive without any real issues.  

Politicians, TV hucksters, and other con artists, on the other hand, may not know about the Confidence Heuristic but they sure know how to use it!  Speaking confidently and in absolute rather than nuanced terms is the hallmark of almost every political speech and all of the hours of editorial commentary masquerading as news shows.  Nuance is used to cast doubt on the other side's position while confidence is required to promote your own position.  
(Note:  This, coupled with Confirmation Bias and the Dunning-Kruger Effect, explains much of the internet.)
In other words, Gray Rhinos likely exist because of the way Gray Rhino communities of interest choose to talk about Gray Rhinos.  Measured tones, nuanced forecasts, and managed expectations are the language of science and (much of) academia.  Hyperbole, bold predictions, and showmanship, however, are what generate the buzz.  

What should you do if you find yourself working on a Gray Rhino problem?  Hiring a frontman to hype your rhino is likely excessive and can get you into real trouble (see Theranos and the MIT Media Lab for a few cautionary tales).  That said, developing a relationship with the press, being able to explain your research in layman's terms, and celebrating the genuine "wins" in your field as they come along all seem to make sense.

Finally, if you do decide to go the frontman route (and remember, I don't recommend it), at least get a guy like this:

Monday, August 26, 2019

How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)

I am writing a series of posts about how to think about the future.  In case you missed the first two parts, you can find them here:

Part 1--Questions About Questions
Part 2--What Do You Control

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

*******************

Mike Hayden, former Director of the CIA, likes to tell this story:

"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." (Italics mine)
I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment.  No.  The #1 reason was the difficulty of the questions. 

Understanding why some questions are more difficult than others is incredibly important.  Difficult questions typically demand more resources--and have more consequences.  What makes it particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why.  I have written about this elsewhere (here and here and here, for example), and may have become a bit like the man in the  "What makes soup, soup?" video below...




No one, however, to my knowledge, has solved the problem of reliably categorizing questions by difficulty.

I have a hypothesis, however.

I think that the AI guys might have taken a big step towards cracking the code.  When I first heard about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there.  That was way back in 2011, though.  As I went looking for updates for this series of posts, I got really excited.  There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig, in their book Artificial Intelligence:  A Modern Approach, may have gotten even closer to what is, essentially, a working definition of question difficulty.

Let me be clear here.  The AI community did not set out to figure out why some questions are more difficult than others.  They were looking to categorize AI tasks by difficulty.  My sense, however, is that, in so doing, they have inadvertently shed light on the more general question of question difficulty.  Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine):
  • Fully observable vs. partially observable -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.
  • Single agent vs. multi-agent -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.
  • Competitive vs. cooperative -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question.  Questions about enemies are inherently harder to answer than questions about allies.
  • Deterministic vs. stochastic -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)?  How much randomness is in the environment?
  • Episodic vs. sequential -- Questions about things that happen over time are more difficult than questions about things that happen once.
  • Static vs. dynamic -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.
  • Discrete vs. continuous -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world," spaces.
  • Known vs. unknown -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work.  
Why is this important to questions about the future?  Two reasons.  First, it is worth noting that most questions about the future, particularly those about things that are outside our control, fall at the harder rather than easier end of each of these criteria.  Second, understanding the specific reasons why these questions are hard also gives clues as to how to make them easier to answer.  
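As a thought experiment, you could even turn the list into a rough scoring rubric.  The sketch below is entirely my own (Russell and Norvig propose nothing of the sort):  it simply counts how many of the eight criteria fall on the harder end for a given question, with equal weights and example scores chosen purely for illustration.

    # A rough, illustrative difficulty score for a question, based on the eight
    # task-environment criteria above.  The equal weighting and the example
    # scores are my own assumptions, not part of Russell and Norvig's work.
    CRITERIA = [
        "partially_observable",  # vs. fully observable
        "multi_agent",           # vs. single agent
        "competitive",           # vs. cooperative
        "stochastic",            # vs. deterministic
        "sequential",            # vs. episodic
        "dynamic",               # vs. static
        "continuous",            # vs. discrete
        "unknown",               # vs. known
    ]

    def difficulty_score(question_traits: dict) -> int:
        """Count how many criteria fall on the 'harder' pole (0 = easiest, 8 = hardest)."""
        return sum(1 for criterion in CRITERIA if question_traits.get(criterion, False))

    # Example: "Will the Patriots win their next game?"  Hidden injury reports,
    # two sides competing, plenty of randomness--but a bounded, well-understood game.
    patriots_question = {
        "partially_observable": True,
        "multi_agent": True,
        "competitive": True,
        "stochastic": True,
        "dynamic": True,
    }
    print(difficulty_score(patriots_question))  # -> 5 of 8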

There is one more important reason why questions can be difficult.  It doesn't come from AI research.  It comes from the person (or organization) asking the question.  All too often, people either don't ask the "real" question they want answered or are incredibly unclear in the way they phrase their questions.  If you want some solutions to these problems, I suggest you look here, here and here.  

I was a big kid who grew up in a small town.  I only played Little League ball one year, but I had a .700 batting average.  Even when I was at my best physical condition as an adult, however, I doubt that I could hit a foul tip off a major league pitcher.  Hayden is right.  Meaningful questions about things outside your control are Major League questions, hard sliders on the corner of the plate.  Understanding that, and understanding what makes these questions so challenging, is a necessary precondition to taking the next step--answering them.

Next:  How Should We Think About Answers?  

Friday, August 16, 2019

How To Think About The Future (Part 2 - What Do You Control?)

Click on the image above to see the full mindmap.

I am writing a series of posts about how to think about the future.  In case you missed Part 1, you can find it here:

How To Think About The Future (Part 1 -- Questions About Questions)

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

****************

The great Stoic philosopher Epictetus wrote:
"Work, therefore to be able to say to every harsh appearance, 'You are but an appearance, and not absolutely the thing you appear to be.' And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you." (Italics mine)
There are good reasons to focus on questions about things you control.  Things you control you can understand or, at least, the data required to understand them is much easier to get.  Things you control you can also change (or change more easily).  Finally, you only get credit for the things you do with the things you control.  Few people get credit for just watching. 

Whole disciplines have been built around improving what you do with what you control.  MBA and Operations Research programs are both good examples of fields of study that focus mostly on improving decisions about how you use the resources under your control.  Indeed, focusing on the things you control is at the center of effectual reasoning, an exciting new take on entrepreneurship and innovation (for example, the entire crowdfunding/startup Quickstarter Project was built on effectuation principles, and they are the reason it was as successful as it was).

On the other hand, another great thinker from the ancient world once wrote,
"If you know the enemy and know yourself, you need not fear the result of a hundred battles." Sun Tzu, The Art Of War
Sun Tzu went on to outline the exact impact of not thinking about things you don't control:  
"If you know yourself but not the enemy, for every victory gained you will also suffer a defeat." 
Things outside of your control are much more squishy than things under your control.  The data is often incomplete, and what is there is often unclear.  It is pretty normal for the info to be, as Clausewitz would say, "of doubtful character," and it is rarely structured in nice neat rows with data points helpfully organized with labelled columns.  Finally, in an adversarial environment at least, you have to assume that at least some of the info you do have is deceptive--that it has been put there intentionally by your enemy or competitor to put you off the track.

People frequently run from questions about things that are outside of their control.  The nature of the info available can often make these kinds of questions seem unresolvable, as if no amount of thinking could lead to any greater clarity.

This is a mistake.  

Inevitably, in order to move forward with the things you do control, you have to come to some conclusions about the things you do not control.  A country's military looks very different if it expects the enemy to attack by sea vs. by land.  A company's marketing plan looks very different if it thinks its competitor will be first to market with a new type of product or if it will not.  Your negotiating strategy with a potential buyer of your house depends very much on whether you think the market in your area is hot or not.

The US military has a saying:  "Intelligence leads operations."  This is a shorthand way of driving home the point that your understanding of your environment, of what is happening around you, of the things outside of your control, determines what you do with the things under your control.  Whether you do this analysis in a structured, formal way or just go with your gut instinct, you always come to conclusions about your environment, about the things outside your control, before you act.  

Since you are going to do it anyway, wouldn't it be nice if there were some skills and tools you could learn to do it better?  It turns out that there are.  The last 20-30 years have seen an explosion in research about how to better understand the future of those things outside of our control.

More importantly, learning these skills and tools can probably help you understand things under your control better as well.  Things under your control often come with the same kinds of squishy data normally associated with things outside your control.  The opposite is much less likely to be true.  

Much of the rest of this series will focus on these tools and thinking skills, but first, we need to dig more deeply into the nature of the questions we ask about things outside our control and precisely why those questions are so difficult to answer.

(Next:  Why Are Questions About Things Outside Your Control So Difficult?)

Tuesday, July 30, 2019

How To Think About The Future (Part 1 -- Questions About Questions)

We don't think about the future; we worry about it.


Whether it's killer robots or social media or zero-day exploits, we love to rub our preferred, future-infused worry stone between our thumb and finger until it is either a thing of shining beauty or the death of us all (and sometimes both).  

This is not a useful approach.

Worry is the antithesis of thinking.  Worry is all about jumping to the first and usually the worst possible conclusion.  It induces stress.  It narrows your focus.  It shuts down the very faculties you need to think through a problem.  Worry starts with answers; thinking begins with questions.

What Are Your Questions?
“A prudent question is one-half of wisdom.” -- Francis Bacon
“The art of proposing a question must be held of higher value than solving it.” -- Georg Cantor
“If you do not know how to ask the right question, you discover nothing.” -- W. Edwards Deming
Given the importance of questions (and of asking the "right" ones), you would think that there would be more literature on the subject.  In fact, the question of questions is, in my experience, one of the great understudied areas.  A few years ago, Brian Manning and I took a stab at it and only managed to uncover how little we really know about how to think about, create, and evaluate questions.

For purposes of thinking about the future, however, I start with two broad categories to consider:  Speculative questions and meaningful questions.  

There is nothing wrong with a speculative question.  Wondering about the nature of things, musing on the interconnectedness of life, and even just staring off into space for a bit are time-honored ways to come up with new ideas and new answers.  We should question our assumptions, utilize methods like the Nominal Group Technique to leverage the wisdom of our collective consciousness, and explore all of the other divergent thinking tools in our mental toolkits.  

Speculation does not come without risks, however.  For example, how many terrorist groups would like to strike inside the US?  Let's say 10.  How are they planning to do it?  Bombs, guns, drones, viruses, nukes?  Let's say we can come up with 10 ways they can attack.  Where will they strike?  One of the ten largest cities in the US?  Do the math--you already have 1000 possible combinations of who, what, and where.
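The arithmetic is easy to check.  Here is a minimal sketch (the group, method, and city labels are placeholders, not real data):

    # Illustrative only: enumerate the who/what/where combinations described above.
    # The labels are placeholders, not real groups, methods, or targets.
    from itertools import product

    groups = [f"group_{i}" for i in range(10)]    # 10 hypothetical groups
    methods = [f"method_{i}" for i in range(10)]  # 10 hypothetical attack methods
    cities = [f"city_{i}" for i in range(10)]     # 10 largest US cities (placeholders)

    combinations = list(product(groups, methods, cities))
    print(len(combinations))  # -> 1000 possible who/what/where combinations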

How do we start to narrow this down?  Without some additional thinking strategies, we likely give in to cognitive biases like vividness and recency to narrow our focus.    Other aspects of the way our minds work--like working memory limitations--also get in the way.  Pretty soon, our minds, which like to be fast and certain even when they should be neither, have turned our 1 in 1000 possibility into a nice, shiny, new worry stone for us to fret over (and, of course, share on Facebook).

Meaningful questions are questions that are important to you--important to your plans, to your (or your organization's) success or failure.  Note that there are two criteria here.  First, meaningful questions are important.  Second, they are yours.  The answers to meaningful questions almost, by definition, have consequences.  The answers to these questions tend to compel decisions or, at least, further study.

It is entirely possible, however, to spend a lot of time on questions which are both of dubious relevance to you and not particularly important.  The Brits have a lovely word for this:  bikeshedding.  It captures our willingness to argue for hours about what color to paint the bikeshed while ignoring much harder and more consequential questions.  Bikeshedding, in short, allows us to distract ourselves from our speculations and our worries and feel like we are still getting something done.


Next:  What do you control?