Monday, August 26, 2019

How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)

I am writing a series of posts about how to think about the future.  In case you missed the first two parts, you can find them here:

Part 1--Questions About Questions
Part 2--What Do You Control

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

*******************

Mike Hayden, former Director of the CIA, likes to tell this story:

"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." (Italics mine)
I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment.  No.  The #1 reason was the difficulty of the questions. 

Understanding why some questions are more difficult than others is incredibly important.  Difficult questions typically demand more resources--and carry greater consequences.  What makes this particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why.  I have written about this elsewhere (here and here and here, for example), and may have become a bit like the man in the "What makes soup, soup?" video below...

[Video embedded in the original post: "What makes soup, soup?"]

To my knowledge, however, no one has solved the problem of reliably categorizing questions by difficulty.

I do have a hypothesis.

I think that the AI guys might have taken a big step towards cracking the code.  When I first heard about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there.  That was way back in 2011, though.  As I went looking for updates for this series of posts, I got really excited.  There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig, in their book Artificial Intelligence:  A Modern Approach, may have gotten even closer to what is, essentially, a working definition of question difficulty.

Let me be clear here.  The AI community did not set out to figure out why some questions are more difficult than others.  They were looking to categorize AI tasks by difficulty.  My sense, however, is that, in so doing, they have inadvertently shone a light on the more general question of question difficulty.  Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine, and I have added a rough code sketch after the list to make the criteria concrete):
  • Fully observable vs. partially observable -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.
  • Single agent vs. multi-agent -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.
  • Competitive vs. cooperative -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question.  Questions about enemies are inherently harder to answer than questions about allies.
  • Deterministic vs. stochastic -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)?  How much randomness is in the environment?
  • Episodic vs. sequential -- Questions about things that happen over time are more difficult than questions about things that happen once.
  • Static vs. dynamic -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.
  • Discrete vs. continuous -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world" spaces.
  • Known vs. unknown -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work.  
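
To make this list concrete, here is a minimal sketch in Python of one way to put these criteria to work.  To be clear:  this is my own illustration, not anything from Russell and Norvig, and every name in it (QuestionProfile, difficulty, the individual fields) is hypothetical.  It simply counts how many of the eight criteria fall at the hard end of the scale.

from dataclasses import dataclass, fields

@dataclass
class QuestionProfile:
    """One flag per criterion; True means the question sits at the hard end."""
    partially_observable: bool  # the subject is hidden or partially hidden
    multi_agent: bool           # multiple people or organizations involved
    competitive: bool           # someone is actively concealing or deceiving
    stochastic: bool            # a large degree of randomness or uncertainty
    sequential: bool            # unfolds over time rather than in one episode
    dynamic: bool               # the environment keeps moving as you work
    continuous: bool            # an unbounded, "open world" space
    unknown: bool               # you don't know how anything works

def difficulty(profile: QuestionProfile) -> int:
    """Crude 0-8 score: one point per criterion at the hard end."""
    return sum(getattr(profile, f.name) for f in fields(profile))

# Example: a meaningful question about something outside your control,
# such as "What will this adversary do next year?"
adversary_question = QuestionProfile(
    partially_observable=True, multi_agent=True, competitive=True,
    stochastic=True, sequential=True, dynamic=True,
    continuous=True, unknown=False,
)
print(difficulty(adversary_question))  # prints 7 -- seven of eight at the hard end

A real scorer would no doubt weight the criteria rather than count them equally, but even this crude version is enough to make the next point.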
Why is this important to questions about the future?  Two reasons.  First, it is worth noting that most questions about the future, particularly those about things that are outside our control, fall at the harder rather than easier end of each of these criteria.  Second, understanding the specific reasons why these questions are hard also gives clues as to how to make them easier to answer.  
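
To continue the hypothetical sketch above (reusing its QuestionProfile, difficulty, and adversary_question names):  narrowing a question moves it toward the easier end of several criteria, and the score drops accordingly.

from dataclasses import replace

# Narrowing "What will this adversary do next year?" to "Which of these
# three known courses of action will the adversary pursue this quarter?"
# bounds the space and shortens the time horizon.
narrowed = replace(
    adversary_question,
    continuous=False,   # three named options, not an open world
    sequential=False,   # one near-term decision, not a long chain of them
)
print(difficulty(narrowed))  # prints 5 -- still hard, but more tractable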

There is one more important reason why questions can be difficult.  It doesn't come from AI research.  It comes from the person (or organization) asking the question.  All too often, people either don't ask the "real" question they want answered or are incredibly unclear in the way they phrase their questions.  If you want some solutions to these problems, I suggest you look here, here and here.  

I was a big kid who grew up in a small town.  I only played Little League ball one year, but I had a .700 batting average.  Even in my best physical condition as an adult, however, I doubt I could have hit a foul tip off a major league pitcher.  Hayden is right.  Meaningful questions about things outside your control are Major League questions, hard sliders on the corner of the plate.  Understanding that, and understanding what makes these questions so challenging, is a necessary precondition to taking the next step--answering them.

Next:  How Should We Think About Answers?  
