Now, here is the more difficult question: How much more difficult is a 3.5 from the tuck than a jackknife?
The answer is about 2.3 times more difficult. How do I know this? Because I checked out the handy degree-of-difficulty tables at FINA (the international organization that regulates diving). I'm no expert, but my reading of the tables says that a 3.5 somersault from the tuck is a dive with a degree of difficulty of 3.0, while a forward dive in the pike position (a jackknife?) carries a degree of difficulty of 1.3 -- a ratio of roughly 2.3.
Note that the degree of difficulty is simply a multiplier applied to the raw score of the dive. It is therefore theoretically possible for a perfect jackknife to beat a lousy 3.5 somersault (for example, a perfect 10.0 x 1.3 = 13.0 edges out a lousy 4.0 x 3.0 = 12.0).
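To make the arithmetic concrete, here is a minimal sketch of that scoring rule. The raw scores (10.0 and 4.0) are invented for illustration, and the single-score model is a simplification (real FINA scoring combines several judges' awards before applying the multiplier); only the two degrees of difficulty come from the tables mentioned above.

```python
# A minimal sketch of "degree of difficulty as a multiplier."
# The raw scores are invented for illustration; the degrees of
# difficulty (1.3 and 3.0) come from the FINA tables discussed above.
# Real FINA scoring combines several judges' awards before multiplying,
# which this sketch skips.

def dive_score(raw_score: float, degree_of_difficulty: float) -> float:
    return raw_score * degree_of_difficulty

perfect_jackknife = dive_score(10.0, 1.3)  # 13.0
lousy_somersault = dive_score(4.0, 3.0)    # 12.0

print(perfect_jackknife > lousy_somersault)  # True -- the easier dive wins
```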
Intelligence, right now, is all about scoring the dive. Degree of difficulty? Not so much.
I am hoping to change that...
We spend a good bit of time in intelligence talking about forecasting accuracy and we should. Saying accurate things about the future is arguably much more valuable to decisionmakers than saying accurate things about the present or past. It is also inherently more difficult.
Even when we are trying to say accurate things about the future, though, some questions are easier to answer than others. Quick! Which is more difficult to answer: Is there likely to be a war somewhere in the Middle East in the next 100 years or is there likely to be a war between Israel and Egypt within the next 24 months? I am no Middle East expert but it seems to me that the first question is much easier than the second. I am guessing that most readers of this blog would say the same thing.
Why? What are the essential elements of a question that make it obviously more or less difficult to answer? How do we generalize these criteria across all questions?
I am not the only person to recognize the inherent difficulties in different kinds of questions. Michael Hayden, the former Director of the CIA and NSA, likes to tell this story:
"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?' I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate."Note that Hayden highlighted the degree of difficulty of the questions (not the difficulty of obtaining the information or the complications associated with supporting political appointees or the lack of area experts or anything else) as the reason for more realistic expectations for the intelligence community's analysts.
So...if degree of question difficulty is the missing half of the "evaluating intelligence" equation, shouldn't someone be working on a diving-like degree of difficulty table for intel analysts?
That is precisely what I, along with one of our top graduate students, Brian Manning, have set out to do this year. This research question piqued our interest primarily because of our involvement in the DAGGRE Research Project (more on that soon).
In that project, we are asking questions (lots of them) that all have to be resolvable. That is, they all have to have an answer eventually ("Will Moammar Ghaddafi be President of Libya after 31 DEC 2011?" is a resolvable question -- he either will or he won't be president after that date).
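For the code-minded, here is a rough sketch of what "resolvable" looks like as a data structure. The class and field names are hypothetical illustrations, not DAGGRE's actual schema:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ResolvableQuestion:
    """A question that must eventually have a definite yes/no answer."""
    text: str
    resolution_date: date           # the date by which the answer is knowable
    outcome: Optional[bool] = None  # unknown until the question resolves

    def resolve(self, outcome: bool) -> None:
        self.outcome = outcome

q = ResolvableQuestion(
    text="Will Moammar Ghaddafi be President of Libya after 31 DEC 2011?",
    resolution_date=date(2011, 12, 31),
)
q.resolve(False)  # he either will or he won't be president after that date
```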
My concern was that this is not the way that most questions are actually asked by the decisionmakers that intel typically supports. For example, I would expect that the Ghaddafi question would come at me in the form of "So, what is going to happen with Ghaddafi?" A very different question and, intuitively, much more difficult to answer.
So far our research has turned up some interesting answers from the fields of linguistics, artificial intelligence and, of all places, marketing. We expect to find interesting answers in other fields (like philosophy) but have not yet.
Our goal is to sort through this research and figure out if any of the existing answers to this "question about questions" make sense for intel professionals. Alternatively, we might take elements from each answer and kludge them together into some steampunk-looking difficulty-of-question generator. We just don't know at this point.
What we are looking for is good ideas, in general, and, in particular, any real research into how to rank questions for difficulty.
The comments section is now open!
I spent a great deal of time in the 1990s seeking to answer my commander's questions regarding "how we measure the value of intelligence." I recall wading through the various factors common to the doctrine and literature -- timely, accurate, complete -- before settling on the conclusion that the key element in the evaluation model was the "quality of the question" and that the decisionmaker asking the question was our final standard of measure. Good luck on your efforts to divine the taxonomy of the question -- it's there. Senior decisionmakers know how to ask the right questions with the right variables weighted and in place. Check out cognitive psychologists Klein & Associates' work regarding intuition in decisionmaking. I think you might discover the secret to asking good questions rests with our most successful decisionmakers and the mental algorithms they employ in shaping their requests.
Reviewing the accuracy of intelligence estimates--like the infamous "Iran is not pursuing nuclear weapons" NIE--should lead to a meaningful quantitative measure of analytical output. That might not be "difficulty," but it may be more useful than difficulty.
Something like "Accurate predictions/Total Predictions x 100" would give you an accuracy average.
Kent Clizbe
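A minimal sketch of the accuracy average Kent describes, with an invented prediction record for illustration:

```python
# Accuracy average = accurate predictions / total predictions x 100.
# The prediction record below is invented for illustration.

def accuracy_average(record: list[tuple[bool, bool]]) -> float:
    """Each entry is a (predicted, actual) pair of yes/no outcomes."""
    accurate = sum(1 for predicted, actual in record if predicted == actual)
    return accurate / len(record) * 100

record = [(True, True), (False, True), (True, True), (False, False)]
print(accuracy_average(record))  # 75.0
```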
An important weight is "distance from target." Recently, law enforcement had a standoff with a woman who had access to multiple firearms. Her husband was interviewed by the intelligence detective to try to figure out how dangerous she was to officers. The question was: "On a scale of one to ten, how likely is your wife to hurt police officers?" The husband responded, "Two." The intelligence officer then returned to the incident commander with the information, saying, "If she is a two, she is a ten."
The point of this is that the evaluation model needs to start with "distance from target." The decisionmaker who wants to know "Is the Muslim Brotherhood going to be in a position of power in three years?" has a much more clinical approach and more time to react to information than the field commander who asks "What is the probability that individuals have booby-trapped the facility we are going to enter?"
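One way to picture that weight in code. The weighting function and the numbers below are entirely invented for illustration; this is not a proposed model:

```python
# A purely illustrative sketch of "distance from target": the less time
# the decisionmaker has to react, the more heavily the question weighs.
# The weighting function and the numbers are invented, not a real model.

def weighted_difficulty(base_difficulty: float, hours_to_react: float) -> float:
    urgency = 1.0 / max(hours_to_react, 1.0)  # closer to the target -> more urgent
    return base_difficulty * (1.0 + urgency)

strategic = weighted_difficulty(2.0, hours_to_react=3 * 365 * 24)  # ~2.0
tactical = weighted_difficulty(2.0, hours_to_react=1.0)            # 4.0
```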
To all,

Thanks for the input! If you have any other ideas or leads, please leave a comment.
Kris
"Will Moammar Ghaddafi be President of Libya after 31 DEC 2011?"
A resolvable question that's been resolved well before the date in question. :)
Take a look at IARPA BAA-11-11, Open Source Indicators. They lay out a methodology for scoring automatically-generated forecasts of "interesting" events. Might be a useful input to your process.