Tuesday, October 18, 2011

RFI: Should Intelligence Analysis Be More Like Competitive Diving?

Quick!  Which is more difficult:  A jackknife or three and a half somersaults from a tuck position? In case you are not familiar with these dives, you can see videos of both below.

Now, here is the more difficult question: How much more difficult is a 3.5 from the tuck than a jackknife?

The answer is about 2.3 times more difficult. How do I know this? Because I checked out the handy diving tables at FINA (the international organization that regulates diving). I'm no expert, but my reading of the tables says that a 3.5 from the tuck is a dive with a 3.0 degree of difficulty, while a forward dive from the pike position (a jackknife?) is a 1.3 point dive.

Note that the degree of difficulty is simply a multiplier for the actual score of the dive. It is theoretically possible that a perfect jackknife would beat a lousy 3.5 somersault.
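The arithmetic behind that claim is simple enough to sketch. This is a simplified version of diving scoring (real FINA scoring drops the highest and lowest judge awards before summing; here the awards are just averaged), with the judge scores invented for illustration:

```python
# Simplified sketch: a dive's score is its execution score multiplied
# by the dive's degree of difficulty (DD). Judge-score handling is
# simplified; actual FINA rules drop high/low awards before summing.

def dive_score(judge_awards, difficulty):
    """Average the judges' awards (0-10 each) and multiply by the DD."""
    execution = sum(judge_awards) / len(judge_awards)
    return execution * difficulty

# A near-perfect jackknife (DD 1.3) can outscore a lousy 3.5
# somersault (DD 3.0):
jackknife = dive_score([9.5, 9.5, 9.5], 1.3)   # 12.35
somersault = dive_score([4.0, 4.0, 4.0], 3.0)  # 12.0
```

Run the numbers and the point above holds: 9.5 × 1.3 = 12.35 beats 4.0 × 3.0 = 12.0.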

Intelligence, right now, is all about scoring the dive. Degree of difficulty? Not so much. 

I am hoping to change that...

We spend a good bit of time in intelligence talking about forecasting accuracy and we should.  Saying accurate things about the future is arguably much more valuable to decisionmakers than saying accurate things about the present or past.  It is also inherently more difficult.

Even when we are trying to say accurate things about the future, though, some questions are easier to answer than others.  Quick!  Which is more difficult to answer:  Is there likely to be a war somewhere in the Middle East in the next 100 years or is there likely to be a war between Israel and Egypt within the next 24 months?  I am no Middle East expert but it seems to me that the first question is much easier than the second.  I am guessing that most readers of this blog would say the same thing.

Why?   What are the essential elements of a question that make it obviously more or less difficult to answer?  How do we generalize these criteria across all questions?

I am not the only person to recognize the inherent difficulties in different kinds of questions.  Michael Hayden, the former Director of the CIA and NSA, likes to tell this story:

"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'  I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate."

Note that Hayden highlighted the degree of difficulty of the questions (not the difficulty of obtaining the information or the complications associated with supporting political appointees or the lack of area experts or anything else) as the reason for more realistic expectations for the intelligence community's analysts.

So...if degree of question difficulty is the missing half of the "evaluating intelligence" equation, shouldn't someone be working on a diving-like degree of difficulty table for intel analysts?
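If such a table existed, intel scoring could borrow diving's arithmetic directly. A purely hypothetical sketch, where both the question difficulties and the accuracy ratings are invented for illustration (no such table exists yet; building one is the point of the research described below):

```python
# Hypothetical: weight a forecast's accuracy score by the question's
# degree of difficulty, exactly as diving weights execution by DD.
# The difficulty values below are invented for illustration only.

QUESTION_DIFFICULTY = {
    "war somewhere in the Middle East in 100 years": 1.0,
    "war between Israel and Egypt within 24 months": 2.5,
}

def weighted_score(accuracy, question):
    """accuracy: a 0-10 rating of the forecast, multiplied by the
    question's (hypothetical) degree of difficulty."""
    return accuracy * QUESTION_DIFFICULTY[question]

# A mediocre answer to the hard question outscores a near-perfect
# answer to the easy one:
easy = weighted_score(9.0, "war somewhere in the Middle East in 100 years")
hard = weighted_score(6.0, "war between Israel and Egypt within 24 months")
```

Here 6.0 × 2.5 = 15.0 beats 9.0 × 1.0 = 9.0, which is the whole argument in two lines: without the multiplier, the analyst who only answers easy questions always looks better.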

That is precisely what I, along with one of our top graduate students, Brian Manning, have set out to do this year.  This research question piqued our interest primarily because of our involvement in the DAGGRE Research Project (more on that soon). 

In that project, we are asking questions (lots of them) that all have to be resolvable.  That is, they all have to have an answer eventually ("Will Moammar Ghaddafi be President of Libya after 31 DEC 2011?" is a resolvable question -- he either will or he won't be president after that date). 

My concern was that this is not the way that most questions are actually asked by the decisionmakers that intel typically supports.  For example, I would expect that the Ghaddafi question would come at me in the form of "So, what is going to happen with Ghaddafi?"  A very different question and, intuitively, much more difficult to answer.

So far our research has turned up some interesting answers from the fields of linguistics, artificial intelligence and, of all places, marketing. We expect to find interesting answers in other fields (like philosophy) but have not yet. 

Our goal is to sort through this research and figure out if any of the existing answers to this "question about questions" make sense for intel professionals.  Alternatively, we might take elements from each answer and kludge them together into some steampunk-looking question-difficulty generator.  We just don't know at this point. 

What we are looking for is good ideas, in general, and, in particular, any real research into how to rank questions for difficulty.

The comments section is now open!