Monday, May 19, 2008

Saying One Thing And Doing Another: A Look Back At Nearly 60 Years Of Estimative Language (Original Research)

US News and World Report has an interesting story about the current state of intelligence reform. According to the article, CIA Director Mike Hayden said:

  • "Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments."
(For those of you keeping score at home, Hayden said much the same thing last year during an interview with C-SPAN...)

Frankly, I don't know anyone knowledgeable about the strengths and weaknesses of intelligence who doesn't agree with this statement. Certitude is impossible. That is what makes the chart below so darn interesting:


The chart is from Rachel Kesselman's recently completed thesis, Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post 9/11. The chart shows the number of times the word "will" was used in an estimative sense (e.g., "X will happen") in the Key Judgments of the 120 National Intelligence Estimates (NIEs) she examined -- 20 per decade over the last 58 years.

In fact, at 717 occurrences, "will" was the single most commonly used estimative word in these NIEs, by a very large margin. It was also one of the most consistently used words across the decades (tests Rachel ran showed that the variation in its use from decade to decade was not statistically significant).
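For readers who like to tinker, here is a rough sketch of how this kind of tally might be reproduced on a set of declassified Key Judgments. To be clear, this is my illustration, not Rachel's actual method: the decade labels, the placeholder text, and the chi-square test are all assumptions on my part, and her study also required a human coder to decide whether each "will" was genuinely estimative.

```python
# A minimal sketch, NOT Rachel Kesselman's actual methodology: one way to tally
# an estimative word across Key Judgments grouped by decade and then ask whether
# its usage varies significantly from decade to decade.

import re
from scipy.stats import chi2_contingency

# Hypothetical input: decade label -> concatenated Key Judgments text
key_judgments_by_decade = {
    "1950s": "The Soviet Union will probably continue to ...",
    "1960s": "Communist China will almost certainly seek to ...",
    # ... one entry per decade studied
}

WORD = "will"

def count_word(text: str, word: str) -> int:
    """Naive whole-word count; it cannot tell an estimative 'will' from any other use."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))

counts = {decade: count_word(text, WORD) for decade, text in key_judgments_by_decade.items()}
totals = {decade: len(text.split()) for decade, text in key_judgments_by_decade.items()}
print(counts)

# One way (an assumption on my part, not necessarily the test she ran) to check for
# variation across decades: a chi-square test on occurrences vs. all other words,
# so that longer estimates do not masquerade as heavier usage.
table = [[counts[d], totals[d] - counts[d]] for d in key_judgments_by_decade]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # a large p is consistent with "no significant variation"
```

The only design point worth noting is the normalization against total word count, which keeps longer estimates from looking like heavier users of the word.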

So...if certitude is impossible, why does the Intelligence Community use "will" -- a word that reeks of certitude -- so often in its estimates? Such a result is absolutely inconsistent with statements, such as Hayden's above, made by virtually everyone who has ever jumped up to defend intelligence's predictive track record.

This was only one of the many fascinating results that came out of Rachel's exhaustive study of the words analysts have used over the years to verbally express probabilities.

Rachel's lit review, for example, makes for very interesting reading. She has done a thorough search of not only the intelligence literature but also the business, linguistics, and other literatures in order to find out how other disciplines have dealt with the problem of "What do we mean when we say something is 'likely'?" She uncovered, for instance, that in medicine, words of estimative probability such as "likely," "remote," and "probably" have taken on more or less fixed meanings, due primarily to outside intervention or, as she put it, "legal ramifications." Her comparative analysis of the results and approaches taken by these other disciplines is required reading for anyone in the Intelligence Community trying to understand how verbal expressions of probability are actually interpreted.

Another of my favorite charts is the one below:


This chart examines the use of the NIC's nine currently "approved" words of estimative probability (see page 5 of this document for additional discussion) across the decades. The NIC's list only became final in the last several years, so one could argue that these nine words do not really capture the breadth of estimative word usage across the decades -- or one could, if this chart didn't make it crystal clear that the Intelligence Community has relied on just two words, "probably" and "likely," to express its estimates of probability for nearly 60 years. All other words are used rarely or not at all.

Based on her research into what works, what doesn't, and which words seem to have the most consistent meanings for users, Rachel even offers her own list of estimative words along with their associated probabilities:


Rachel's work tracks well with my own examination of word usage in recent NIEs and with some of the findings in Mike Lyden's thesis on Accelerated Analysis, but her thesis really stands on its own, and my brief description of some of the highlights does not do it justice. It is a first-of-its-kind longitudinal study of estimative word usage by the Intelligence Community, and it has contributed significantly to my own understanding of where the Community has been over the last 58 years. I think readers of this blog will be more than a little interested in her results and recommendations as well.
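As a small illustration of what a fixed mapping between estimative words and probability ranges could look like in practice, here is a minimal sketch. The particular words and numbers below are placeholders I made up for the example, not the values from Rachel's chart; see her thesis for the real list.

```python
# Illustrative only: a hypothetical mapping from estimative words to probability
# ranges. These words and numbers are placeholders, not the thesis's recommendations.

WEP_RANGES = {
    "remote":           (0.00, 0.10),
    "unlikely":         (0.10, 0.40),
    "even chance":      (0.40, 0.60),
    "likely":           (0.60, 0.90),
    "almost certainly": (0.90, 1.00),
}

def word_for_probability(p: float) -> str:
    """Return the estimative word whose range covers the analyst's numeric estimate."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    for word, (low, high) in WEP_RANGES.items():
        if low <= p <= high:
            return word
    raise ValueError(f"no estimative word covers p={p}")

# An analyst who puts the odds at roughly 0.7 would write "likely" -- ideally
# coupled, as argued below, with an explicit statement of analytic confidence.
print(word_for_probability(0.7))
```

The point of working from the number to the word, rather than the other way around, is that it gives the analyst one way to avoid letting "probably" and "likely" do all of the work.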

Related Posts:
The Revolution Begins On Page Five...
Accelerated Analysis: A New And Promising Intelligence Process
What Do Words Of Estimative Probability Mean?

4 comments:

Anonymous said...

Your link to your own examination of word usage in NIEs is broken.

Kristan J. Wheaton said...

Many thanks!

It should be working now.

Kris

Ralph Hitchens said...

Interesting statistics. In my 20 years as an analyst, which included a few years in the 90s contributing to NIEs & even drafting a minor one, I always felt that we were obligated to provide -- as best we could -- clear-cut judgments to our customers. No wishy-washy, on the one hand/on the other hand. Even when issues were complex and the information muddled, we owed them our best guess. I think many if not most analysts I worked with had the same inclination. I'm not saying Hayden is wrong -- this business is full of murky issues with a lot of noise and few, weak signals. But we can go whistling in the dark, can't we?

Kristan J. Wheaton said...

Ralph,

I agree with your characterization of the mission. I, too, think that we are still obligated to provide, as best we can, clear-cut judgments to the decisionmakers we support.

What I think I am talking about is the way in which we do that. Using words of certainty, such as "will," when we know that they are not correct -- that we have no such certainty -- does not seem to me to be the best practice.

To me, the best practice (the way to give decisionmakers our "best guess"), given all of the requirements and constraints on the IC, is to use an appropriate word of estimative probability coupled with a statement of analytic confidence.

Kris