Showing posts with label Rachel Kesselman. Show all posts

Monday, May 19, 2008

Saying One Thing And Doing Another: A Look Back At Nearly 60 Years Of Estimative Language (Original Research)

US News & World Report has an interesting story about the current state of intelligence reform. According to the article, CIA Director Mike Hayden said,

  • "Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments."
(For those of you keeping score at home, Hayden said much the same thing last year during an interview with C-SPAN...)

Frankly, I don't know anyone knowledgeable about the strengths and weaknesses of intelligence who disagrees with this statement. Certitude is impossible. That is what makes the chart below so darn interesting:


The chart is from Rachel Kesselman's recently completed thesis, Verbal Probability Expressions In National Intelligence Estimates: A Comprehensive Analysis Of Trends From The Fifties Through Post 9/11. The chart shows the number of times the word "will" was used, in an estimative sense (e.g., "X will happen"), in the Key Judgments of the 120 National Intelligence Estimates (NIEs), 20 per decade over the last 58 years, that she examined.

In fact, at 717 occurrences, "will" was the single most commonly used estimative word in NIEs, by a very large margin. It was also one of the most consistently used words across the decades (tests Rachel ran showed that the variances across the decades were not statistically significant).
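To make the counting concrete, here is a minimal sketch, in Python, of the kind of word-frequency tally such a content analysis involves. The word list and sample sentences are invented for illustration; they are not Rachel's actual coding scheme.

```python
# Tally estimative word usage in a set of key-judgment sentences.
# The word list and sample sentences below are illustrative only.
from collections import Counter
import re

ESTIMATIVE_WORDS = {"will", "probably", "likely", "unlikely", "may"}

def count_estimative_words(key_judgments):
    """Count occurrences of each estimative word across the sentences."""
    counts = Counter()
    for sentence in key_judgments:
        for token in re.findall(r"[a-z]+", sentence.lower()):
            if token in ESTIMATIVE_WORDS:
                counts[token] += 1
    return counts

# Invented examples of key-judgment language
sample = [
    "The regime will continue to pursue its program.",
    "Negotiations will probably stall, and sanctions are likely to tighten.",
]
counts = count_estimative_words(sample)
```

A real study would also have to separate estimative uses of "will" from merely grammatical ones ("the committee will meet on Tuesday"), which is a coding judgment no simple tokenizer can make.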

So...if certitude is impossible, why does the Intelligence Community use "will" -- a word that reeks of certitude -- so often in its estimates? Such a result is absolutely inconsistent with statements, such as Hayden's above, made by virtually everyone who has ever jumped up to defend intelligence's predictive track record.

This was only one of the many fascinating results that came out of Rachel's exhaustive study of the words that analysts have used over the years to verbally express probabilities.

Rachel's lit review, for example, makes for very interesting reading. She has done a thorough search not only of the intelligence literature but also of the business, linguistics, and other literatures to find out how other disciplines have dealt with the problem of "What do we mean when we say something is 'likely'?" She uncovered, for example, that in medicine, words of estimative probability such as "likely", "remote" and "probably" have taken on more or less fixed meanings, due primarily to outside intervention or, as she put it, "legal ramifications". Her comparative analysis of the results and approaches taken by these other disciplines is required reading for anyone in the Intelligence Community trying to understand how verbal expressions of probability are actually interpreted.

Another of my favorite charts is the one below:


This chart examines the use of the NIC's nine currently "approved" words of estimative probability (see page 5 of this document for additional discussion) across the decades. The NIC's list only became final in the last several years, so it would be arguable whether these nine words really capture the breadth of estimative word usage across the decades. The chart, however, makes it crystal clear that the Intelligence Community has relied on just two of them, "probably" and "likely", to express its estimates of probability for the last 60 years. All other words on the list are used rarely or not at all.

Based on her research of what works and what doesn't and which words seem to have the most consistent meanings to users, Rachel even offers her own list of estimative words along with their associated probabilities:


Rachel's work tracks well with my own examination of word usage in recent NIEs and with some of the findings in Mike Lyden's thesis on Accelerated Analysis, but her thesis really stands on its own, and my brief description and summary of some of the highlights does not do it justice. It is a first-of-its-kind, longitudinal study of estimative word usage by the Intelligence Community and has contributed significantly to my own understanding of where the Intelligence Community has been over the last 58 years. I think readers of this blog will be more than a little interested in her results and recommendations as well.

Related Posts:
The Revolution Begins On Page Five...
Accelerated Analysis: A New And Promising Intelligence Process
What Do Words Of Estimative Probability Mean?

Wednesday, March 26, 2008

Off To The International Studies Association Meeting!

I am eagerly awaiting my flight to the ISA Conference out in San Francisco (nothing like a 0500 departure followed by a long layover in Detroit before taking a 5 hour flight to the west coast...). My students get time off for good behavior and I get to present my paper on "A Wiki Is Like A Room..." (Saturday, 1545 if you are attending).

Two of my former students (and thesis advisees) will also be presenting. Both have done some excellent research that is well worth hearing about. Josh Peterson did a good bit of research to determine the appropriate elements of analytic confidence for intelligence and then ran an experiment to test his hypotheses. Rachel Kesselman did a multi-decade content analysis of the Key Judgments from dozens of NIEs to determine if there were significant changes in the ways the NIC has been articulating its intelligence judgments over time. Both papers are available in the paper archive at the ISA Conference, but they really don't do the theses (or the research) justice. If you are interested in what you see in the papers or in their presentations (Thursday, 1545 for both), do not hesitate to contact them directly.

I will be blogging again once I get to the conference. Until then I will leave you with this Jonathan Coulton song that just barely begins to capture the unspeakable horror that is modern air travel, Skymall:




Thursday, February 28, 2008

Part 2 -- To Kent And Beyond (What Do Words Of Estimative Probability Mean?)

Part 1 -- Introduction

The discussion of Words of Estimative Probability (WEPs) starts with Sherman Kent’s seminal essay on the topic but hardly ends there. Linguistics experts have done a large number of studies on what they refer to (among other things) as “verbal expressions of probability”, “verbally expressed uncertainties” or “verbal probability expressions”. Others, in the fields of finance, health (Thanks, Rob!) and meteorology have also wrestled with this question.

I am advising one of our graduate students, Rachel Kesselman, on her thesis which will address all these literatures at some length. She is scheduled to present her preliminary findings at the ISA conference at the end of March and will likely complete her thesis (which focuses on the historical use of WEPs in National Intelligence Estimates) sometime in May or June. I won’t steal her thunder, then, but suffice it to say that this is a well studied topic outside the IC.

Within the IC, though, there appear to be only a limited number of studies on the topic. Steve Rieber presented his own paper on the meaning of WEPs a couple of years ago at the ISA conference. At the time, he cited only two studies as major research findings within the realm of intelligence analysis: one in Dick Heuer's classic, The Psychology Of Intelligence Analysis, and one (at least part of the basis for Rieber's paper) from a study of Kent School analysts. In the study cited by Heuer, analysts gave a single numerical probability for each word. For example, one analyst might claim that the word "likely" suggests a 75% probability while another might claim that it suggests only a 60% probability. Kent School analysts, on the other hand, were asked to give a range of values for each word. The charts showing both results are below (Heuer's is on top and Rieber's is on bottom).



The conclusion from both studies was that the level of agreement was rough, to say the least. There was a distinct difference between words at either end of the spectrum (such as “highly unlikely” and “highly likely”) but differences between words that were closer together in meaning (such as “probably” and “likely”) hardly seemed to be differences at all.
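One crude way to quantify the kind of (dis)agreement these studies found is to compare, for each word, the spread of the analysts' numerical assignments, and to check whether two words' assignment ranges overlap. The survey numbers below are invented for illustration; they are not data from Heuer or the Kent School study.

```python
# Quantify inter-analyst (dis)agreement on estimative words.
# All probability assignments (in percent) are invented for illustration.
def spread(values):
    """Range (max - min) of the analysts' assignments for one word."""
    return max(values) - min(values)

def ranges_overlap(a, b):
    """True if the two words' assignment ranges overlap at all."""
    return min(a) <= max(b) and min(b) <= max(a)

likely = [60, 65, 70, 75, 80]     # hypothetical assignments for "likely"
probably = [55, 60, 70, 75]       # hypothetical assignments for "probably"
highly_unlikely = [2, 5, 10]      # hypothetical assignments for "highly unlikely"
```

On these invented numbers, "likely" and "probably" overlap almost completely while "likely" and "highly unlikely" do not, mirroring the finding that only words at opposite ends of the spectrum are reliably distinguished.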

Other writers have tried to establish statistical meanings for the words by simply declaring that certain words carry certain probabilistic meanings. Kent's own attempt fell much along these lines, as does the recent attempt (Thanks, Ted!) by the authors of Joint Publication 2-0, "Joint Intelligence", Appendix A (published 22 JUN 07). The fundamental problem with dictating these intervals is that it ignores the considerable evidence (including the two studies cited above) suggesting that people don't think about these words in such rigid ways. (The problems with the Joint Pub run even deeper: it unnecessarily confuses the ideas of probability and confidence and is, as a consequence, 180 degrees out from what the National Intelligence Council was promulgating at approximately the same time! All this argues, I might add, for more research into intelligence theory and, in the interim, for some standardized estimative language that reflects the current best practice.)
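A dictated scale of the sort Kent proposed amounts to nothing more than a lookup table pairing each phrase with a probability interval. The sketch below is illustrative only; the intervals loosely approximate Kent's proposed scale and are not an IC standard or Rachel's recommended list.

```python
# A Kent-style lookup table: each estimative phrase maps to a probability
# interval (low, high). Values loosely approximate Kent's proposed scale
# and are illustrative only.
KENT_STYLE_SCALE = {
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.13),
}

def range_for(phrase):
    """Return the probability interval dictated for an estimative phrase."""
    return KENT_STYLE_SCALE[phrase.lower()]
```

The objection in the text applies to any such table with full force: its precision is imposed by fiat, not derived from how readers actually interpret the words.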

What is clear, however, is that decisionmakers want clarity and consistency in the language of intelligence estimates. One of our former grad students, Jen Wozny, did a very strong thesis on this subject a number of years ago (available, unfortunately, only through inter-library loan from Mercyhurst's Hammermill Library). She looked at what over 40 decisionmakers, from the national security, business and law enforcement fields, wanted from intelligence. Two of the items that consistently popped up were clarity and consistency in the language that intelligence analysts used to communicate the results of their analysis. Peter Butterfield, in a comment to yesterday's introductory post, indicated similar concerns on the part of his decisionmakers.

Tomorrow -- The Exercise And Its Learning Objectives