Tuesday, January 27, 2009

Part 3 -- A Model For Evaluating Intelligence (Evaluating Intelligence)

Part 1 -- Introduction
Part 2 -- A Tale Of Two Weathermen

Clearly there is a need for a more sophisticated model for evaluating intelligence – one that takes into consideration not only the results but also the means by which the analyst arrived at those results. It is not enough to get the answer right; analysts must also “show their work” in order to demonstrate that they were not merely lucky.

For the purpose of this series of posts, I will refer to the results of the analysis -- the analytic estimate under consideration -- as the product of the analysis. I will call the means by which the analyst arrived at that estimate the process. Analysts, therefore, can be largely correct (more on this “largely” later) in their analytic estimate, in which case I will define the product as true. Likewise, analysts can be largely incorrect in their analytic estimate, in which case I will label the product false.

Just as important, however, is the process. If an analyst uses a flawed, invalid process (much as the bad weatherman used a rule proven to be wrong most of the time), then I would say the process is false. Likewise, if the analyst used a generally valid process – one that produced reasonably reliable results over time – then I would say the process is true.

Note that these two spectra are independent of one another. It is entirely possible to have a true process and a false product (consider the story of the good weatherman). It is also possible to have a false process and a true product (as in the story of the bad weatherman).

In fact, it is perhaps convenient to think of this model for evaluating intelligence as a small matrix, such as the one below:
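
                     TRUE PRODUCT             FALSE PRODUCT
TRUE PROCESS         sound analysis           the good weatherman
FALSE PROCESS        the bad weatherman       reading goat entrails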

There are a number of examples of each of these four basic combinations. For instance, consider the use of intelligence preparation of the battlefield in the execution of combat operations in the Mideast and elsewhere. Both the product and the process by which it was derived have proven to be accurate. On the other hand, statistical sampling of voters (polling) is unquestionably a true process but has, upon occasion, generated spectacularly incorrect results (see Truman v. Dewey…).

False processes abound. Reading horoscopes, tea leaves, and goat entrails are all false processes that, every once in a while, turn out to be amazingly accurate. Far more often, however, these same methods are false in both process and product.

What are the consequences of this evaluative model? In the first place, it makes no sense to talk about intelligence being simply “right” or “wrong”. Such an appraisal is overly simplistic and omits critical evaluative information. Evaluators should be able to specify whether they are talking about the intelligence product, the process, or both. Only at this level of detail does any evaluation of intelligence begin to make sense.

Second, with respect to which is more important, product or process, it is clear that process should receive the greater share of attention. Errors in a single product might well result in poor decisions, but they are generally easy to identify in retrospect if the process is valid. Errors in the analytic process, on the other hand, are much more difficult to detect and virtually guarantee a string of failures over time, with only luck to save the unwitting analyst. This truism is particularly difficult for an angry public or a congressman on the warpath to remember in the wake of a costly “intelligence failure”. That makes it all the more important to embed this principle deeply in any system for evaluating intelligence from the start, when, presumably, heads are cooler.

Finally, and most importantly, it makes no sense to evaluate intelligence in isolation – to examine only a single case in order to determine how well an intelligence organization is functioning. Only by examining both product and process systematically, over a series of cases, does a pattern emerge that allows appropriate corrective action – if any is necessary at all – to be taken.
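To make this last point concrete, the tallying of cases into the four quadrants of the matrix can be sketched in a few lines of Python. This is only an illustration – the case data, field names, and scoring below are hypothetical, not part of any actual evaluation system:

```python
from collections import Counter

# Each case is scored on two independent dimensions: whether the
# product (the estimate itself) was largely true, and whether the
# process that produced it was largely valid. The case data below
# is purely illustrative.
cases = [
    {"name": "Case A", "process_true": True,  "product_true": True},
    {"name": "Case B", "process_true": True,  "product_true": False},
    {"name": "Case C", "process_true": False, "product_true": True},
    {"name": "Case D", "process_true": False, "product_true": False},
    {"name": "Case E", "process_true": False, "product_true": False},
]

# Tally every case into one of the four quadrants of the matrix.
quadrants = Counter(
    (case["process_true"], case["product_true"]) for case in cases
)

for (process_ok, product_ok), count in sorted(quadrants.items(), reverse=True):
    label = (
        f"process {'true' if process_ok else 'false'}, "
        f"product {'true' if product_ok else 'false'}"
    )
    print(f"{label}: {count} case(s)")

# A cluster of cases in the 'process false' row signals a need for
# corrective action, no matter how often the product was lucky.
```

A single lucky hit or unlucky miss says little; it is the distribution across the rows of the matrix, built up over many cases, that tells an evaluator whether the analytic process itself needs fixing.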

Tomorrow: The Problems With Evaluating Product And Process
