Part 1 -- Introduction
Part 2 -- A Tale Of Two Weathermen
Part 3 -- A Model For Evaluating Intelligence
Part 4 -- The Problem With Evaluating Intelligence Products
Part 5 -- The Problem With Evaluating The Intelligence Process
Part 6 -- The Decisionmaker's Perspective
Part 7 -- The Iraq WMD Estimate And Other Iraq Pre-War Assessments
Part 8 -- Batting Averages
The purpose of this series of posts was not to rationalize away, in a frenzy of legalese, the obvious failings of the Iraq WMD NIE. Under significant time pressure and operating with what they admitted was limited information on key questions, the authors failed to check their assumptions and saw all of the evidence as confirming an existing conceptual framework. (While that framework was shared by virtually everyone else, the authors do not get a free pass on this either. Testing assumptions and understanding the dangers of overly rigid conceptual models is Intel Analysis 101.)
On the other hand, if the focus of inquiry is broadened just a bit to include the two ICAs about Iraq -- completed by at least some of the same people, using many of the same processes -- the picture becomes much brighter. When evaluators consider the three documents together, the analysts track pretty well with historical norms and leadership expectations. As with the good weatherman in Part 2 of this series, it is difficult to see how they got it "wrong".
Moreover, the failure of evaluators to look at intelligence successes as well as intelligence failures, and to examine both for where the analysts were actually good or bad (vs. where the analysts were merely lucky or unlucky), is a recipe for turmoil. Imagine a football coach who only watched game film when the team lost and ignored the lessons from when the team won. This is clearly stupid, but it is very close to what happens to the intelligence community. From the Hoover Commission to today, so-called intelligence failures get investigated while intelligence successes get, well, nothing.
The intelligence community has, in the past, done itself no favors for when the investigations inevitably come, however. The lack of clarity and consistency in the estimative language used in these documents made coming to any sort of conclusion about the veracity of the products or the soundness of the process far more difficult than it needed to be. While I do not expect that other investigators would come to startlingly different conclusions than mine, I would expect there to be areas where we would disagree -- perhaps strongly -- due to different interpretations of the same language. This is not in the intelligence community's interest, as it creates the impression that the analysts are "just guessing".
Finally, there appears to be one more lesson to be learned from an examination of these three documents. It reaches beyond the scope of evaluating intelligence to the heart of what intelligence is and what role it serves in a policy debate.
In the days before the vote to go to war, the Iraq NIE clearly answered the question it had been asked, albeit in a predictable way (so predictable, in fact, that few in Washington bothered to read it). The Iraq ICAs, on the other hand, came out in January 2003, two months before the start of the war. They were generated in response to a request from the Director of Policy Planning at the State Department and were intended, as are all ICAs, for lower-level policymakers. These reports quite accurately -- as it turned out -- predicted the tremendous difficulties should the eventual solution (of the several available to policymakers at the time) to the problem of Saddam Hussein's WMDs be war.
What if all three documents had come out at the same time and had all been NIEs? There does not appear to be, from the record, any reason why they could not have been issued simultaneously. The Senate Subcommittee states on page 2 of its report that there was no special collection involved in the ICAs, that it was "not an issue well-suited to intelligence collection." The report went on to state, "Analysts based their judgments primarily on regional and country expertise, historical evidence and," significantly, in light of this series of posts, "analytic tradecraft." In short, open sources and sound analytic processes. Time was of the essence, of course, but it is clear from the record that the information necessary to write the reports was already in the analysts' heads.
It is hard to imagine that such a trio of documents would not have significantly altered the debate in Washington. The outcome might still have been war, but the ability of policymakers to dodge their fair share of the blame would have been severely limited. In the end, it is perhaps the best practice for intelligence to answer not only those questions it is asked but also those questions it should have been asked.