Sunday, January 25, 2009

Evaluating Intelligence (Original Research)

(This is another in a series of posts that I refer to as “experimental scholarship” -- that is, using the medium of the internet and the vehicle of this blog to put my research online for more or less real-time peer review. Earlier examples of this genre include: A Wiki Is Like A Room..., The Revolution Begins On Page 5, What Is Intelligence? and What Do Words Of Estimative Probability Mean?.

In addition, astute readers will note that I have previously discussed some of what I write here in other places, most notably in an article written with my long-time collaborator, Diane Chido, for Competitive Intelligence Magazine and in a chapter of our book on Structured Analysis Of Competing Hypotheses (written with Diane, Katrina Altman, Rick Seward and Jim Kelly). Diane and the others clearly deserve full credit for their contribution to this current iteration of my thinking on this topic.)


Evaluating intelligence is tricky.

Really tricky.

Sherman Kent, one of the foremost early thinkers regarding the analytic process in the US national security intelligence community, wrote in 1976, “Few things are asked the estimator more often than ‘How good is your batting average?’ No question could be more legitimate--and none could be harder to answer.” So difficult was the question that Kent reports not only the failure of a three-year effort in the 1950s to establish the validity of various National Intelligence Estimates but also the immense relief among the analysts in the Office of National Estimates (forerunner of the National Intelligence Council) when the CIA “let the enterprise peter out.”

Unfortunately for intelligence professionals, the decisionmakers that intelligence supports have no such difficulty evaluating the intelligence they receive. They routinely and publicly find intelligence to be “wrong” or lacking in some significant respect. Abbot Smith, writing for Studies In Intelligence in 1969, cataloged many of these errors in On The Accuracy Of National Intelligence Estimates. The list of failures at the time included the development of the Soviet H-bomb, the Soviet invasions of Hungary and Czechoslovakia, the Cuban Missile Crisis and the Missile Gap. The Tet Offensive, the collapse of the Soviet Union and the Weapons of Mass Destruction fiasco in Iraq would later be added to the list of widely recognized (at least by decisionmakers) “intelligence failures”.

Nor was the US the only intelligence community to suffer such indignities. The Soviets had their Operation RYAN, the Israelis their Yom Kippur War and the British their Falklands War. In each case, after the fact, senior government officials, the press and ordinary citizens alike pinned the black rose of failure on their respective intelligence communities.

To be honest, in some cases the intelligence organization in question deserved the criticism but, in many cases, it did not -- or at least not the full measure of fault it received. However, whether the blame was earned or not, in the aftermath of each of these cases, commissions were duly summoned, investigations into the causes of the failure conducted, recommendations made and changes, to one degree or another, ratified regarding the way intelligence was to be done in the future.

While much of the record is still out of the public eye, I suspect it is safe to say that intelligence successes rarely received such lavish attention.

Why do intelligence professionals find intelligence so difficult, indeed impossible, to evaluate while decisionmakers do so routinely? Is there a practical model for thinking about the problem of evaluating intelligence? What are the logical consequences for both intelligence professionals and decisionmakers that derive from this model? Finally, is there a way to test the model using real world data?

I intend to attempt to answer all of these questions, but first I need to tell you a story…

Tomorrow: A Tale Of Two Weathermen
