Tuesday, May 3, 2011

Evaluating Analytic Methods: What Counts? What Should Count? (Global Intelligence Forum)

About a week ago, I highlighted the upcoming Global Intelligence Forum and stated that one of the things I liked most about this conference was the opportunity, indeed, the inevitability of meeting interesting people working outside one's own area of expertise.

A really good example of this was Dr. Justine Schober, a pediatric urologist, who lectured the crowd last year on the problems the medical profession had in analyzing intersexuality (I'll let you look it up...).

I will be honest with you:  Justine's presentation was not what the crowd was expecting (...to say the least).

As I listened, however, to her description of the mistakes that doctors had made in this field, how bias and tradition had allowed these mistakes to continue for decades, and how much effort it had taken to begin to understand, analyze and rectify these errors, I realized just how much her profession and my profession have in common. 

Evaluating Medical Practice -- Pyramid of Evidence
One of her most useful slides was a simple pyramid (See picture to the right) that highlighted the kinds of evidence doctors use to validate their methods and approaches to various diseases and disorders.  Evidence at the bottom of the pyramid is obviously less valuable to doctors than evidence at the top, but all of this evidence counts in one way or another. 

This led me, in turn, to think about how we in intelligence evaluate analytic methods.  There appear to me to be two strong schools of thought.  In the first are such notables as Sherman Kent and other longtime members of the intelligence community who write about how difficult it is to establish "batting averages" for intelligence estimates in general, much less for particular methods.

The other school of thought (of which I am a member) emphasizes rigorous testing of analytic methods under realistic conditions to see which are more likely to improve forecasting accuracy and under which conditions.  The recent National Research Council report, Intelligence Analysis For Tomorrow, seems to strongly support this point of view as well.
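(As a purely illustrative aside: one common way to put a number on "forecasting accuracy" is a Brier score, the mean squared difference between the probability a forecast assigned to an event and what actually happened.  The little sketch below compares two hypothetical methods on five made-up questions; the methods, probabilities, and outcomes are all invented for the example, not drawn from the NRC report or any real study.)

```python
# Minimal sketch: scoring forecasting accuracy with a Brier score.
# All forecasts and outcomes below are invented for illustration only.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0.0-1.0)
    and observed outcomes (0 or 1).  Lower scores are better."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities produced by two methods on the same five questions.
method_a = [0.8, 0.6, 0.9, 0.3, 0.7]
method_b = [0.6, 0.5, 0.7, 0.5, 0.6]
actual   = [1,   1,   1,   0,   1]    # what actually happened

print("Method A Brier score:", brier_score(method_a, actual))  # 0.078
print("Method B Brier score:", brier_score(method_b, actual))  # 0.182
```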

My colleague, Steve Marrin, has often pointed out in our discussions (and probably in print somewhere as well -- he is nothing if not prolific), that this is a false dichotomy, an approach that presents intelligence professionals with only extreme choices and so is not a very useful guide to action.

Justine's chart made me think the same thing.  In short, it seems foolish to focus exclusively on either the top or the bottom of the evidence hierarchy.  What makes more sense is to climb the damn pyramid! 

What do I mean?  Well, first, I think it is important to imagine what such a pyramid might look like for intelligence professionals.  You can take a look at my own first cut at it below.

Evaluating Intelligence Methods -- Pyramid of Evidence
Ideally, we should be able to select an analytic method and then match the relevant evidence, such as it is, with that method. This, in turn, allows us to know how much faith we should put in the method in question and what kind of studies might be most useful in either confirming or denying the value of the method and under what circumstances.

Examined from this perspective, there are many useful and simple kinds of studies that intelligence professionals at all levels and in all areas of the intelligence discipline can do to make a difference in the field.  More importantly, many of these kinds of studies are tailor-made for the growing number of intel studies students in the US and elsewhere. 
