Thursday, April 3, 2008

Analytic Confidence Defined...Finally! (Original Research)

One of the great benefits of teaching at Mercyhurst is having the opportunity to work with a bunch of dedicated and very intelligent students. Another advantage is having enough students in the intelligence studies program to be able to conduct meaningful experiments.
Both factors come into play in the newly published thesis, Appropriate Factors To Consider When Assessing Analytic Confidence In Intelligence Analysis, by one of our grad students, Josh Peterson (click here to download the full text; it will also be permanently available in the Mercyhurst Student Projects link list in the right-hand column of this blog). Josh presented some of his research and findings at the recently completed ISA Conference, but the full study is presented here for the first time.
I have written about the problems with the concept of analytic confidence before. Some analysts interpret analytic confidence psychologically (i.e., how the analyst feels about his or her analysis), while others use it estimatively (e.g., "I am confident that X will happen"). I have also argued that neither makes much sense in the context of modern best practices for intelligence analysis.
Analytic confidence really has to do with how well calibrated a particular estimate is. Both an experienced analyst and a beginner might say that "X is likely," but one would hope (vainly, perhaps) that the expert would have a tighter shot group around the target than the newbie. In other words, a well-calibrated analyst's stated likelihoods match how often those calls actually come true.
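To make the idea of calibration concrete, here is a minimal sketch, in Python with invented numbers (my illustration, not anything from Josh's thesis), of how calibration can be checked after the fact: bucket estimates by the nominal probability the analyst attached and compare that probability to the observed hit rate.

```python
# Minimal calibration check (hypothetical data for illustration only).
# A well-calibrated analyst's "likely" calls (say, a nominal 70%) should
# come true about 70% of the time.
from collections import defaultdict

# (nominal probability attached by the analyst, whether the event occurred)
estimates = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),  # "likely" calls
    (0.3, False), (0.3, False), (0.3, True),              # "unlikely" calls
]

def calibration_report(estimates):
    """Compare each nominal probability with its observed hit rate."""
    buckets = defaultdict(list)
    for prob, occurred in estimates:
        buckets[prob].append(occurred)
    for prob, outcomes in sorted(buckets.items()):
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"nominal {prob:.0%} -> observed {hit_rate:.0%} "
              f"over {len(outcomes)} estimates")

calibration_report(estimates)
```

The expert's "tighter shot group" shows up here as a smaller gap between the nominal and observed rates.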
Josh takes this as his starting point and asks, "What, then, are the relevant, legitimate elements of analytic confidence? What should analysts consider when they are asked to state how confident they are in their analysis?" Through a good bit of research (laid out in his lit review), he identified seven elements that seem to legitimately underpin the concept of analytic confidence: source reliability, source corroboration, use of a structured method to do the analysis, analyst level of expertise, amount of collaboration between analysts, task complexity, and time pressure.
Josh then created similar scenarios in which, in one case, all of the elements mentioned above were extremely negative (low source reliability, high time pressure, etc.) and, in another case, extremely positive (high source reliability, use of a structured method, etc.). He also established a third (control) group to help validate the experiment.
Josh used the students here at Mercyhurst as subjects for the study. Our students get a good bit of real-world experience in doing analysis, both in our classes and in the internships and contract work we do, so I think they are good proxies for entry-level analysts in the Intelligence Community.
Josh found that students could accurately distinguish high-confidence from low-confidence scenarios, but that they were doing so largely through intuition. He suggested that we update our curriculum to better teach our students the elements that should legitimately raise or lower an analyst's confidence in his or her work (a suggestion we have already adopted).
Finally, Josh combined his results and his research into a rubric (see below) that analysts can use to help score their confidence. He did not have time to test this rubric, and the weighting in it represents his interpretation of the relative importance of each factor based on his reading of the literature. Given how far he had already come, he wisely left these tasks to future researchers.
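The rubric itself is not reproduced in this text, but to give a feel for the general shape of such an instrument, here is a rough sketch assuming a weighted-sum design over the seven elements. The weights and the 1-to-5 scale below are placeholders of my own, not Josh's actual values.

```python
# Sketch of a weighted-sum confidence rubric over the seven elements.
# WEIGHTS and the 1-5 scale are illustrative placeholders, NOT the
# values from the thesis.
WEIGHTS = {
    "source_reliability":    0.20,
    "source_corroboration":  0.20,
    "structured_method":     0.15,
    "analyst_expertise":     0.15,
    "analyst_collaboration": 0.10,
    "task_complexity":       0.10,  # scored so that 5 = low complexity
    "time_pressure":         0.10,  # scored so that 5 = low time pressure
}

def confidence_score(ratings):
    """Weighted average of 1-5 ratings, normalized to a 0-1 confidence score."""
    assert set(ratings) == set(WEIGHTS), "every element must be rated"
    weighted_avg = sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)
    return (weighted_avg - 1) / 4  # map the 1-5 range onto 0-1

# Example: strong sourcing and method, but a complex task under time pressure.
ratings = {
    "source_reliability": 5, "source_corroboration": 4, "structured_method": 5,
    "analyst_expertise": 3, "analyst_collaboration": 4,
    "task_complexity": 2, "time_pressure": 2,
}
print(f"analytic confidence: {confidence_score(ratings):.2f}")  # -> 0.70
```

Testing whether weights like these actually track calibration is exactly the follow-on work Josh leaves to future researchers.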
Analytic confidence is a tough nut to crack. It is hard to explain and even more difficult to research and test. Josh has taken a good first cut at it, though, and I think his work deserves some attention if only as a step upon which others can build.
Posted by Kristan J. Wheaton at 9:54 AM
Labels: analytic confidence, intelligence, intelligence analysis, Josh Peterson, Mercyhurst
3 comments:
Forgot to include a link to the Google doc for downloading:
http://spreadsheets.google.com/ccc?key=p4S0p3JqUa7FhNB6y5r1gYw
Kris
Dated but reliable HUMINT affects analytic confidence, according to DCIA Michael Hayden, who recently appeared on Meet the Press (30 Mar 08). He stressed that the Iraq War intelligence failure was caused in part by the momentum of such dated source reporting. The lack of confirmation through the acquisition of more recent intelligence should have been a red flag for analysts to diminish their confidence level, but this didn't happen. Hayden said, "Even though our recent reporting had been very thin, we still kind of carried the old conclusions forward without frankly holding them up enough to the light in order to see whether or not they were still valid." Hayden added that his analysts have now learned to recalibrate analytic confidence in intelligence estimates in such cases, citing one example in particular where a CIA analyst told him that an estimate was downgraded "because the intelligence on which it was based has aged off."
http://www.msnbc.msn.com/id/21134540/vp/23867579#23867579
I would say that dating (i.e., the age of the info) affects the reliability of HUMINT (and other types of information), which, in turn, affects analytic confidence.