
Friday, June 6, 2014

Thinking In Parallel: A 21st Century Vision Of The Intelligence Process

(Note: I was recently asked to present a paper on my thoughts about re-defining the intelligence process, and on the implications of that redefinition for education, training and integration across the community, at the US Intelligence Community Geospatial Training Council's (CGTC) conference in Washington, DC.  For those familiar with my earlier work on the intelligence cycle and the damage it is causing, you will find this paper shorter, less about the Cycle itself and more about the alternative I am proposing (and the evidence to support adopting that alternative...).  Enjoy!)


Abstract:  Effective integration and information sharing within the intelligence community is not possible until the fundamental process of intelligence is re-imagined for the 21st Century.  The current model, the Intelligence Cycle, developed during World War II and widely criticized since, has outlived its useful life.  In fact, it has become part of the problem.  This paper abandons that sequential process, which was appropriate for a slower and less information-rich environment, and proposes a more streamlined, parallel process in its place.  Accompanying this new vision of the intelligence process is an analysis of data collected from over 130 real-world intelligence projects conducted using this model of the intelligence process and delivered to decisionmakers in the national security (including GEOINT), law enforcement and business sectors.  Additionally, the training and education implications, as well as the kinds of software and hardware systems necessary to support this new understanding of the process, are discussed.

Part 1 -- Introduction

"We must begin by redefining the traditional linear intelligence cycle, which is more a manifestation of the bureaucratic structure of the intelligence community than a description of the intelligence exploitation process." -- Eliot Jardines, former head of the Open Source Center, in prepared testimony in front of Congress, 2005  
"When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle.  It is really not a very good description of the ways in which the intelligence process works."  Arthur Hulnick, "What's Wrong With The Intelligence Cycle", Strategic Intelligence, Vol. 1 (Loch Johnson, ed), 2007
"Although meant to be little more than a quick schematic presentation, the CIA diagram [of the intelligence cycle] misrepresents some aspects and misses many others." -- Mark Lowenthal, Intelligence:  From Secrets to Policy (2nd Ed.,2003) 
"Over the years, the intelligence cycle has become somewhat of a theological concept:  No one questions its validity.  Yet, when pressed, many intelligence officers admit that the intelligence process, 'really doesn't work that way.'" -- Robert Clark, Intelligence Analysis:  A Target-centric Approach, 2010



Academics have noted it and professionals have confirmed it:  Our current best depiction of the intelligence process, the so-called "intelligence cycle", is fatally flawed.  Moreover, I believe these flaws have become so severe, so grievous, that continued adherence to and promotion of the cycle is actually counterproductive.  In this paper I intend to briefly outline the main flaws in the intelligence cycle, to discuss how the continued use of the cycle hampers, indeed extinguishes, efforts to effectively integrate and share information, and, finally, to suggest an alternative, parallel process that, if adopted, would transform intelligence training and education.

*****

Despite its popularity, the history of the cycle is unclear.  US Army regulations published during WWI identified collection, collation and dissemination of military intelligence as essential duties of what was then called the Military Intelligence Division, but there was no suggestion that these three functions happened in a sequence, much less in a cycle.

By 1926, military intelligence officers were recommending four distinct functions for tactical combat intelligence:  Requirements, collection, "utilization" (i.e. analysis), and dissemination, though, again, there was no explicit mention of an intelligence cycle.

The first direct mention of the intelligence cycle (see image) is from the 1948 book, Intelligence Is For Commanders.  Since that time, the cycle, as a model of how intelligence works, has become pervasive.  A simple Google image search on the term "Intelligence Cycle" quickly gives one a sense of the wide variety of agencies, organizations and businesses that use some variant of the cycle.

The Google image search above highlights the first major criticism of the Intelligence Cycle:  Which one is correct?  In fact, an analysis of a variety of intelligence cycles from both inside and outside the intelligence community reveals significant differences, often within a single organization (see the chart below, gathered from various official websites in 2011).

While there is some consistency ("collection", for example, is mentioned in every variant of the cycle), these disparities have significant training and education implications that will likely manifest themselves as different agencies attempt to impose their own understanding of the process during joint operations.  Different agencies teaching fundamentally different versions of the process will likewise seriously impact the systems designed to support analysts and operators within those agencies.  This, in turn, will likely make cross-agency integration and information sharing more difficult, or even impossible.




The image also highlights the second major problem with the cycle:  Where is the decisionmaker?  None of the versions of the intelligence cycle listed above explicitly include or explain the role of the decisionmaker in the process.  Few, in fact, include a specific feedback or evaluation step.  From the standpoint of a junior professional in a training environment (particularly in a large organization such as the US National Security Intelligence Community where intelligence professionals are often both bureaucratically and geographically distant from the decisionmakers they support), this can create the impression that intelligence is a “self-licking ice-cream cone” – existing primarily for its own pleasure rather than as an important component of a decision support system.

Finally, and most damningly (and as virtually all intelligence professionals know):  "It just doesn't work that way."  The US military's Joint Publication 2-0, Joint Intelligence (page I-5), describes modern intelligence as the antithesis of the sequential process imagined by the Cycle.  Instead, intelligence is clearly described as fast-paced and interactive, with many activities taking place simultaneously (albeit with different levels of emphasis):

"In many situations, various intelligence operations occur almost simultaneously or may be bypassed altogether. For example, a request for imagery requires planning and direction activities but may not involve new collection, processing, or exploitation. In this case, the imagery request could go directly to a production facility where previously collected and exploited imagery is reviewed to determine if it will satisfy the request. Likewise, during processing and exploitation, relevant information may be disseminated directly to the user without first undergoing detailed all-source analysis and intelligence production. Significant unanalyzed operational information and critical intelligence should be simultaneously available to both the commander (for time-sensitive decision-making) and to the all source intelligence analyst (for the production and dissemination of intelligence assessments and estimates). Additionally, the activities within each type of intelligence operation are conducted continuously and in conjunction with activities in each intelligence operation category. For example, intelligence planning (IP) occurs continuously while intelligence collection and production plans are updated as a result of previous requirements being satisfied and new requirements being identified. New requirements are typically identified through analysis and production and prioritized dynamically during the conduct of operations or through joint operation planning.”

The training and education implications of this kind of disconnect between the real world of intelligence and the process as taught in the classroom, between practice and theory, are severe.

At one end of the spectrum, the problem is as simple as a violation of the long-standing military principle of "train as you will fight."  Indeed, the only real question is which approach is more counterproductive:  forcing students of intelligence to learn the Cycle, only to have them realize after graduation, and on their own, that it is unrealistic; or throwing a slide of the Cycle up on the projector only to have an experienced instructor announce, "This is what you have to learn, but this isn't the way it really works."  Both scenarios regularly take place within the training circles of the intelligence community.

At the other end of the spectrum, the damage is more nuanced and systemic.  Specifically, intelligence professionals aren't just undermining their own training; they are miscommunicating to those outside the community as well.  The effects of this may seem manageable, even trivial, to some, but imagine a software engineer trying to design a product to support intelligence operations.  This individual will know nothing but the Cycle, will take it as an accurate description of the process, and will design products accordingly.

In fact, it was the failure of these kinds of software projects to gain traction within the Intelligence Community that led Georgia Tech visual analytics researcher Youn-ah Kang and her advisor, Dr. John Stasko, to undertake an in-depth, longitudinal field study to determine how, exactly, intelligence professionals did what they did.  While all of the results of their study are both interesting and relevant, the key misconception they identified is that "Intelligence analysis is about finding an answer to a problem via a sequential process."  The failure to recognize this misconception earlier contributed, in turn, to the failure of many of the tools they and others had created.  In short, as Kang and Stasko noted, "Many visual analytics tools thus support specific states only (e.g., shoebox and evidence file, evidence marshalling, foraging), and often they do not blend into the entire process of intelligence analysis."

Next:  Part 2 -- The Mercyhurst Model

Tuesday, May 3, 2011

Evaluating Analytic Methods: What Counts? What Should Count? (Global Intelligence Forum)

About a week ago, I highlighted the upcoming Global Intelligence Forum and stated that one of the things I liked most about this conference was the opportunity, indeed, the inevitability of meeting interesting people working outside one's own area of expertise.

A really good example of this was Dr. Justine Schober, a pediatric urologist, who lectured the crowd last year on the problems the medical profession has had in analyzing intersexuality (I'll let you look it up...).

I will be honest with you:  Justine's presentation was not what the crowd was expecting (...to say the least).

As I listened, however, to her description of the mistakes that doctors had made in this field, how bias and tradition had allowed these mistakes to continue for decades, and how much effort it had taken to begin to understand, analyze and rectify these errors, I realized just how much her profession and my profession have in common. 

Evaluating Medical Practice -- Pyramid of Evidence
One of her most useful slides was a simple pyramid (See picture to the right) that highlighted the kinds of evidence doctors use to validate their methods and approaches to various diseases and disorders.  Evidence at the bottom of the pyramid is obviously less valuable to doctors than evidence at the top, but all of this evidence counts in one way or another. 

This led me, in turn, to think about how we in intelligence evaluate analytic methods.  There appear to me to be two strong schools of thought.  In the first are such notables as Sherman Kent and other long-time members of the intelligence community who write about how difficult it is to establish "batting averages" for intelligence estimates in general, much less for particular methods.

The other school of thought (of which I am a member) emphasizes rigorous testing of analytic methods under realistic conditions to see which are more likely to improve forecasting accuracy and under which conditions.  The recent National Research Council report, Intelligence Analysis For Tomorrow, seems to strongly support this point of view as well.

My colleague, Steve Marrin, has often pointed out in our discussions (and probably in print somewhere as well -- he is nothing if not prolific), that this is a false dichotomy, an approach that presents intelligence professionals with only extreme choices and so is not a very useful guide to action.

Justine's chart made me think the same thing.  In short, it seems foolish to focus exclusively on either the top or the bottom of the evidence hierarchy.  What makes more sense is to climb the damn pyramid!

What do I mean?  Well, first, I think it is important to imagine what such a pyramid might look like for intelligence professionals.  You can take a look at my own first cut at it below.

Evaluating Intelligence Methods -- Pyramid of Evidence
Ideally, we should be able to select an analytic method and then match the relevant evidence, such as it is, with that method.  This, in turn, would tell us how much faith we should put in the method in question, what kinds of studies might be most useful in confirming or denying its value, and under what circumstances.
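To make that matching concrete, here is a minimal sketch in Python (the tier labels loosely paraphrase my pyramid, and the example methods and the evidence levels assigned to them are invented placeholders for illustration, not actual findings):

```python
# Tiers ordered from weakest (index 0) to strongest evidence.
# Labels loosely paraphrase the pyramid; the methods and their assigned
# best-evidence levels below are hypothetical placeholders.
EVIDENCE_TIERS = [
    "expert opinion and anecdote",
    "case studies",
    "observational field studies",
    "controlled experiments",
    "systematic reviews of experiments",
]

# Hypothetical: the strongest evidence currently available per method.
BEST_EVIDENCE = {
    "structured brainstorming": "case studies",
    "analysis of competing hypotheses": "controlled experiments",
}

def evidence_rank(method: str) -> int:
    """How far up the pyramid does this method's best evidence reach?"""
    tier = BEST_EVIDENCE.get(method, "expert opinion and anecdote")
    return EVIDENCE_TIERS.index(tier)

for method in BEST_EVIDENCE:
    print(f"{method}: tier {evidence_rank(method)} of {len(EVIDENCE_TIERS) - 1}")
```

A low rank does not mean a method is bad; it means we do not yet know, and it flags exactly where a cheap, low-tier study would move a method up the pyramid.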

Examined from this perspective, there are many useful and simple kinds of studies that intelligence professionals at all levels, and in all areas of the intelligence discipline, can do to make a difference in the field.  More importantly, many of these kinds of studies are tailor-made for the growing number of intel studies students in the US and elsewhere.