(Note: I was recently asked to present a paper on my thoughts about re-defining the intelligence process, and the implications of that redefinition for education, training and integration across the community, at the US Intelligence Community's Geospatial Training Council (CGTC) conference in Washington, DC. For those familiar with my earlier work on the intelligence cycle and the damage it is causing, you will find this paper shorter and less about the Cycle and more about the alternative I am proposing (and the evidence to support the adoption of that alternative...). Enjoy!)
Abstract: Effective integration and information sharing within the intelligence community is not possible until the fundamental process of intelligence is re-imagined for the 21st Century. The current model, the Intelligence Cycle, developed during World War II and widely criticized since, has outlived its useful life. In fact, it has become part of the problem. This paper abandons that sequential process, which was appropriate for a slower, less information-rich environment, and proposes a more streamlined parallel process in its place. Accompanying this new vision of the intelligence process is an analysis of data collected from over 130 real-world intelligence projects conducted using this model of the intelligence process and delivered to decisionmakers in the national security (including GEOINT), law enforcement and business sectors. Additionally, the training and education implications, as well as the kinds of software and hardware systems necessary to support this new understanding of the process, are discussed.
Part 1 -- Introduction
"We must begin by redefining the traditional linear intelligence cycle, which is more a manifestation of the bureaucratic structure of the intelligence community than a description of the intelligence exploitation process." -- Eliot Jardines, former head of the Open Source Center, in prepared testimony in front of Congress, 2005
"When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle. It is really not a very good description of the ways in which the intelligence process works." -- Arthur Hulnick, "What's Wrong With The Intelligence Cycle", Strategic Intelligence, Vol. 1 (Loch Johnson, ed), 2007
"Although meant to be little more than a quick schematic presentation, the CIA diagram [of the intelligence cycle] misrepresents some aspects and misses many others." -- Mark Lowenthal, Intelligence: From Secrets to Policy (2nd Ed., 2003)
"Over the years, the intelligence cycle has become somewhat of a theological concept: No one questions its validity. Yet, when pressed, many intelligence officers admit that the intelligence process, 'really doesn't work that way.'" -- Robert Clark, Intelligence Analysis: A Target-centric Approach, 2010
Academics have noted it and professionals have confirmed it: Our current best depiction of the intelligence process, the so-called "intelligence cycle", is fatally flawed. Moreover, I believe these flaws have become so severe, so grievous, that continued adherence to and promotion of the cycle is actually counterproductive. In this paper I intend to briefly outline the main flaws in the intelligence cycle, to discuss how the continued use of the cycle hampers, indeed extinguishes, efforts to effectively integrate and share information and, finally, suggest an alternative process – a parallel process – that, if adopted, would transform intelligence training and education.
Despite its popularity, the history of the cycle is unclear. US Army regulations published during WWI identify the collection, collation and dissemination of military intelligence as essential duties of what was then called the Military Intelligence Division, but there was no suggestion that these three functions happened in sequence, much less in a cycle.
By 1926, military intelligence officers were recommending four distinct functions for tactical combat intelligence: Requirements, collection, "utilization" (i.e. analysis), and dissemination, though, again, there was no explicit mention of an intelligence cycle.
The first direct mention of the intelligence cycle (see image) is from the 1948 book, Intelligence Is For Commanders. Since that time, the cycle, as a model of how intelligence works, has become pervasive. A simple Google image search on the term, "Intelligence Cycle" rapidly gives one a sense of the wide variety of agencies, organizations and businesses that use some variant of the cycle.
The Google Image Search above highlights the first major criticism of the Intelligence Cycle: Which one is correct? In fact, an analysis of a variety of Intelligence Cycles from both within and outside the intelligence community reveals significant differences, often within a single organization (see chart below, gathered from various official websites in 2011).
While there is some consistency (“collection”, for example, is mentioned in every variant of the cycle), these disparities have significant training and education implications that will likely manifest themselves as different agencies attempt to impose their own understanding of the process during joint operations. Different agencies teaching fundamentally different versions of the process will likewise seriously impact the systems designed to support analysts and operators within agencies. This, in turn, will likely make cross-agency integration and information sharing more difficult or even impossible.
The image also highlights the second major problem with the cycle: Where is the decisionmaker? None of the versions of the intelligence cycle listed above explicitly include or explain the role of the decisionmaker in the process. Few, in fact, include a specific feedback or evaluation step. From the standpoint of a junior professional in a training environment (particularly in a large organization such as the US National Security Intelligence Community where intelligence professionals are often both bureaucratically and geographically distant from the decisionmakers they support), this can create the impression that intelligence is a “self-licking ice-cream cone” – existing primarily for its own pleasure rather than as an important component of a decision support system.
Finally, and most damningly (and as virtually all intelligence professionals know): “It just doesn’t work that way.” The US military's Joint Publication 2-0, Joint Intelligence (page I-5), describes modern intelligence as the antithesis of the sequential process imagined by the Cycle. Instead, intelligence is clearly described as fast-paced and interactive, with many activities taking place simultaneously (albeit with different levels of emphasis):
"In many situations, various intelligence operations occur almost simultaneously or may be bypassed altogether. For example, a request for imagery requires planning and direction activities but may not involve new collection, processing, or exploitation. In this case, the imagery request could go directly to a production facility where previously collected and exploited imagery is reviewed to determine if it will satisfy the request. Likewise, during processing and exploitation, relevant information may be disseminated directly to the user without first undergoing detailed all-source analysis and intelligence production. Significant unanalyzed operational information and critical intelligence should be simultaneously available to both the commander (for time-sensitive decision-making) and to the all source intelligence analyst (for the production and dissemination of intelligence assessments and estimates). Additionally, the activities within each type of intelligence operation are conducted continuously and in conjunction with activities in each intelligence operation category. For example, intelligence planning (IP) occurs continuously while intelligence collection and production plans are updated as a result of previous requirements being satisfied and new requirements being identified. New requirements are typically identified through analysis and production and prioritized dynamically during the conduct of operations or through joint operation planning.”
The training and education implications of this kind of disconnect between the real-world of intelligence and the process as taught in the classroom, between practice and theory, are both severe and negative.
At one end of the spectrum, it is as simple as a violation of the long-standing military principle of “train as you will fight”. Indeed, the only question is which approach is more counterproductive: forcing students of intelligence to learn the Cycle, only to realize after graduation, and on their own, that it is unrealistic; or throwing a slide of the Cycle up on the projector only to have an experienced instructor announce that “This is what you have to learn, but this isn’t the way it really works.” Both scenarios regularly take place within the training circles of the intelligence community.
At the other end of the spectrum, the damage is more nuanced and systemic. Specifically, intelligence professionals aren’t just undermining their own training; they are miscommunicating to those outside the community as well. The effects of this may seem manageable, even trivial, to some, but imagine a software engineer trying to design a product to support intelligence operations. This individual will know nothing but the Cycle, will take it as an accurate description of the process, and will design products accordingly.
In fact, it was the failure of these kinds of software projects to gain traction within the Intelligence Community that led Georgia Tech visual analytics researcher Youn-ah Kang and her advisor, Dr. John Stasko, to undertake an in-depth, longitudinal field study to determine how, exactly, intelligence professionals did what they did. While all of the results of their study are both interesting and relevant, the key misconception they identified is that “Intelligence analysis is about finding an answer to a problem via a sequential process.” In turn, the failure to recognize this misconception earlier contributed to the failure of many of the tools they and others had created. In short, as Kang and Stasko noted, “Many visual analytics tools thus support specific states only (e.g., shoebox and evidence file, evidence marshalling, foraging), and often they do not blend into the entire process of intelligence analysis.”
Next: Part 2 -- The Mercyhurst Model