
Monday, June 9, 2014

Thinking In Parallel (Part 2 - The Mercyhurst Model)

Part 1 -- Introduction

While a number of tweaks and modifications to the cycle have been proposed over the years, very few professionals or academics have recommended wholesale abandonment of this vision of the intelligence process.

This is odd.  

Other fields routinely modify and improve their processes in order to remain more competitive or productive.  The US Army, for example, has gone through several major revisions to its combat doctrine over the last 30 years, from the Active Defense Doctrine of the 1970s to the AirLand Battle Doctrine of the '80s and '90s to Network Centric Operations in the early part of the 21st century.  The model of the intelligence process, the Intelligence Cycle, has, however, largely remained the same throughout this period despite the criticisms leveled against it.  The questions “What is the intelligence process?” and “What should the intelligence process be?” thus remain open theoretical questions, ripe for examination.

There are common themes, however, that emerge from this discussion of process.  These themes dictate, in my mind, that a complete understanding of the intelligence process must always include both an understanding of intelligence's role in relation to operations and the decisionmaker and an understanding of how intelligence products are created.  Likewise, I believe that the process of creating intelligence is best visualized as a parallel rather than a sequential process.  I call this the "Mercyhurst Model" and believe it is a better way to do intelligence.  More importantly, I think I have the evidence to back that statement up.

The first of the common themes referenced above is that the center of the process should be an interactive relationship between operations, the decisionmaker and the intelligence unit.  It is very clear that the intelligence process cannot be viewed in a vacuum.  If it is correct to talk about an “intelligence process” on one side of the coin, it is equally important for intelligence professionals to realize that there is an operational process on the other side, just as large if not larger and equally important if not more so, and a decisionmaking process that includes both.

The operational and intelligence processes overlap in significant ways, particularly with respect to the purpose and the goals of the individual or organization they support.  The intelligence professional is, however, focused externally and attempts to answer questions such as “What is the enemy up to?” and “What are the threats and opportunities in my environment?”  The decisionmaking side of the coin is more focused on questions such as “How will we organize ourselves to take advantage of the opportunity or to mitigate the threat?” and “How do we optimize the use of our own resources to accomplish our objectives?”  In many ways, the fundamental intelligence question is “What are they likely to do?” and the decisionmaker’s question is “What are we going to do?”  The image below suggests this relationship graphically.



The second theme is that it should be from this shared vision of the organization’s purpose and goals that intelligence requirements “emerge”.  With few exceptions, the various authors who have written about the intelligence process have shown little concern with where requirements come from.  While most acknowledge that they generally come from the decisionmakers or operators who have questions or need estimates to help them make decisions, it also seems appropriate for intelligence professionals to raise issues or provide information that was not specifically requested when relevant to the goals and purpose of the organization.  In short, there seems to be room for both “I need this” coming from a decisionmaker and “I thought you would want to know this” coming from the intelligence professional, as long as it is relevant to the organization’s goals and purposes.

Theoretically, at least, the shared vision of the goals and purpose of the organization should drive decisionmaker feedback as well.  The theoretical possibility of feedback, however, contrasts sharply with the common perception of reality, at least within the US national security community, that feedback is ad hoc at best.  There, the intelligence professionals preparing the intelligence are oftentimes so distant from the decisionmakers they are supporting that feedback is a rare occurrence and, if it comes at all, typically comes only when there has been a flaw in the analysis or products.  As the former Deputy Director of National Intelligence for Analysis, Thomas Fingar (among others), has noted, “There are only two possibilities: policy success and intelligence failure,” suggesting that “bad” intelligence is often a convenient whipping boy for poor decisions while “good” intelligence rarely gets credit for eventual decisionmaker successes.

It is questionable whether this perception of reality applies throughout the intelligence discipline or even within the broader national security community.  Particularly at the tactical level, where the intelligence professional often shares the same foxhole, as it were, with the decisionmaker, it becomes obvious relatively quickly how accurate and how useful the intelligence provided is to the operators.  While most intelligence professionals subscribe to the poor-feedback theory, most also have a story or two about giving analysis to decisionmakers that made a real difference, a difference willingly acknowledged by the decisionmaker.  The key to this kind of feedback seems less related to the issue, or to intelligence writ large, and more related to how closely tied the intelligence and decisionmaking functions are.  The more distance between the two, the less feedback, unsurprisingly, there is likely to be.

The third theme is that from the requirement also emerges a mental model in the mind of the intelligence professional regarding the kinds of information that he or she needs in order to address the requirement.  This model, whether implicit or explicit, emerges as the intelligence professional thinks about how best to answer the question and is constructed from previous knowledge and the professional’s understanding of the question.

This mental model typically contains at least two kinds of information: information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, Philip Tetlock has demonstrated that a relatively high level of general knowledge about the world significantly improves forecasting accuracy across any domain of knowledge, even highly specialized ones.  (Counter-intuitively, he also offers good evidence to suggest that a high degree of specialized knowledge, even within the domain under investigation, does not add significantly to forecasting accuracy.)

The mental model is more than just an outline, however.  It is where biases and mental shortcuts are most likely to impact the analysis.  It is where divergent thinking strategies are most likely to benefit and where their opposites, convergent thinking strategies such as grouping, prioritizing and filtering, need to be most carefully applied.  One of the true benefits of this model over the traditional Intelligence Cycle is that it explicitly includes humans in the loop - both what they do well and what they don't.
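For readers who like to see ideas made concrete, here is a minimal sketch of what an explicit, rather than purely mental, model of a requirement might look like in code.  To be clear, this is purely illustrative:  the class, field names and methods below are my own inventions, not part of any existing system, but they capture the two kinds of information the model contains and the divergent and convergent steps applied to it.

```python
from dataclasses import dataclass, field

@dataclass
class RequirementModel:
    """Toy, explicit version of an analyst's mental model of a requirement."""
    requirement: str
    known: list = field(default_factory=list)        # information already held
    gaps: list = field(default_factory=list)         # information still to be collected
    assumptions: list = field(default_factory=list)  # assumptions to surface and challenge

    def diverge(self, new_questions):
        # Divergent step: deliberately widen the model before filtering anything out.
        self.gaps.extend(new_questions)

    def converge(self, priorities):
        # Convergent step: group, prioritize and filter the collection gaps.
        self.gaps = sorted(self.gaps, key=lambda g: priorities.get(g, 99))[:len(priorities)]

model = RequirementModel("What is the competitor likely to do next quarter?")
model.diverge(["hiring patterns", "patent filings", "executive statements"])
model.converge({"patent filings": 1, "hiring patterns": 2})
print(model.gaps)  # ['patent filings', 'hiring patterns']
```

Even a toy like this makes the point:  writing the model down forces the gaps and assumptions into the open, which is exactly where divergent and convergent strategies can be applied deliberately rather than unconsciously.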

Almost as soon as the requirement gains enough form to be answerable, however, and even as it continues to be modified through an exchange or series of exchanges between the decisionmakers and the intelligence professionals, four processes, operating in parallel, start to take hold: the modeling process just discussed, collection (in a broad sense) of additional relevant information, analysis of that information with the requirement in mind, and early ideas about production (i.e. how the final product will look, feel and be disseminated in order to communicate the results to the decisionmaker).

The notional graphic below visualizes the relationship between these four factors over the life of an intelligence product.  Such a product might have a short suspense (or due date) as in the case of a crisis or a lengthier timeline, as in the case of most strategic reports, but the fundamental relationship between the four functions will remain the same.  All four begin almost immediately but, through the course of the project, the amount of time spent focused on each function will change, with each function dominating the overall process at some point.  The key, however, is that these four major functions operate in parallel rather than in sequence, with each factor informing and influencing the other three at any given point in the process.
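(For those who prefer to generate rather than read graphics, the short sketch below reproduces this notional relationship in a few lines of Python.  The bell-shaped curves and the peak locations are my assumptions, chosen to illustrate the shape of the argument, not measured data.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Notional emphasis curves for the four parallel functions over the life of a
# project (t = 0 is the requirement, t = 1 is dissemination).  The peak
# positions below are illustrative assumptions, not measurements.
t = np.linspace(0, 1, 200)

def emphasis(peak, width=0.22):
    # Bell-shaped curve: each function is active throughout but peaks once.
    return np.exp(-((t - peak) ** 2) / (2 * width ** 2))

for name, peak in [("Modeling", 0.15), ("Collection", 0.40),
                   ("Analysis", 0.65), ("Production", 0.90)]:
    plt.plot(t, emphasis(peak), label=name)

plt.xlabel("Project timeline (requirement to dissemination)")
plt.ylabel("Relative emphasis")
plt.legend()
plt.show()
```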



A good example of how these four functions interrelate is your own internal dialogue when someone asks you a question.  Understanding the question is clearly the first part, followed almost immediately by a usually unconscious realization of what it would take to answer the question along with a basic understanding of the form that answer needs to take.  You might recall information from memory but you also realize that there are certain facts you might need to check before you answer the question.  If the question is more than a simple fact-based question, you would probably have to do at least some type of analysis before framing the answer in a form that would most effectively communicate your thoughts to the person asking the question.  You would likely speak differently to a child than you would to an adult, for example, and, if the question pertained to a sport, you would likely answer differently when speaking with a rabid fan than with someone who knew nothing about that particular sport.

This model of the process concludes, then, where it started:  with the relationship between the decisionmaker, the intelligence professional and the goals and purposes of the organization.  The question here is not requirements, however, but feedback.  The products the intelligence unit delivered were, ultimately, either useful or not.  The feedback that results from the execution of the intelligence process will shape, in many ways, the types of requirements put to the intelligence unit in the future, the methods and processes the unit will use to address those requirements and the way in which the decisionmaker will view future products.

This model envisions the intelligence process as one where everything, to one degree or another, is happening at once.  It starts with the primacy of the relationship between intelligence professionals and the decisionmakers they support.  It broadens and redefines those few generally agreed-upon functions of the intelligence cycle, seeing them as operating in parallel, with each taking precedence in more or less predictable ways throughout the process.  It also explicitly adds the creation and refinement of the intelligence unit's mental model of the requirement as an essential part of the process.  This combined approach captures the best of the old and new ways of thinking about the process of intelligence.  Does it, however, test well against the reality of intelligence as it is performed on real-world intelligence problems?

Part 3 -- Testing The Mercyhurst Model Against The Real World

Friday, June 6, 2014

Thinking In Parallel: A 21st Century Vision Of The Intelligence Process

(Note:  I was recently asked to present a paper on my thoughts about re-defining the intelligence process, and the implications of that redefinition for education, training and integration across the community, at the US Intelligence Community Geospatial Training Council's (CGTC) conference in Washington, DC.  Those familiar with my earlier work on the intelligence cycle and the damage it is causing will find this paper shorter, less about the Cycle and more about the alternative to it I am proposing (and the evidence to support the adoption of that alternative...).  Enjoy!)


Abstract:  Effective integration and information sharing within the intelligence community is not possible until the fundamental process of intelligence is re-imagined for the 21st century.  The current model, the Intelligence Cycle, developed during World War II and widely criticized since, has outlived its useful life.  In fact, it has become part of the problem.  This paper abandons that sequential process, which was appropriate for a slower and less information-rich environment, and proposes a more streamlined parallel process instead.  Accompanying this new vision of the intelligence process is an analysis of data collected from over 130 real-world intelligence projects conducted using this model and delivered to decisionmakers in the national security (including GEOINT), law enforcement and business sectors.  Additionally, the training and education implications, as well as the kinds of software and hardware systems necessary to support this new understanding of the process, are discussed.

Part 1 -- Introduction

"We must begin by redefining the traditional linear intelligence cycle, which is more a manifestation of the bureaucratic structure of the intelligence community than a description of the intelligence exploitation process." -- Eliot Jardines, former head of the Open Source Center, in prepared testimony in front of Congress, 2005  
"When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle.  It is really not a very good description of the ways in which the intelligence process works."  Arthur Hulnick, "What's Wrong With The Intelligence Cycle", Strategic Intelligence, Vol. 1 (Loch Johnson, ed), 2007
"Although meant to be little more than a quick schematic presentation, the CIA diagram [of the intelligence cycle] misrepresents some aspects and misses many others." -- Mark Lowenthal, Intelligence:  From Secrets to Policy (2nd Ed.,2003) 
"Over the years, the intelligence cycle has become somewhat of a theological concept:  No one questions its validity.  Yet, when pressed, many intelligence officers admit that the intelligence process, 'really doesn't work that way.'" -- Robert Clark, Intelligence Analysis:  A Target-centric Approach, 2010



Academics have noted it and professionals have confirmed it:  Our current best depiction of the intelligence process, the so-called "intelligence cycle", is fatally flawed.  Moreover, I believe these flaws have become so severe, so grievous, that continued adherence to and promotion of the cycle is actually counterproductive.  In this paper I intend to briefly outline the main flaws in the intelligence cycle, to discuss how the continued use of the cycle hampers, indeed extinguishes, efforts to effectively integrate and share information and, finally, to suggest an alternative process -- a parallel process -- that, if adopted, would transform intelligence training and education.

*****

Despite its popularity, the history of the cycle is unclear.  US Army regulations published during WWI identified collection, collation and dissemination of military intelligence as essential duties of what was then called the Military Intelligence Division, but there was no suggestion that these three functions happen in a sequence, much less in a cycle.

By 1926, military intelligence officers were recommending four distinct functions for tactical combat intelligence:  Requirements, collection, "utilization" (i.e. analysis), and dissemination, though, again, there was no explicit mention of an intelligence cycle.

The first direct mention of the intelligence cycle (see image) is from the 1948 book, Intelligence Is For Commanders.  Since that time, the cycle, as a model of how intelligence works, has become pervasive.  A simple Google image search on the term, "Intelligence Cycle" rapidly gives one a sense of the wide variety of agencies, organizations and businesses that use some variant of the cycle.

The Google Image Search above highlights the first major criticism of the Intelligence Cycle:  Which one is correct?  In fact, an analysis of a variety of Intelligence Cycles from both within and outside the intelligence community reveals significant differences, often within a single organization (see the chart below, gathered from various official websites in 2011).

While there is some consistency (“collection”, for example, is mentioned in every variant of the cycle), these disparities have significant training and education implications that will likely manifest themselves as different agencies attempt to impose their own understanding of the process during joint operations.  Different agencies teaching fundamentally different versions of the process will likewise seriously impact the systems designed to support analysts and operators within agencies.  This, in turn, will likely make cross-agency integration and information sharing more difficult or even impossible.




The image also highlights the second major problem with the cycle:  Where is the decisionmaker?  None of the versions of the intelligence cycle listed above explicitly includes or explains the role of the decisionmaker in the process.  Few, in fact, include a specific feedback or evaluation step.  From the standpoint of a junior professional in a training environment (particularly in a large organization such as the US National Security Intelligence Community where intelligence professionals are often both bureaucratically and geographically distant from the decisionmakers they support), this can create the impression that intelligence is a “self-licking ice-cream cone” -- existing primarily for its own pleasure rather than as an important component of a decision support system.

Finally, and most damningly (as virtually all intelligence professionals know):  “It just doesn’t work that way.”  The US military's Joint Publication 2-0, Joint Intelligence (Page 1-5), describes modern intelligence as the antithesis of the sequential process imagined by the Cycle.  Instead, intelligence is clearly described as fast-paced and interactive, with many activities taking place simultaneously (albeit with different levels of emphasis):

"In many situations, various intelligence operations occur almost simultaneously or may be bypassed altogether. For example, a request for imagery requires planning and direction activities but may not involve new collection, processing, or exploitation. In this case, the imagery request could go directly to a production facility where previously collected and exploited imagery is reviewed to determine if it will satisfy the request. Likewise, during processing and exploitation, relevant information may be disseminated directly to the user without first undergoing detailed all-source analysis and intelligence production. Significant unanalyzed operational information and critical intelligence should be simultaneously available to both the commander (for time-sensitive decision-making) and to the all source intelligence analyst (for the production and dissemination of intelligence assessments and estimates). Additionally, the activities within each type of intelligence operation are conducted continuously and in conjunction with activities in each intelligence operation category. For example, intelligence planning (IP) occurs continuously while intelligence collection and production plans are updated as a result of previous requirements being satisfied and new requirements being identified. New requirements are typically identified through analysis and production and prioritized dynamically during the conduct of operations or through joint operation planning.”

The training and education implications of this kind of disconnect between the real-world of intelligence and the process as taught in the classroom, between practice and theory, are both severe and negative.  

At one end of the spectrum, it is as simple as a violation of the long-standing military principle of “Train as you will fight”.  Indeed, the only real question is which approach is more counterproductive:  forcing students of intelligence to learn the Cycle only to realize after graduation, and on their own, that it is unrealistic, or throwing a slide of the Cycle up on the projector only to have an experienced instructor announce, “This is what you have to learn but this isn’t the way it really works.”  Both scenarios regularly take place within the training circles of the intelligence community.

At the other end of the spectrum, the damage is much more nuanced and systemic.  Specifically, intelligence professionals aren’t just undermining their own training; they are miscommunicating to those outside the community as well.  The effects of this may seem manageable, even trivial, to some, but imagine a software engineer trying to design a product to support intelligence operations.  This individual will know nothing but the Cycle, will take it as an accurate description of the process, and will design products accordingly.

In fact, it was the failure of these kinds of software projects to gain traction within the Intelligence Community that led Georgia Tech visual analytics researcher Youn-ah Kang and her advisor, Dr. John Stasko, to undertake an in-depth, longitudinal field study to determine how, exactly, intelligence professionals did what they did.  While all of the results of their study are both interesting and relevant, the key misconception they identified is that “Intelligence analysis is about finding an answer to a problem via a sequential process.”  The failure to recognize this misconception earlier doomed many of the tools they and others had created.  In short, as Kang and Stasko noted, “Many visual analytics tools thus support specific states only (e.g., shoebox and evidence file, evidence marshalling, foraging), and often they do not blend into the entire process of intelligence analysis.”

Next:  Part 2 -- The Mercyhurst Model

Friday, March 23, 2012

Part 13 - The Whole Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture 
Part 12 -- The New Intelligence Process:  The Second Picture 



In the end, whether you accept this new model of the intelligence process or not, it is clear that the hoary image of the intelligence cycle needs to be put to rest.  Whether you would do that with full honors or, as I advocate, with the use of explosives, is irrelevant.  The cycle, as should be clear by now, needs to go.

To summarize, the cycle fails on at least three counts:  we cannot define what it is and what it isn't; it does not match the way intelligence actually works in the 21st century; and it does not help us explain our processes to the decisionmakers we support.  Efforts to fix these flaws have not worked and, furthermore, this is all widely recognized by those who have studied the role and impact of the cycle.

In addition, the community of intelligence professionals (and I include academics who study intelligence in this group) will have to be the ones to lay the cycle to rest.  Not only does no one else care, but also the community of intelligence professionals has, as the WMD report noted, "an almost perfect record of resisting external recommendations." 

Yes, the interregnum will be difficult.  The decisionmakers we support, the professionals with whom we work and the students we teach will all ask -- and deserve -- good answers.  These answers will come slowly at first.  In fact, at the outset, we may only be able to "teach the controversy", as it were.

Hopefully, over time, though, the need for a new vision of the intelligence process will drive intellectual curiosity and, through the iterative process of creation and destruction, something more robust will emerge:  an improved model that will stand the tests of the next 60 years.  While I have clearly already placed my bets in this regard, I will be happy if the community of intelligence professionals merely recognizes the need to move beyond its historical constraints, accepts this siren's call for what it is, plugs its ears and sails off in a new direction - any direction.

Because anything would be better than continuing to pretend that the world has not really changed since the 1940s.  Anything would be better than continuing to spend countless wasted hours explaining and attempting to justify something that should have been retired long ago.  Anything, in short, would be better than continuing to lie to ourselves.

Wednesday, March 21, 2012

Part 12 -- The New Intelligence Process: The Second Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture


(Note:  I started this series of posts many months ago with the intent of completing it in short order.  Life, as it so often does, got in the way...  If you are new to the series or you have forgotten what the excitement was all about, I recommend beginning at the beginning.  For the rest of you, thank you for your patience!)


At the highest level, intelligence clearly supports the decisionmaking process.  Understanding this is a first step to understanding what drives intelligence requirements and what defines good intelligence products.  This is the message of the first picture.

But what about the details?  Broad context is fine as far as it goes, but how should the modern intelligence professional think about the process of getting intelligence done?  The second picture is designed to answer these questions.

The Second Picture

The single most important thing to notice about this image is that it imagines intelligence as a parallel rather than a sequential process.  In this image of the process, there are four broad themes, or sub-processes, moving across time from a nebulous start to a fuzzy finish, with each theme rising to a high point of emphasis at a different point in the process.  The image is also intended to convey that each theme constantly feeds back and forth with the other three, influencing them and being influenced by them at every point in time.

Let me anticipate an initial objection to this picture -- that the intelligence process has a "start" and a "finish".  The intelligence function, to be sure, is an ongoing one and this was one of the implied lessons of the first picture.  Having made that point there, here I think it is important to focus on how intelligence products are actually generated.  In this respect, clearly, there is a point at which a question (an intelligence requirement) is asked.  It may be indistinct, poorly formed or otherwise unclear, but the focus of an intelligence effort does not exist in any meaningful way until there is a question that is, in some way, relevant to the decisionmaking process the intelligence unit supports.

Likewise, there is a finish.  It may take place in an elevator or in a formal brief, in a quick email or in a 50-page professionally printed and bound document, but answering those questions, i.e. the dissemination of the intelligence product in whatever form, signifies the end of the process.  Yes, this process then begins immediately anew with new questions, and yes, there are always multiple questions being asked and answered simultaneously, but neither observation invalidates the general model.

What of the sub-processes, though?  What are they and how do they relate to each other?  The four include mental modeling, collection of relevant information, analysis of that information and production (i.e. how the intelligence will be communicated to the decisionmakers).

Mental Modeling


Until intelligence becomes a process where machines speak only to other machines, the mental models carried around by intelligence professionals and the decisionmakers they support will be an inseparable part of the intelligence process.  While most intelligence professionals readily acknowledge the strengths and weaknesses of human cognition, one of the most important qualities of this model, in my mind, is that it embeds these strengths and weaknesses directly into the process and acknowledges the influence of the human condition on intelligence.

These mental models typically contain at least two kinds of information:  information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, a relatively high level of general knowledge about the world has been demonstrated to significantly improve forecasting accuracy across any domain of knowledge, even highly specialized ones.  (Counter-intuitively, there is good evidence to suggest that a high degree of specialized knowledge, even within the domain under investigation, does not add significantly to forecasting accuracy.)

The flip side of this coin is psychological bias, which has a way of leading analysts astray without them even being aware of it.  An extensive overview of these topics is beyond the scope of this post but it is safe to say that, whether implicit or explicit, these models, containing what we know, what we think we need to know and how our minds will process all this information, emerge as the intelligence professional thinks about how best to answer the question.   

Typically, at the outset of the intelligence process, it is this modeling function that receives the most emphasis.  Figuring out how to think about the problem, understanding what kind of information needs to be collected and identifying key assumptions in both the questions and the model are all necessary to some degree before the other functions can begin in earnest.  This is especially true with a new or particularly complex requirement.  Furthermore, this modeling function is often informal or even implicit.  It is rare, in current practice, to see the mental model on which collection is planned and analysis conducted made explicit.  This is unfortunate since making the model explicit has proven, if done properly, to accelerate the other sub-processes, limit confusion within a team and produce more accurate forecasts.

Modeling should go on throughout the entire intelligence process, however.  As new information comes in or analysis gets produced, the model may well grow, shrink or morph as the concepts and the relationships between those concepts become more clear.  At some point (typically early) in the intelligence process, however, the emphasis shifts away from modeling and towards collecting, analyzing and producing.  While mental modeling doesn’t become unimportant, it does begin to lose importance as less time is devoted to modeling and more to the other three functions. 

Collection
 
Typically, the next sub-process to take precedence is collection.  Again, as with modeling, collection begins almost as soon as a rudimentary requirement forms in the mind of the intelligence professional.  People naturally begin to draw on their own memories and, if the question is complicated enough, begin to look for additional information to answer it.  For more complex questions, where the information needs are clearly higher, the intelligence professional may even draw up a collection plan and task others to collect the information needed to address the requirement.

Collection, like modeling, never stops.  Intelligence professionals will continue to collect information relevant to the particular requirement right up to the day the final product is published.  In fact, collection on a particularly difficult problem (i.e. almost all of them) will often continue after publication.  Decisionmakers and analysts alike want to know whether their key assumptions were correct and how accurate the final product was, and all understand the need to continue tracking particularly important requirements over time.

All that said, collection does tend to lose importance relative to the other functions over time.  Economists call this diminishing returns:  considered across the entire spectrum of activity, from no knowledge about a subject to the current level of knowledge, collection efforts typically add less and less genuinely new information over time.  Again, this is not to say that collection becomes unimportant; it is simply a reflection of the fact that other processes tend to increase in importance relative to collection at some point in the process.
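A notional way to see this -- and I stress that the saturation curve and its rate below are assumptions chosen purely for illustration -- is to model cumulative coverage of the relevant information as an exponential approach to a ceiling, so that each additional unit of collection effort yields a smaller marginal gain than the last:

```python
import math

# Notional diminishing-returns curve for collection:  cumulative coverage of
# the relevant information approaches 1.0, so each additional unit of effort
# adds less genuinely new information than the one before it.
def coverage(effort, rate=0.5):
    return 1 - math.exp(-rate * effort)

previous = 0.0
for effort in range(1, 9):
    current = coverage(effort)
    print(f"effort {effort}: coverage {current:.2f}, marginal gain {current - previous:.2f}")
    previous = current
```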

Analysis

The next sub-process to take precedence is analysis.  As with both modeling and collection, analysis begins almost immediately.  Tentative answers leap to mind and, in simple cases or where time is a severe constraint, these initial responses may have to do.  Analysis doesn’t really move to the forefront, however, until the requirement is understood and enough collection has taken place for the analyst to sense that adequate information exists to begin to go beyond tentative analyses and take a crack at answering the overall question or questions.

Analysis is where the raw material of intelligence, information, gets turned into products that address the decisionmaker’s requirements.  It is also the task most fraught with difficulties.  From the type of information used (typically unstructured) to the methods used to analyze this information to the form of the final product, analysts face enormous practical and psychological difficulties.  While the goal is clear – reduce the decisionmaker’s level of uncertainty – the best ways to get there are often unclear or rely on untested or poorly tested methods. 

Production

The final sub-process is production (which, for our purposes here, also includes dissemination).  As with all the other functions, it, too, begins on day one.  It is clearly, however, the least important function at the outset of the intelligence process.  Still, intelligence professionals do give some thought up front (and experienced professionals have learned to give more than a little thought) to the form and nature of the final product.

Requirements typically come with an implied or explicit “deliverable” associated with them.  Is the answer to the intelligence requirement, for example, to be in the form of a briefing or a written report?  Knowing this at the outset helps the intelligence professionals tasked with answering the requirement to plan and to identify items along the way that will make the production of the final product easier.  For example, knowing that the final product is to be a briefing gives the intelligence professionals associated with the project time to identify relevant graphics during the project rather than going back and finding such graphics at the last minute.  Likewise, if the final product is to be a written document, the time necessary to write and edit it might be substantial and this, in turn, would need to be factored into the planning process.

Production is an incredibly important but often under-appreciated function within the intelligence process.  If intelligence products are not accessible, i.e. packaged with the decisionmaker in mind, then they are unlikely to be read or used.  Under such circumstances, all of the hard work done by intelligence professionals up to this point is wasted.  On the other hand, there is a fine line between making a document or other type of intelligence report accessible and selling a particular position or way of thinking about a problem.  Intelligence professionals have to steer clear of those production methods and “tricks” that can come across as advertising or advocacy.  Production values should not compromise the goal of objectivity.

Likewise, some intelligence professionals associate high production values with pandering to the decisionmaker.  These professionals see adding multimedia, graphics, color and other design features to an intelligence product to be unnecessary “chrome” or “bling”.  These professionals, many from earlier generations, think that intelligence products “should stand on their own” and that the ease with which such “tricks” are used in modern production is not an excuse to deviate from time-honored traditions in production. 

The guiding principle here, of course, is not what the intelligence professional thinks but what the decisionmaker the intelligence professional is supporting thinks.  Some decisionmakers will, of course, prefer their intelligence products in a simple text-based format.  Others, including many business professionals, will want less text and more supporting data, including charts and graphs.  Some (and the demand for this may well increase in the future) will want their reports in a video format for use on their personal multimedia device. 

Intelligence professionals in general, then, will need to have a wider variety of production skills in the future and, while production concerns do not take precedence until closer to the end of the project, the need to think about them at some level permeates the entire project.

Next:  The Whole Picture

Monday, June 6, 2011

Part 10 -- The New Intelligence Process (Let's Kill The Intelligence Cycle)

 
All of the examples examined in the previous sections are really just hypotheses, or guesses, about how the intelligence process works (or should work).  All are based on anecdotal descriptions of the intelligence process as currently conducted solely within the US national security community.  

Few of the models attempted to broaden their applicability to either the business or law enforcement sectors.  Very few are based on any sort of systematic, empirically grounded research, so, even if they more or less accurately describe how intelligence is done today, it remains unclear whether these models are the best that intelligence professionals can do.

Other fields routinely modify and improve their processes in order to remain more competitive or productive.  The traditional model of the intelligence process, the intelligence cycle, has, however, largely remained the same since the 1940s despite the withering criticisms leveled against it and, in a few cases, attempts to completely overthrow it.

While some might see the cycle's staying power as a sign of its strength, I prefer to see its lack of value to decisionmakers, its failure to shed any real light on how intelligence is actually done and the various intelligence communities' inability even to define the cycle consistently as hallmarks of what is little more than a very poor answer to the important -- and open -- theoretical question:  "What is the intelligence process?"

It is to resolving this question that I will devote the remaining posts in this series.

Next:  The First Picture

Thursday, June 2, 2011

Part 9 -- Departures From The Intelligence Cycle (Let's Kill The Intelligence Cycle)


Other authors have proposed, however, radically different versions of the intelligence process, overthrowing old notions in an attempt to more accurately describe how intelligence is done in the real world.  

The first of these attempts, by longtime academic and former CIA officer Arthur Hulnick, was the Intelligence Matrix.  Hulnick believed that intelligence was better described in terms of a matrix (see image below).  For Hulnick there were three main activities, parts of which, in many cases, occurred at the same time.  These three “pillars” were collection, production, and support and services.  Hulnick's model, while capturing more of the functions of intelligence, does not seem to provide much guidance on how to actually do intelligence.

Peter Pirolli and Stuart Card of the Palo Alto Research Center also attempted to re-define the intelligence process (see image below).  This re-definition has gained some traction outside of the intelligence community.  While much more complex than the cycle and typically perceived as a departure from it, Pirolli and Card's sensemaking loop is still both very sequential and very circular -- with all the limits that implies.

Probably the most recent and most successful move away from the intelligence cycle, however, has been Robert Clark’s target-centric approach to intelligence analysis (see image below).  What makes Clark unique in many respects is that he is not merely attempting to describe the current intelligence process; he is attempting to examine how intelligence should be done.

Clark expressly rejects the intelligence cycle and advocates a more inclusive approach, one that includes all of the “stakeholders”, i.e. the individuals and organizations potentially affected by the intelligence produced.  Clark claims that, to include these stakeholders, “the cycle must be redefined, not for the convenience of implementation in a traditional hierarchy but so that the process can take full advantage of evolving information technology and handle complex problems.”

Clark calls this a “target-centric approach” because “the goal is to construct a shared picture of the target, from which all participants can extract the elements they need to do their jobs and to which all can contribute from their resources or knowledge.”  This approach does a very good job of describing a healthy relationship between the intelligence professional and the decisionmaker he or she supports.

This description of the way intelligence should work seems to fit well with at least some of the initiatives pursued by the US national security intelligence community.  The example of Intellipedia, discussed in an earlier post, seems particularly close to Clark’s vision of the way intelligence should work.

What remains less clear is which came first.  Is Intellipedia a natural extension of Clark’s thinking or has Clark merely identified the value of a more inclusive, interactive, Intellipedia-like world?  Furthermore, beyond describing an ideal relationship between intelligence and decisionmakers, how does the intelligence product actually come about?  On this point, as with Hulnick, the model provides little guidance.

Next:  The New Intelligence Process