
Friday, March 23, 2012

Part 13 - The Whole Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture 
Part 12 -- The New Intelligence Process:  The Second Picture 



In the end, whether you accept this new model of the intelligence process or not, it is clear that the hoary image of the intelligence cycle needs to be put to rest.  Whether you would do that with full honors or, as I advocate, with the use of explosives, is irrelevant.  The cycle, as should be clear by now, needs to go.

To summarize, the cycle fails on three counts at least:  We cannot define what it is and what it isn't, it does not match the way intelligence actually works in the 21st Century and it does not help us explain our processes to the decisionmakers we support.  Efforts to fix these flaws have not worked and, furthermore, this is all widely recognized by those who have studied the role and impact of the cycle. 

In addition, the community of intelligence professionals (and I include academics who study intelligence in this group) will have to be the ones to lay the cycle to rest.  Not only does no one else care, but also the community of intelligence professionals has, as the WMD report noted, "an almost perfect record of resisting external recommendations." 

Yes, the interregnum will be difficult.  The decisionmakers we support, the professionals with whom we work and the students we teach will all ask -- and deserve -- good answers.  These answers will come slowly at first.  In fact, at the outset, we may only be able to "teach the controversy", as it were.

Hopefully, over time, though, the need for a new vision of the intelligence process will drive intellectual curiosity and, through the iterative process of creation and destruction, something more robust will emerge:  an improved model that will stand the tests of the next 60 years.   While I have clearly already placed my bets in this regard, I will be happy if the community of intelligence professionals merely recognizes the need to move beyond its historical constraints, accepts this siren's call for what it is, plugs its ears and sails off in a new direction - any direction.

Because anything would be better than continuing to pretend that the world has not really changed since the 1940's.   Anything would be better than continuing to spend countless wasted hours explaining and attempting to justify something that should have been retired long ago.  Anything, in short, would be better than continuing to lie to ourselves.

Wednesday, March 21, 2012

Part 12 -- The New Intelligence Process: The Second Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture


(Note:  I started this series of posts many months ago with the intent of completing it in short order.  Life, as it so often does, got in the way...  If you are new to the series or you have forgotten what the excitement was all about, I recommend beginning at the beginning.  For the rest of you, thank you for your patience!)


At the highest level, intelligence clearly supports the decisionmaking process.  Understanding this is a first step to understanding what drives intelligence requirements and what defines good intelligence products.  This is the message of the first picture.

But what about the details?  Broad context is fine as far as it goes, but how should the modern intelligence professional think about the process of getting intelligence done?  The second picture is designed to answer these questions.
The Second Picture

The single most important thing to notice about this image is that it imagines intelligence as a parallel rather than a sequential process.  In this image, there are four broad themes, or sub-processes, moving across time from a nebulous start to a fuzzy finish, with each theme reaching its peak of emphasis at a different point in the process.  The image is also intended to convey that each theme constantly interacts with the other three, influencing them and being influenced by them at every point in time.

Let me anticipate an initial objection to this picture -- that the intelligence process has a "start" and a "finish".  The intelligence function, to be sure, is an ongoing one and this was one of the implied lessons of the first picture.  Having made that point there, here I think it is important to focus on how intelligence products are actually generated.  In this respect, clearly, there is a point at which a question (an intelligence requirement) is asked.  It may be indistinct, poorly formed or otherwise unclear, but the focus of an intelligence effort does not exist in any meaningful way until there is a question that is, in some way, relevant to the decisionmaking process the intelligence unit supports.

Likewise, there is a finish.  It may take place in an elevator or in a formal brief, in a quick email or in a 50-page professionally printed and bound document, but answering those questions, i.e. the dissemination of the intelligence product in whatever form, signifies the end of the process.  Yes, this process then begins immediately anew with new questions, and yes, there are always multiple questions being asked and answered simultaneously, but neither observation invalidates the general model.

What of the sub-processes, though?  What are they and how do they relate to each other?  The four include mental modeling, collection of relevant information, analysis of that information and production (i.e. how the intelligence will be communicated to the decisionmakers).
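
To make the parallel structure concrete, here is a minimal sketch (my own illustration, not part of the model itself) in which all four sub-processes are active for the entire life of a project and only their relative emphasis shifts as the work moves from question to delivered product.  The weighting curves and numbers are arbitrary assumptions chosen only to show the shape of the idea.

```python
# Illustrative only: four sub-processes run in parallel from a fuzzy start
# (progress = 0.0) to a fuzzy finish (progress = 1.0). None ever drops to
# zero; only its relative emphasis changes. The weighting curves below are
# invented for this sketch, not taken from any empirical source.

from dataclasses import dataclass


@dataclass
class Emphasis:
    modeling: float
    collection: float
    analysis: float
    production: float


def emphasis_at(progress: float) -> Emphasis:
    """Rough emphasis profile at a given point in the project."""
    return Emphasis(
        modeling=max(0.1, 1.0 - progress),                   # peaks at the start
        collection=max(0.1, 1.0 - abs(progress - 0.3) * 2),  # peaks about a third in
        analysis=max(0.1, 1.0 - abs(progress - 0.7) * 2),    # peaks about two thirds in
        production=max(0.1, progress),                       # peaks at the finish
    )


if __name__ == "__main__":
    for step in range(11):
        p = step / 10
        e = emphasis_at(p)
        print(f"{p:.1f}  model={e.modeling:.2f}  collect={e.collection:.2f}  "
              f"analyze={e.analysis:.2f}  produce={e.production:.2f}")
```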

Mental Modelling


Until intelligence becomes a process where machines exclusively speak only to other machines, the mental models carried around by intelligence professionals and the decisionmakers they support will be an inseparable part of the intelligence process.  While most intelligence professionals readily acknowledge the strengths and weaknesses of human cognition, one of the most important qualities, in my mind, of this model is that it embeds these strengths and weaknesses directly into the process and acknowledges the influence of the human condition on intelligence.

These mental models typically contain at least two kinds of information:  information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, a relatively high level of general knowledge about the world has been demonstrated to significantly improve forecasting accuracy across virtually any domain of knowledge, even highly specialized ones.  (Counter-intuitively, there is good evidence to suggest that a high degree of specialized knowledge, even within the domain under investigation, does not add significantly to forecasting accuracy.)

The flip side of this coin is psychological bias, which has a way of leading analysts astray without them even being aware of it.  An extensive overview of these topics is beyond the scope of this post but it is safe to say that, whether implicit or explicit, these models, containing what we know, what we think we need to know and how our minds will process all this information, emerge as the intelligence professional thinks about how best to answer the question.   

Typically, at the outset of the intelligence process, it is this modeling function that receives the most emphasis.  Figuring out how to think about the problem, understanding what kind of information needs to be collected and identifying key assumptions in both the questions and the model are necessary to some degree before the other functions can begin in earnest.  This is especially true with a new or particularly complex requirement.  Furthermore, this modeling function is often informal or even implicit.  It is rare, in current practice, to see the mental model on which collection is planned and analysis conducted made explicit.  This is unfortunate since making the model explicit has been shown, if done properly, to accelerate the other sub-processes, limit confusion within a team and produce more accurate forecasts.

Modeling should go on throughout the entire intelligence process, however.  As new information comes in or analysis gets produced, the model may well grow, shrink or morph as the concepts and the relationships between those concepts become clearer.  At some point (typically early) in the intelligence process, though, the emphasis shifts away from modeling and towards collecting, analyzing and producing.  While mental modeling never becomes unimportant, it does begin to lose relative emphasis as less time is devoted to modeling and more to the other three functions.

Collection
 
Typically, the next sub-process to take precedence is collection.  Again, as with modeling, collection begins almost as soon as a rudimentary requirement forms in the mind of the intelligence professional.  People naturally begin to draw on their own memories and, if the question is complicated enough, begin to look for additional information to answer the question.  With more complex questions, where the information needs are clearly greater, the intelligence professional may even draw up a collection plan and task others to collect information in order to help address the requirement.

Collection, like modeling, never stops.  Intelligence professionals will continue to collect information relevant to the particular requirement right up to the day the final product is published.  In fact, collection on a particularly difficult problem (i.e. almost all of them) will often continue after publication.  Decisionmakers and analysts alike want to know whether their key assumptions were correct and how accurate the final product was, and both understand the need to continue to track particularly important requirements over time.

All that said, collection does tend to lose importance relative to other functions over time.  Economists call this diminishing returns: as a general rule, collection efforts, considered across the entire spectrum from no knowledge about a subject to the current level of knowledge about it, typically add less and less genuinely new information over time.  Again, this is not to say that collection becomes unimportant; it is simply a reflection of the fact that the other processes tend to increase in importance relative to collection at some point in the process.
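
To make the intuition concrete, here is a toy sketch (my own illustration, not drawn from any study) in which the cumulative "new" information gained from collection follows a logarithmic curve, so each additional unit of effort yields a smaller marginal gain than the one before.  The specific function and numbers are assumptions chosen purely for illustration.

```python
# Toy illustration of diminishing returns from collection: cumulative "new"
# information is modeled as a logarithmic function of effort, so each extra
# unit of effort adds less than the previous one. The curve and numbers are
# assumptions for illustration, not measurements.

import math


def cumulative_new_information(effort: float) -> float:
    """Total genuinely new information gathered after a given amount of effort."""
    return math.log1p(effort)


if __name__ == "__main__":
    previous = 0.0
    for effort in range(1, 6):
        total = cumulative_new_information(effort)
        print(f"effort={effort}: total={total:.2f}, marginal gain={total - previous:.2f}")
        previous = total
```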

Analysis

The next sub-process to take precedence is analysis.  As with both modeling and collection, analysis begins almost immediately.  Tentative answers leap to mind and, in simple cases or where time is a severe constraint, these initial responses may have to do.  Analysis doesn’t really move to the forefront, however, until the requirement is understood and enough collection has taken place for the analyst to sense that adequate information exists to begin to go beyond tentative analyses and take a crack at answering the overall question or questions.

Analysis is where the raw material of intelligence, information, gets turned into products that address the decisionmaker’s requirements.  It is also the task most fraught with difficulties.  From the type of information used (typically unstructured) to the methods used to analyze this information to the form of the final product, analysts face enormous practical and psychological difficulties.  While the goal is clear – reduce the decisionmaker’s level of uncertainty – the best ways to get there are often unclear or rely on untested or poorly tested methods. 

Production

The final sub-process is production (which, for our purposes here, also includes dissemination).  As with all the other functions, it, too, begins on day one.  It is clearly, however, the least important function at the outset of the intelligence process.  Still, intelligence professionals do give some thought (and experienced professionals have learned to give more than a little thought) to the form and nature of the final product right from the beginning of the process.

Requirements typically come with an implied or explicit “deliverable” associated with them.  Is the answer to the intelligence requirement, for example, to be in the form of a briefing or a written report?  Knowing this at the outset helps the intelligence professionals tasked with answering the requirement to plan and to identify items along the way that will make the production of the final product easier.  For example, knowing that the final product is to be a briefing gives the intelligence professionals associated with the project time to identify relevant graphics during the project rather than going back and finding such graphics at the last minute.  Likewise, if the final product is to be a written document, the time necessary to write and edit such a product might be substantial and this, in turn, would need to be factored into the planning process.

Production is an incredibly important but often under-appreciated function within the intelligence process.  If intelligence products are not accessible, i.e. packaged with the decisionmaker in mind, then they are unlikely to be read or used.  Under such circumstances, all of the hard work done by intelligence professionals up to this point is wasted.  On the other hand, there is a fine line between making a document or other type of intelligence report accessible and selling a particular position or way of thinking about a problem.  Intelligence professionals have to steer clear of those production methods and “tricks” that can come across as advertising or advocacy.  Production values should not compromise the goal of objectivity.

Likewise, some intelligence professionals associate high production values with pandering to the decisionmaker.  These professionals see adding multimedia, graphics, color and other design features to an intelligence product to be unnecessary “chrome” or “bling”.  These professionals, many from earlier generations, think that intelligence products “should stand on their own” and that the ease with which such “tricks” are used in modern production is not an excuse to deviate from time-honored traditions in production. 

The guiding principle here, of course, is not what the intelligence professional thinks but what the decisionmaker the intelligence professional is supporting thinks.  Some decisionmakers will, of course, prefer their intelligence products in a simple text-based format.  Others, including many business professionals, will want less text and more supporting data, including charts and graphs.  Some (and the demand for this may well increase in the future) will want their reports in a video format for use on their personal multimedia device. 

Intelligence professionals in general, then, will need to have a wider variety of production skills in the future and, while production concerns do not take precedence until closer to the end of the project, the need to think about them at some level permeates the entire project.

Next:  The Whole Picture

Monday, June 6, 2011

Part 10 -- The New Intelligence Process (Let's Kill The Intelligence Cycle)

 
All of the examples examined in the previous sections are really just hypotheses, or guesses, about how the intelligence process works (or should work).  All are based on anecdotal descriptions of the intelligence process as currently conducted solely within the US national security community.  

Few of the models attempted to broaden their applicability to either the business or law enforcement sectors.  Very few of these models are based on any sort of systematic, empirically based research so, even if they more or less accurately describe how intelligence is done today, it remains unclear if these models are the best that intelligence professionals can do. 

Other fields routinely modify and improve their processes in order to remain more competitive or productive.  The traditional model of the intelligence process, the intelligence cycle, has, however, largely remained the same since the 1940's despite the withering criticisms leveled against it and, in a few cases, attempts to completely overthrow it.  

While some might see the cycle's staying power as a sign of its strength, I prefer to see its lack of value to decisionmakers, its failure to shed much (if any) light on how intelligence is actually done and the various intelligence communities' inability even to define the cycle consistently as hallmarks of what is little more than a very poor answer to the important -- and open -- theoretical question:  "What is the intelligence process?"

It is to resolving this question that I will devote the remaining posts in this series.

Next:  The First Picture

Thursday, June 2, 2011

Part 9 -- Departures From The Intelligence Cycle (Let's Kill The Intelligence Cycle)


Other authors have proposed, however, radically different versions of the intelligence process, overthrowing old notions in an attempt to more accurately describe how intelligence is done in the real world.  

The first of these attempts, by longtime academic and former CIA officer Arthur Hulnick, was the Intelligence Matrix.  Hulnick believed that intelligence was better described in terms of a matrix (see image below).  For Hulnick, there were three main activities, parts of which, in many cases, occurred at the same time.  These three “pillars” were collection, production, and support and services.  Hulnick's model, while capturing more of the functions of intelligence, does not seem to provide much guidance on how to actually do intelligence.

Peter Pirolli and Stuart Card of the Palo Alto Research Center also attempted to re-define the intelligence process (see image below).  This re-definition has gained some traction outside of the intelligence community.  While much more complex than the cycle and typically perceived as a departure from it, Pirolli and Card's sensemaking loop is still both very sequential and very circular -- with all the limits that implies.

Probably the most recent and most successful move away from the intelligence cycle, however, has been Robert Clark’s target-centric approach to intelligence analysis (see image below).  What makes Clark unique in many respects is that he is not merely attempting to describe the current intelligence process; he is attempting to examine how intelligence should be done.

Clark expressly rejects the intelligence cycle and advocates a more inclusive approach, one that includes all of the “stakeholders”, i.e. the individuals and organizations potentially affected by the intelligence produced.  Clark claims that, to include these stakeholders, “the cycle must be redefined, not for the convenience of implementation in a traditional hierarchy but so that the process can take full advantage of evolving information technology and handle complex problems.”

Clark calls this a “target-centric approach” because “the goal is to construct a shared picture of the target, from which all participants can extract the elements they need to do their jobs and to which all can contribute from their resources or knowledge.”  This approach does a very good job of describing a healthy relationship between the intelligence professional and the decisionmaker he or she supports.

This description of the way intelligence should work seems to fit well with at least some of the initiatives pursued by the US national security intelligence community.  The example of Intellipedia, discussed in an earlier post, seems particularly close to Clark’s vision.

What remains less clear is which came first.  Is Intellipedia a natural extension of Clark’s thinking or has Clark merely identified the value of a more inclusive, interactive, Intellipedia-like world?  Furthermore, beyond describing an ideal relationship between intelligence and decisionmakers, how does the intelligence product actually come about?  On this point, as with Hulnick, the model provides little guidance.

Next:  The New Intelligence Process

Wednesday, June 1, 2011

Part 8 -- Tweaking The Intelligence Cycle (Let's Kill The Intelligence Cycle)

A number of scholars and practitioners have attempted, over the years, to rectify the problems with the intelligence cycle.  While, from a theoretical standpoint, virtually all of these attempts have resulted in a more nuanced understanding of the intelligence process, none has caught on among intelligence professionals and none has been able to de-throne the intelligence cycle as the dominant image of how intelligence works.

These new schools of thought fall into two general patterns:  Those that tweak the intelligence cycle in order to bring it closer to reality and those that seek to overhaul the entire image of how intelligence works (which I will discuss tomorrow).

Several authors have sought to modify the intelligence cycle in order to create a more realistic image of how intelligence “really” works.  While some restructuring of the intelligence cycle is done within virtually every intelligence schoolhouse, the four authors most commonly discussed include Lisa Krizan, Gregory Treverton, Mark Lowenthal and Rob Johnston.  These authors seek to build upon the existing model in order to make it more realistic.

From:  Intelligence Essentials For Everyone
Krizan, in her 1999 monograph, Intelligence Essentials For Everyone, provides a slightly restructured view of the Intelligence Cycle (see image to the right) and, while quoting Douglas Dearth, states “These labels, and the illustration ..., should not be interpreted to mean that intelligence is a uni-dimensional and unidirectional process. ‘In fact, the [process] is multidimensional, multi-directional, and - most importantly - interactive and iterative.’”

From:  Reshaping National Intelligence 
Treverton, in Reshaping National Intelligence In An Age Of Information, outlines a slightly more ambitious version of the cycle.  In this adaptation, Treverton seeks to more completely include the decisionmaker in the process.  You can see a version of Treverton's cycle to the right.

Lowenthal, in his classic, Intelligence:  From Secrets To Policy, acknowledges the flaws of the traditional intelligence cycle, which he calls “overly simple”.  His version, reproduced below, demonstrates “that at any stage in the process it is possible – and sometimes necessary – to go back to an earlier step.  Initial collection may prove unsatisfactory and may lead policymakers to change the requirements; processing and exploitation or analysis may reveal gaps, resulting in new collection requirements; consumers may change their needs and ask for more intelligence.  And, on occasion, intelligence officers may receive feedback.”  Lowenthal's revised model, more than any other, seems to me to capture the fact that the intelligence process takes place in a time-constrained environment.
From:  Intelligence:  From Secrets To Policy

Perhaps the most dramatic re-visioning of the intelligence cycle, however, comes from anthropologist Rob Johnston in his book, Analytic Culture In The US Intelligence Community.  Johnston spent a year studying the analytic culture of the CIA in the time frame immediately following the events of September 11, 2001.  

His unique viewpoint resulted in an equally unique rendition of the traditional intelligence cycle, this time from a systems perspective.  This complicated vision (reproduced below) includes “stocks” or accumulations of information; “flows” or certain types of activity; “converters” that change inputs to outputs and “connectors”, which tie all of the other parts together.  

While, according to Johnston, “the premise that underlies systems analysis as a basis for understanding phenomena is that the whole is greater than the sum of its parts”, the subsequent model does not seek to replace the intelligence cycle but only to describe it more accurately:  “The elements of the Intelligence Cycle are identified in terms of their relationship with each other, the flow of the process and the phenomena that influence the elements and the flow.”

From:  Analytic Culture In The US Intelligence Community
While each of these models recognizes and attempts to rectify one or more of the flaws inherent in the traditional intelligence cycle, and each of the modified versions is a decided improvement on the original, none of them seeks to discard the fundamental vision of the intelligence process described by the cycle.

Next:   Departures From The Intelligence Cycle

Tuesday, May 31, 2011

Part 7 -- Critiques Of The Cycle: Cycles, Cycles And More Damn Cycles (Let's Kill The Intelligence Cycle)


While intelligence professionals often tout the intelligence cycle as something unique, to experienced business, law enforcement and national security decisionmakers it looks like many other linear decisionmaking processes with which they are already familiar.

Moreover, this familiarity has bred a certain amount of contempt as all of these disciplines are wrestling with re-defining their own processes in light of 21st century technology and systems thinking.  In short, the intelligence cycle not only fails in its attempt to explain the intelligence process but also comes across as an archaic sales pitch to a decisionmaker who is typically all too familiar with the flaws of linear process models.
http://home.ubalt.edu/ntsbarsh/opre640/opre640.htm

Every military officer, policeman or business student who has attended even relatively low level training in their profession is familiar with a model of decisionmaking that typically includes defining the question, collecting information relevant to the question, analyzing alternatives or courses of action, making a recommendation and then communicating or executing the recommendation (see image to the right).  

This model, of course, bears a striking resemblance to the “intelligence cycle”, a resemblance that may fool the uninformed but is unlikely to pass unnoticed by the decisionmakers that intelligence supports.  These decisionmakers, who are never blank slates and rarely outright fools, are also unlikely to accept such a simplistic explanation of the process unless doing so serves their own purposes or they simply don't care.

This, in turn, results in two negative consequences for intelligence.  First, decisionmakers will, at best, see intelligence as “nothing special”.  The process used appears, from their perspective, to be just a glorified decisionmaking process.  

At worst, however, decisionmakers will see the “intelligence cycle” as mere advertising puffery, a fancy way of talking about something which could, in their eyes, be defined much more simply using a linear process model (albeit an out-of-date one) with which they are already familiar.  Many private sector intelligence organizations have problems convincing the decisionmakers they support of the importance of intelligence.  Overemphasis on the value of the intelligence cycle, particularly when facing an educated decisionmaker, might well be part of the problem.

More insidiously, however, such a perception clouds the true role of intelligence in the decisionmaking process.  Decisionmakers, trained in and used to working with the decisionmaking process, will look for intelligence professionals to provide the same kinds of outputs – recommendations – as their process does.  

Intelligence, however, is focused externally, on issues relevant to the success or failure of the organization but fundamentally outside that organization's control.  Intelligence does best when it focuses on estimating the capabilities and limitations of those external forces, and worst when it attempts to make recommendations to operators, as the intelligence professional is generally less well informed than others in the organization about the capabilities and limitations of the parent entity.

In short, because the intelligence cycle creates the impression in the minds of many decisionmakers (particularly those unfamiliar with intelligence but well-educated in their own operational arts) that intelligence is “just like what I do”, only with a different name, the value of intelligence is more difficult to explain to decisionmakers than it needs to be.

Furthermore, once the decisionmakers think they understand the nature of intelligence, the way that nature has been communicated to them predisposes them to ask questions of intelligence that the intelligence professional is poorly positioned to answer.

Next:  Tweaking The Intelligence Cycle

Friday, May 27, 2011

Part 6 -- Critiques Of The Cycle: The Intelligence Cycle Vs. Reality (Let's Kill The Intelligence Cycle)

Part 1 -- Let's Kill The Intelligence Cycle
Part 2 -- "We'll Return To Our Regularly Scheduled Programming In Just A Minute..."
Part 3 -- The Disconnect Between Theory And Practice
Part 4 -- The "Traditional" Intelligence Cycle And Its History
Part 5 -- Critiques Of The Cycle:  Which Intelligence Cycle?
 
Were the lack of precision the only criticism of the intelligence cycle, it might be able to weather the storm. As suggested previously, there do appear to be general themes that are relevant, and the cycle’s continued existence suggests that its inconsistencies are outweighed, to some extent, by its simplicity.

Unfortunately, the second type of criticism typically leveled against the cycle is much more damning.  In fact, it is fatal.   Simply put, there is virtually no knowledgeable practitioner or theorist who claims that the cycle reflects, in any substantial way or in any sub-discipline, the reality of how intelligence is actually done.

Consider these quotes from some of the most authoritative voices in each of the three intelligence communities:

"When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle.  It is really not a very good description of the ways in which the intelligence process works."  Arthur Hulnick, "What's Wrong With The Intelligence Cycle", Strategic Intelligence, Vol. 1 (Loch Johnson, ed), 2007.

"Although meant to be little more than a quick schematic presentation, the CIA diagram [of the intelligence cycle] misrepresents some aspects and misses many others." -- Mark Lowenthal, Intelligence:  From Secrets to Policy (2nd Ed.,2003)

"We must begin by redefining the traditional linear intelligence cycle, which is more a manifestation of the bureaucratic structure of the intelligence community than a description of the intelligence exploitation process." -- Eliot Jardines, former head of the Open Source Center, in prepared testimony in front of Congress, 2005.

"The traditional intelligence cycle has been described as an "ideal-type" process that will always be subject to the real constraints of time." -- Jerry Ratcliffe, Strategic Thinking In Criminal Intelligence, 2004

"The classic intelligence cycle is neat, easily displayed, and quickly understood.  The problem is that it doesn't really work that way.  It's too static, too rigid, with too much distance between leaders and intelligence professionals."  -- T.J. Waters, Hyperformance:  Using Competitive Intelligence For Better Strategy and Execution, 2010

"Over the years, the intelligence cycle has become somewhat of a theological concept:  No one questions its validity.  Yet, when pressed, many intelligence officers admit that the intelligence process, 'really doesn't work that way.'" -- Robert Clark, Intelligence Analysis:  A Target-centric Approach, 2010.

Once you start looking for them, it is easy to find detailed critiques of the intelligence cycle (and, please, don't hesitate to add your own).  The only argument that still seems worth debating is whether or not the cost of maintaining this flawed model of the process is worth the benefit (a question about which readers of this blog were almost evenly split).

In addition to the quotes above, my colleague, Steve Marrin, provided me with an interesting update shortly after I started this series.  According to him, the intelligence cycle was the subject of "vigorous discussion" at a 2005 RAND/ODNI Conference on intelligence theory, and it will also be the subject of a panel at the 2012 International Studies Association Conference.  For a carefully crafted and articulate dissection of the intelligence cycle, I don't think I could recommend a better article than Steve's own chapter, "Intelligence Analysis and Decision-making:  Methodological Challenges," from the 2009 book, Intelligence Theory:  Key Questions and Debates.

Once again, themes emerge from the general discontent with the inadequacies of the intelligence cycle.  Many of these themes I will touch upon as I discuss alternatives to the intelligence cycle in later posts.  One theme, however, leaps off each page and tends to dominate the discussion:  The intelligence cycle is linear and intelligence, as practiced, is not.  Tasks move from one part of the cycle to another like an assembly line, where parts are bolted on in a specific order to create a consistent product.

While this approach might be appropriate for early 20th century manufacturers, it doesn’t work with intelligence, where each product, ideally, contains information that is somehow unique. Consider, for example, this hypothetical dialogue between Mary, the CEO of Acme Widgets and Joe, her chief of competitive intelligence:
Mary: I need to know everything there is to know about the Zed Widgets Company.
Joe: Sure. What’s up?
Mary: We are thinking about introducing a new widget and I want to know what the competition is up to.
Joe: Anything in particular you are interested in?
Mary: Well, I can see their marketing efforts on the TV every day, so I am not really interested in that. I guess the most important thing is their cost structure. I want to know how much it costs them to make their widgets and where those costs are.
Joe: Right. Labor, overhead, materials. Got it. Is one part of the cost structure more important than another to you?
Mary: They pay about the same amount in labor and overhead that we do, so I guess I am most interested in the materials, particularly Material X. That is our most expensive material.
Joe: I just read a report that indicated that the cost of material X is set to rise worldwide. Would you also like us to take a harder look at that and give you our estimate?
Mary: Absolutely.

While this example is simplistic, it makes the point. Intelligence, even in this one minor example within only one of the many parts of the traditional intelligence cycle, is (or should be, at least) interactive, simultaneous and iterative. In the above example, this interaction between the intelligence professional and the CEO resulted in a more detailed and nuanced intelligence requirement, going, as it did, from the very general “Tell me everything…” to the highly focused “Tell me about Zed Company’s Material X costs and give me an estimate of where the price of Material X is likely to go.”

It is equally easy to imagine this kind of interaction within and between parts of the cycle as well. Collectors and analysts will inevitably go back and forth as the analysts attempt to add depth to their reporting and as the collector develops new collection capabilities. It is even likely that parts of the cycle that are not adjacent to one another will work very closely together, such as an analyst and the briefer responsible for the final dissemination of the product (in its oral form). Decisionmakers, too, may well remain involved throughout the process, seeking status reports and perhaps even modifying the requirement as new information or preliminary analysis becomes available.

The US military's Joint Staff Publication 2.0, Joint Intelligence, states the case more strongly:
"In many situations, the various intelligence operations occur nearly simultaneous with one another or may be bypassed altogether. For example, a request for imagery will require planning and direction activity but may not involve new collection, processing, or exploitation. In this example, the imagery request could go directly to a production facility where previously collected and exploited imagery is reviewed to determine if it will satisfy the request. Likewise, during processing and exploitation, relevant information may be disseminated directly to the user without first undergoing detailed all-source analysis and intelligence production. Significant unanalyzed combat information must be simultaneously available to both the commander (for time-critical decision-making) and to the intelligence analyst (for production of current intelligence assessments). Additionally, the activities within each type of intelligence operation are conducted continuously and in conjunction with activities in each of the other categories of intelligence operations. For example, intelligence planning is updated based on previous information requirements being satisfied during collection and upon new requirements being identified during analysis and production."

The situation is even more complex when you imagine an intelligence unit without teams of people working each of the discrete parts of the cycle. In situations involving small intelligence shops, where a single individual collects, processes, translates, analyzes, formats and produces the intelligence, the cycle breaks down completely.

The human mind simply does not work in this strictly linear fashion. Instead, it jumps from task to task. Imagine your own habits when researching a topic. You think a bit, search a bit, get some information, integrate that into the whole and then search some more. This approach inevitably leads to analytic dead ends, requiring more collection. At the same time, you are thinking about the form of the final report. If you are putting together an intelligence product that will use multimedia in its final form, for example, you are constantly on the lookout for relevant graphics or film footage you can use, regardless of its analytic value. To even suggest that you should collect all of your information, stop, and then go and do analysis without ever doing any further collection, is absurd.
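
For what it is worth, the loop described above is easy to caricature in a few lines of code. This is a hypothetical sketch of my own, with invented helper names and a fixed round limit standing in for the analyst's judgment that the picture is good enough; its only point is that thinking, collecting and integrating interleave rather than running once each in sequence.

```python
# Hypothetical sketch: research as an interleaved loop of thinking, searching
# and integrating, rather than a single linear pass. Names and the stopping
# rule are invented for illustration.


def iterative_research(question: str, max_rounds: int = 3) -> list[str]:
    """Alternate between tentative analysis and further collection."""
    notebook: list[str] = []
    for round_number in range(1, max_rounds + 1):
        # Think a bit: form or revise a tentative answer.
        notebook.append(f"round {round_number}: tentative answer to '{question}'")
        # Search a bit: collect material prompted by that tentative answer.
        notebook.append(f"round {round_number}: new material gathered on '{question}'")
        # Integrating the new material typically exposes gaps, which sends the
        # analyst back to collection -- exactly what a one-pass model forbids.
    return notebook


if __name__ == "__main__":
    for entry in iterative_research("Zed Company's Material X costs"):
        print(entry)
```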

One of the most recent and widely publicized innovations within the US national security community is the advent of “Intellipedia”, a Wikipedia-like tool for the intelligence community. Wikipedia, of course, is the online encyclopedia that is free to use and editable by anyone. It is one of the most popular sites on the web and, according to at least some research, is as accurate as other generally accepted encyclopedias. It has become, in its short lifespan, the tertiary source of first resort for both analysts and academics.

One of the things it is not is linear. There is no "Table of Contents" and researchers, authors and editors choose their own path through the resource.  Some people generate full articles; others only dive in occasionally to fix a particular fact or even a grammatical or spelling error. There are even full-fledged “edit wars” where a particular version of an especially hot topic changes back and forth between competing points of view until either one side gets tired and gives up or, more likely, the sides reach a version acceptable to all. In the end, it is openness and interactivity that give Wikipedia its strength.

The US national security community acknowledged the value of such a tool, at least with respect to its descriptive products, when it launched Intellipedia. Begun in April, 2006, Intellipedia, according to information from June, 2010, now has 250,000 registered users and is accessed over 2 million times per week. This effort, which is clearly far beyond the experimental stage, plainly shows that collaboration and interactivity -- the anti-intelligence cycle -- are core to any modern description of the intelligence process.

Next:  Cycles, Cycles And More Damn Cycles!