Friday, March 23, 2012
Part 13 -- The Whole Picture (Let's Kill The Intelligence Cycle)
Part 10 -- The New Intelligence Process
Part 11 -- The New Intelligence Process: The First Picture
Part 12 -- The New Intelligence Process: The Second Picture
In the end, whether you accept this new model of the intelligence process or not, it is clear that the hoary image of the intelligence cycle needs to be put to rest. Whether you would do that with full honors or, as I advocate, with the use of explosives, is irrelevant. The cycle, as should be clear by now, needs to go.
To summarize, the cycle fails on at least three counts: we cannot define what it is and what it isn't, it does not match the way intelligence actually works in the 21st century, and it does not help us explain our processes to the decisionmakers we support. Efforts to fix these flaws have not worked and, furthermore, all of this is widely recognized by those who have studied the role and impact of the cycle.
In addition, the community of intelligence professionals (and I include academics who study intelligence in this group) will have to be the ones to lay the cycle to rest. Not only does no one else care, but also the community of intelligence professionals has, as the WMD report noted, "an almost perfect record of resisting external recommendations."
Yes, the interregnum will be difficult. The decisionmakers we support, the professionals with whom we work and the students we teach will all ask -- and deserve -- good answers. These answers will come slowly at first. In fact, at the outset, we may only be able to "teach the controversy", as it were.
Hopefully, over time, the need for a new vision of the intelligence process will drive intellectual curiosity and, through the iterative process of creation and destruction, something more robust will emerge: an improved model that will stand the tests of the next 60 years. While I have clearly already placed my bets in this regard, I will be happy if the community of intelligence professionals merely recognizes the need to move beyond its historical constraints, accepts this siren's call for what it is, plugs its ears and sails off in a new direction - any direction.
Because anything would be better than continuing to pretend that the world has not really changed since the 1940s. Anything would be better than continuing to spend countless wasted hours explaining and attempting to justify something that should have been retired long ago. Anything, in short, would be better than continuing to lie to ourselves.
Posted by Kristan J. Wheaton at 2:16 PM | 0 comments | Labels: experimental scholarship, intelligence, intelligence cycle, Let's Kill The Intelligence Cycle, LKTIC, original research
Wednesday, March 21, 2012
Part 12 -- The New Intelligence Process: The Second Picture (Let's Kill The Intelligence Cycle)
Part 10 -- The New Intelligence Process
Part 11 -- The New Intelligence Process: The First Picture
(Note: I started this series of posts many months ago with the intent of completing it in short order. Life, as it so often does, got in the way... If you are new to the series or you have forgotten what the excitement was all about, I recommend beginning at the beginning. For the rest of you, thank you for your patience!)
At the highest level, intelligence clearly supports the decisionmaking process. Understanding this is a first step to understanding what drives intelligence requirements and what defines good intelligence products. This is the message of the first picture.
But what about the details? Broad context is fine as far as it goes, but how should the modern intelligence professional think about the process of getting intelligence done? The second picture is designed to answer these questions.
[Image: The Second Picture]
The single most important thing to notice about this image is that it imagines intelligence as a parallel rather than a sequential process. In this image of the process, there are four broad themes, or sub-processes, moving across time from a nebulous start to a fuzzy finish, with each theme rising to a high point of emphasis at a different point in the process. The image is also intended to convey that each theme constantly interacts with the other three, influencing them even as it is influenced by them, at every point in time.
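Since the original image may not survive reproduction here, the gist can be sketched in a few lines of Python. This is a toy illustration only: the bell-curve shapes and the peak positions are my assumptions about the picture's general form, not a formal model.

```python
# A toy rendering of the "second picture": four parallel sub-processes whose
# emphasis rises and falls over the life of a single requirement. The curve
# shapes and peak positions are illustrative assumptions, not the model itself.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 1.0, 200)  # nebulous start (0) to fuzzy finish (1)

def emphasis(peak, width=0.22):
    """Relative emphasis of a sub-process that peaks at time `peak`."""
    return np.exp(-((t - peak) ** 2) / (2 * width ** 2))

themes = {
    "Modeling": 0.15,    # dominates early, never stops entirely
    "Collection": 0.40,
    "Analysis": 0.65,
    "Production": 0.90,  # least important at the outset, peaks at the end
}

for name, peak in themes.items():
    plt.plot(t, emphasis(peak), label=name)

plt.xlabel("Time (receipt of requirement to dissemination)")
plt.ylabel("Relative emphasis")
plt.title("Four parallel sub-processes (illustrative)")
plt.legend()
plt.show()
```

Each curve overlaps all the others at every point in time, which is the whole point of the picture: no sub-process ever fully stops while the others run.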
Let me anticipate an initial objection to this picture -- that the intelligence process has a "start" and a "finish". The intelligence function, to be sure, is an ongoing one and this was one of the implied lessons of the first picture. Having made that point there, here I think it is important to focus on how intelligence products are actually generated. In this respect, clearly, there is a point at which a question (an intelligence requirement) is asked. It may be indistinct, poorly formed or otherwise unclear, but the focus of an intelligence effort does not exist in any meaningful way until there is a question that is, in some way, relevant to the decisionmaking process the intelligence unit supports.
Likewise, there is a finish. It may take place in an elevator or in a formal brief, in a quick email or in a 50-page professionally printed and bound document, but answering those questions, i.e. the dissemination of the intelligence product in whatever form, signifies the end of the process. Yes, this process then begins immediately anew with new questions, and yes, there are always multiple questions being asked and answered simultaneously, but neither observation invalidates the general model.
Mental Modeling
Modeling should go on throughout the entire intelligence process. As new information comes in or analysis gets produced, the model may well grow, shrink or morph as the concepts and the relationships between those concepts become clearer. At some point (typically early) in the intelligence process, however, the emphasis shifts away from modeling and towards collecting, analyzing and producing. Mental modeling never becomes unimportant, but it does recede as less time is devoted to modeling and more to the other three functions.
Collection, like modeling, never stops. Intelligence professionals will continue to collect information relevant to the particular requirement right up to the day the final product is published. In fact, collection on a particularly difficult problem (i.e. almost all of them) will often continue after publication. Decisionmakers and analysts alike want to know whether their key assumptions were correct and how accurate the final product was, and both understand the need to continue tracking particularly important requirements over time.
The next sub-process to take precedence is analysis. As with both modeling and collection, analysis begins almost immediately. Tentative answers leap to mind and, in simple cases or where time is a severe constraint, these initial responses may have to do. Analysis doesn’t really move to the forefront, however, until the requirement is understood and enough collection has taken place for the analyst to sense that adequate information exists to begin to go beyond tentative analyses and take a crack at answering the overall question or questions.
Analysis is where the raw material of intelligence, information, gets turned into products that address the decisionmaker’s requirements. It is also the task most fraught with difficulties. From the type of information used (typically unstructured) to the methods used to analyze this information to the form of the final product, analysts face enormous practical and psychological difficulties. While the goal is clear – reduce the decisionmaker’s level of uncertainty – the best ways to get there are often unclear or rely on untested or poorly tested methods.
The final sub-process is production (which, for our purposes here, also includes dissemination). As with all the other functions, it, too, begins on day one. It is clearly, however, the least important function at the outset of the intelligence process. Still, intelligence professionals do give some thought (and experienced professionals have learned to give more than a little thought) to the form and nature of the final product from the very beginning of the process.
Production is an incredibly important but often under-appreciated function within the intelligence process. If intelligence products are not accessible, i.e. packaged with the decisionmaker in mind, then they are unlikely to be read or used. Under such circumstances, all of the hard work done by intelligence professionals up to this point is wasted. On the other hand, there is a fine line between making a document or other type of intelligence report accessible and selling a particular position or way of thinking about a problem. Intelligence professionals have to steer clear of those production methods and “tricks” that can come across as advertising or advocacy. Production values should not compromise the goal of objectivity.
Likewise, some intelligence professionals associate high production values with pandering to the decisionmaker. These professionals see the addition of multimedia, graphics, color and other design features to an intelligence product as unnecessary “chrome” or “bling”. These professionals, many from earlier generations, think that intelligence products “should stand on their own” and that the ease with which such “tricks” are used in modern production is not an excuse to deviate from time-honored traditions in production.
The guiding principle here, of course, is not what the intelligence professional thinks but what the decisionmaker the intelligence professional is supporting thinks. Some decisionmakers will, of course, prefer their intelligence products in a simple text-based format. Others, including many business professionals, will want less text and more supporting data, including charts and graphs. Some (and the demand for this may well increase in the future) will want their reports in a video format for use on their personal multimedia device.
Posted by Kristan J. Wheaton at 2:45 PM | 1 comment | Labels: experimental scholarship, intelligence, intelligence cycle, Let's Kill The Intelligence Cycle, LKTIC, original research
Monday, June 6, 2011
Part 10 -- The New Intelligence Process (Let's Kill The Intelligence Cycle)
Few of the models attempted to broaden their applicability to either the business or law enforcement sectors. Very few of these models are based on any sort of systematic, empirically based research, so even if they more or less accurately describe how intelligence is done today, it remains unclear whether these models are the best that intelligence professionals can do.
While some might see the cycle's staying power as a sign of its strength, I prefer to see its lack of value to decisionmakers, its failure to shed any real light on how intelligence is actually done and the various intelligence communities' inability even to define the cycle consistently as hallmarks of what is little more than a very poor answer to the important -- and open -- theoretical question: "What is the intelligence process?"
It is to resolving this question that I will devote the remaining posts in this series.
Posted by Kristan J. Wheaton at 3:16 PM | 0 comments | Labels: experimental scholarship, intelligence, intelligence cycle, Let's Kill The Intelligence Cycle, LKTIC, original research
Wednesday, June 1, 2011
Part 8 -- Tweaking The Intelligence Cycle (Let's Kill The Intelligence Cycle)
A number of scholars and practitioners have attempted, over the years, to rectify the problems with the intelligence cycle. While, from a theoretical standpoint, virtually all of these attempts have resulted in a more nuanced understanding of the intelligence process, none has caught on among intelligence professionals and none has been able to dethrone the intelligence cycle as the dominant image of how intelligence works.
These new schools of thought fall into two general patterns: those that tweak the intelligence cycle in order to bring it closer to reality and those that seek to overhaul the entire image of how intelligence works (which I will discuss tomorrow).
Several authors have sought to modify the intelligence cycle in order to create a more realistic image of how intelligence “really” works. While some restructuring of the intelligence cycle is done within virtually every intelligence schoolhouse, the four authors most commonly discussed are Lisa Krizan, Gregory Treverton, Mark Lowenthal and Rob Johnston. All four seek to build upon the existing model in order to make it more realistic.
[Image: From Intelligence Essentials For Everyone]
[Image: From Reshaping National Intelligence]
Lowenthal, in his classic Intelligence: From Secrets To Policy, acknowledges the flaws of the traditional intelligence cycle, which he calls “overly simple”. His version, reproduced below, demonstrates “that at any stage in the process it is possible – and sometimes necessary – to go back to an earlier step. Initial collection may prove unsatisfactory and may lead policymakers to change the requirements; processing and exploitation or analysis may reveal gaps, resulting in new collection requirements; consumers may change their needs and ask for more intelligence. And, on occasion, intelligence officers may receive feedback.” Lowenthal's revised model, more than any other, seems to me to capture the fact that the intelligence process takes place in a time-constrained environment.
[Image: From Intelligence: From Secrets To Policy]
Perhaps the most dramatic re-visioning of the intelligence cycle, however, comes from anthropologist Rob Johnston in his book, Analytic Culture In The US Intelligence Community. Johnston spent a year studying the analytic culture of the CIA in the time frame immediately following the events of September 11, 2001.
His unique viewpoint resulted in an equally unique rendition of the traditional intelligence cycle, this time from a systems perspective. This complicated vision (reproduced below) includes “stocks”, or accumulations of information; “flows”, or certain types of activity; “converters”, which change inputs to outputs; and “connectors”, which tie all of the other parts together.
While, according to Johnston, “the premise that underlies systems analysis as a basis for understanding phenomena is that the whole is greater than the sum of its parts”, the subsequent model does not seek to replace the intelligence cycle but only to describe it more accurately: “The elements of the Intelligence Cycle are identified in terms of their relationship with each other, the flow of the process and the phenomena that influence the elements and the flow.”
[Image: From Analytic Culture In The US Intelligence Community]
Next: Departures From The Intelligence Cycle
Posted by Kristan J. Wheaton at 2:45 PM | 0 comments | Labels: experimental scholarship, intelligence, intelligence cycle, Let's Kill The Intelligence Cycle, LKTIC, original research
Tuesday, May 31, 2011
Part 7 -- Critiques Of The Cycle: Cycles, Cycles And More Damn Cycles (Let's Kill The Intelligence Cycle)
Part 2 -- "We'll Return To Our Regularly Scheduled Programming In Just A Minute..."
Part 3 -- The Disconnect Between Theory And Practice
Part 4 -- The "Traditional" Intelligence Cycle And Its History
Part 5 -- Critiques Of The Cycle: Which Intelligence Cycle?
[Image: http://home.ubalt.edu/ntsbarsh/opre640/opre640.htm]
Every military officer, policeman or business student who has attended even relatively low-level training in their profession is familiar with a model of decisionmaking that typically includes defining the question, collecting information relevant to the question, analyzing alternatives or courses of action, making a recommendation and then communicating or executing the recommendation (see the image above).
The intelligence cycle looks, to the casual observer, very much like this generic model, and that resemblance, in turn, results in two negative consequences for intelligence. First, decisionmakers will, at best, see intelligence as “nothing special”. The process used appears, from their perspective, to be just a glorified decisionmaking process.
More insidiously, however, such a perception clouds the true role of intelligence in the decisionmaking process. Decisionmakers, trained in and used to working with the decisionmaking process, will look for intelligence professionals to provide the same kinds of outputs – recommendations – as their process does.
In short, because the intelligence cycle creates the impression in the minds of many decisionmakers (particularly those unfamiliar with intelligence but well-educated in their own operational arts) that intelligence is “just like what I do”, only with a different name, the value of intelligence is more difficult to explain than it needs to be.
Next: Tweaking The Intelligence Cycle
Posted by Kristan J. Wheaton at 12:46 PM | 0 comments | Labels: experimental scholarship, intelligence, intelligence cycle, Let's Kill The Intelligence Cycle, LKTIC, original research
Friday, July 2, 2010
Part 8 -- What Else Did You Learn? (Teaching Strategic Intel Through Games)
Many students have provided excellent feedback for improving the course. The single most requested ‘tweak’ was, surprisingly, to include more games like Defiant Russia. The old-school boardgame with its dice, hex maps and counters seemed to encourage a thoughtful, collaborative (at least among the players on each team) learning experience.
In addition, the idea of replaying history was clearly appealing to many of the students. Only one of the students had played anything similar prior to this class, and it was unclear whether any would voluntarily play something like Defiant Russia again, but the overwhelmingly positive response to the game in the feedback suggests that there is still a place for these types of games in educational environments.
Another goal, however, was to lock in knowledge important to the practice of strategic intelligence. This kind of learning requires reflection and reflection takes time. Clearly, the right answer lies in properly balancing these competing goals. How to do that in the context of a specific syllabus is the real question and one that I will spend the next several months pondering.
Next: Wrapping it up
Posted by Kristan J. Wheaton at 10:57 AM | 0 comments | Labels: Board game, experimental scholarship, Game based learning, intelligence, strategic intelligence
Thursday, July 1, 2010
Part 7 -- What Did The Students Think About It? (Teaching Strategic Intel Through Games)
The SIRs actually measure a number of variables, and those most closely associated with the underlying pedagogy of a course are difficult to identify. Instead, I chose to look at just one of the SIR-generated ratings, the Overall Evaluation of the course. This is clearly designed to be an overall indicator of effectiveness. A large change here (in either the positive or the negative direction) would seem to be a clear indication of success or failure from the student's perspective.
Furthermore, my assumption at the beginning of the course was that there would be a large change in one direction or the other. I assumed that students would either love this approach or hate it and that this would be reflected in the SIR results. The chart below, which contains the weighted average of the Overall Evaluation score (1-5 with 5 being best) for all classes taught in a particular year, indicates that I was wrong:
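For readers who want to see how such a statistic comes together, a minimal sketch of the computation follows. Weighting each class's score by its enrollment is my assumption (the post does not specify the weights), and the numbers are placeholders, not the actual SIR data.

```python
# Weighted average of the SIR "Overall Evaluation" score (1-5) per year.
# Enrollment weighting is an assumption; scores and counts are placeholders.
from collections import defaultdict

# (year, overall_evaluation, students_enrolled) for each class section
sections = [
    (2008, 4.2, 25),
    (2008, 3.9, 18),
    (2009, 4.4, 30),
    (2009, 4.0, 22),
]

totals = defaultdict(lambda: [0.0, 0])  # year -> [weighted sum, students]
for year, score, n in sections:
    totals[year][0] += score * n
    totals[year][1] += n

for year in sorted(totals):
    weighted_sum, n = totals[year]
    print(f"{year}: {weighted_sum / n:.2f}")
```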
Posted by Kristan J. Wheaton at 12:36 PM | 0 comments | Labels: education, experimental scholarship, Game based learning, Games, intelligence, Learning, Methods and Theories, Research, strategic intelligence
Wednesday, June 30, 2010
Part 6 -- So, How Did It All Work Out? (Teaching Strategic Intel Through Games)
While mostly anecdotal, the available evidence suggests that students significantly increased their ability to see patterns and connections buried deeply in unstructured data sets, my first goal. This was particularly obvious in the graduate class where I required students to jot down their conclusions prior to class.
Week 2 Response – “This game (The Space Game: Missions) was predominantly about budgets and space allocation…. Strategy and forethought go into where you place lasers, missile launchers and solar stations, so that you don’t run out of minerals to power those machines and so repair stations are available for those that are rundown. It’s clear that resource and space allocation are key elements for a player to win this game, just as it is for the Intelligence Community and analysts to win a war.”
Week 8 Response: “I think if Chess dropped acid it’d become the Thinking Machine. When the computer player was contemplating its next move colorful lines and curves covered the board… To me, Chess was always a one-on-one game; a conflict, if you will, between black versus white… Samuel Huntington states up front that he believes that conflict will no longer be about Princes and Emperors expanding empires or influencing ideologies, but rather about conflicts among different cultures: “The great divisions among humankind and the dominating source of conflict will be cultural.” Civilizations and cultures are not black and white, however; they’re not defined by nation-state borders. There are colors and nuances in culture requiring a change in mindset and in strategy to approach these new problems.”
Note: There is a widespread belief among intelligence professionals that teaching critical thinking methods will improve intelligence analysis (See David Moore’s comprehensive examination of this thesis in his book Critical Thinking And Intelligence Analysis). A minority of authors are less willing to jump on this particular bandwagon (See Behavioral and Psychosocial Considerations in Intelligence Analysis: A Preliminary Review of Literature on Critical Thinking Skills by the Air Force Research Laboratory, Human Effectiveness Division) citing a lack of empirical evidence pointing to a correlation between critical thinking skills and improved analysis.
While such a list may not be perfect, there is certainly nothing on it that is inconsistent with good intelligence practice. Likewise, when reading the representative example above with these criteria in mind, the increase in nuance, the willingness to challenge an acknowledged authority, the nimble leaps from one concept to another all become even more obvious. The growth evident in the second example is even more impressive when you consider that the Huntington reading was not required. The majority of the students in the class showed this kind of growth over the course of the term both in the quality of the classroom discussions and in their written reports.
In addition to seeing an improvement in students’ ability to detect deep patterns in complex and disparate data sets, I also wanted that increased ability to translate into better quality intelligence products for the decisionmakers who were sponsoring projects in the class.
Here the task was somewhat easier. I have solicited and received good feedback from each of the decisionmakers involved in the 78 strategic intelligence projects my students have worked on over the last 7 years. This feedback, leavened with a sense of the cognitive complexity of the requirement, yields a rough but useful assessment of how “good” each final project turned out to be.
Mapping this overall assessment onto a 5-point scale (where a 3 indicates average “A” work, a 2 and a 1 indicate below and well below “A” work respectively, a 4 indicates “A+” or young-professional work, and a 5 indicates professional-quality work) permits a comparison of the average quality of the work across various years.
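Written out as a simple lookup, the scale looks like this; the per-project scores in the example are hypothetical placeholders, not the actual assessments from the class.

```python
# The 5-point project-quality scale described above, as a lookup table.
quality_scale = {
    1: 'well below "A" work',
    2: 'below "A" work',
    3: 'average "A" work',
    4: '"A+" / young professional work',
    5: 'professional quality work',
}

# Hypothetical per-project scores for two years (placeholder data only).
scores = {2009: [3, 3, 2, 4, 3], 2010: [4, 3, 5, 4, 4]}

for year, vals in sorted(scores.items()):
    avg = sum(vals) / len(vals)
    print(f"{year}: average quality {avg:.2f} ({quality_scale[round(avg)]})")
```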
Note: “A” is average for the Mercyhurst Intelligence Studies seniors and second-year graduate students permitted to take Strategic Intelligence. Because graduates must compete for jobs in this highly competitive field, the program requires students to maintain a cumulative 3.0 GPA simply to stay in the program and encourages them to maintain a 3.5 or better. In addition, the major is widely considered to be “challenging”, and those who do not see themselves in a career of intelligence analysis often, upon reflection, change majors. As a result, the GPAs of the seniors and second-year graduate students who remain with the program are often quite high. The graduating class of 2010, for example, averaged a 3.66 GPA.
There are, to be sure, a number of possible reasons to explain the surge in quality evidenced by the most recent year group. The students could be naturally better analysts, the quality of instruction leading up to the strategic course could have dramatically improved, the projects could have been simpler or the results could be a statistical artifact.
None of these explanations, in my mind, however, holds up. While additional statistical analysis has yet to be completed, the hypothesis that games-based learning improves the quality of an intelligence product appears to have some validity and is, at least, worthy of further exploration.
My third goal for a games-based approach was to better lock in those ideas that would likely be relevant to future strategic intelligence projects attempted by the students, most likely after graduation. To get some sense of whether the games-based approach was successful in this regard, I sent each of the students in the three classes a letter requesting their general input regarding the class along with any suggestions for change or improvement. I sent these letters approximately five months after the undergraduate classes had finished and approximately two and a half months after the end of the graduate class.
Seventeen of the 75 students (23%) who took one of the three courses responded to the email and a number of students stopped by to speak to me in person. In the end, over 40% of those who took the class responded to my request for feedback in one way or another. This evidence, while still anecdotal, was consistent – games helped the students remember the concepts better.
Comments such as, “Looking back, I can remember a lot of the concepts simply because the games remind me of them” or “I am of the opinion that the only reason that the [lessons] stood out was because they were different from any other class most students have taken” were often mixed in with suggestions on how to improve the course. The verbal feedback was even more encouraging, with reports of discussions and even arguments centered on the games and their “meaning” weeks and months after the course was completed.
The evidentiary record, in summary, is clearly incomplete but encouraging. Games-based learning appears to have increased intelligence students' capacity for sensemaking, to have improved the results of their intelligence analysis and to have allowed the lessons learned to persist, even encouraging new exploration of strategic topics months after the course had ended.
Next: What did the students think about it?
Posted by Kristan J. Wheaton at 12:29 PM | 2 comments | Labels: Critical thinking, experimental scholarship, Game based learning, intelligence analysis, Intelligence Community, United States Intelligence Community