Friday, May 20, 2011
Let's Kill The Intelligence Cycle (Original Research)
I mean it.
The "intelligence cycle", as a depiction of how the intelligence process works, is a WWII era relic that is way past its sell-by date. It has become toxic. It no longer informs as much as it infects. It is less a cycle than a cyclops -- ancient, ugly and destructive.
I want it dead and gone, crushed, eliminated.
I don't care, frankly, what we have to do. Remove it from every training manual, delete it from every slide, erase it from every website.
Shoot it with a silver bullet, drive a wooden stake through its heart, burn the remains without ceremony and scatter the ashes.
(Geez, Kris, why don't you tell us how you really feel...)
OK, OK, so, yes, I am being intentionally provocative, but I have been doing quite a bit of research on the intelligence process over the last several years and have come to the conclusion -- as have others before me -- that our current best depiction of this process, the so-called "intelligence cycle," is fatally flawed. Moreover, I believe these flaws have become so severe, so grievous, that continued adherence to and promotion of the cycle is actually counterproductive.
My intent, beginning on Monday and over the next several weeks, is to lay out the evidence I have gathered about the cycle itself, about attempts to save it from its worst flaws, about attempts to replace it altogether and let you decide for yourselves.
In the end, I intend to recommend (with no hubris intended and well aware of the possibility of hamartia) my own generalized version of the intelligence process, one that I think is more appropriate for the intelligence tasks of the 21st century and that works, in both theory and practice, across all three major sub-disciplines of intelligence -- national security, business and law enforcement.
Next: The Disconnect Between Theory And Practice
Posted by Kristan J. Wheaton at 11:37 AM 14 comments
Labels: intelligence, intelligence cycle, original research
Thursday, May 19, 2011
Why Good Data Isn't Enough (British Medical Journal And The University Of Michigan)
You are briefing the boss today and you are pretty excited. You were tasked to take a hard look at two different ways of doing the same thing -- the "old way" and the "new way". The old way was OK but your research clearly shows that the new way is much better.
You stand up in front of the boss. You know you are speaking a little quickly (you may not even be pausing all that much) and your voice is probably a little higher than it usually is -- but none of that matters. Your data is rock solid.
In fact, you have even put your great data into a pie graph that clearly demonstrates the validity of your position. This is your ace in the hole because you know the boss loves pie graphs.
All of this explains why you are stunned when the boss decides to continue to do things the old way.
Two interesting studies, one quite old and one brand new, explain why what you said mattered far less than how you said it.
[Figure: the four data display formats tested in the study -- see http://www.bmj.com/content/318/7197/1527.full]
The first study, from 1999, "Influence of data display formats on physician investigators' decisions to stop clinical trials: prospective trial with repeated measures," from the British Medical Journal (hat tip to social network analysis expert Valdis Krebs and his prolific Twittering), asked a number of physicians to look at exactly the same data using one of four different visualization techniques -- bar graph, pie graph, chart or "icons". You can see the four different formats in the figure above. Note: The test subjects only saw one of these, not all four together at once.
Now, I admit, these charts are a little dense at first. Basically, you have two different groups: those who started the study with a good prognosis and those who started it with a poor prognosis. Within each group, you also have those who received the old treatment and those who received the new treatment.
The question was: based on these results, do you continue this study or not? The doctors involved were all research physicians, used to seeing this kind of data and making these kinds of decisions.
Despite the fact that the data was exactly the same in all four images, and despite the fact that it overwhelmingly supported the new treatment option, there was a statistically significant difference in the accuracy of the physicians' decisions based exclusively on how the data was presented.
The least accurate? Pie and bar graphs. Charts did OK, but the best option was the "icons".
This kind of iconic chart is probably new to many readers. It shows the impact of the treatments on every single patient in the study. While this kind of display yielded the most accurate results in the study, it was also the most disliked by the test subjects.
The overwhelming preference was for the chart, while a minority preferred the bar or pie graphs. Not only did none of the participants indicate that they preferred the icons, but a significant number of them expressed derision at the format in their after-action comments.
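For readers who want to see what an icon display looks like in practice, here is a minimal sketch in Python with matplotlib: one square per patient, colored by outcome. To be clear, this is my own illustration, not the BMJ authors' code, and the survival counts below are invented for the example.

    import matplotlib.pyplot as plt

    def icon_display(ax, survived, died, title, cols=10):
        # One square per patient: green for survived, red for died,
        # filled left-to-right, top-to-bottom like a roster.
        outcomes = ["green"] * survived + ["red"] * died
        for i, color in enumerate(outcomes):
            ax.scatter(i % cols, i // cols, marker="s", s=120, color=color)
        ax.set_title(title)
        ax.invert_yaxis()  # keep the first row at the top
        ax.axis("off")

    # Invented counts for illustration only -- not the study's data.
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    icon_display(axes[0], survived=34, died=16, title="Old treatment")
    icon_display(axes[1], survived=43, died=7, title="New treatment")
    plt.tight_layout()
    plt.show()

Even in this toy version, you can count individual outcomes directly instead of estimating proportions from a slice or a bar, which is presumably why the format produced more accurate decisions despite being disliked.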
This study reminds me of a series of studies conducted by Ulrich Hoffrage and Gerd Gigerenzer at the Max Planck Institute in Berlin, which demonstrated that expressing statistics using "natural frequencies" (e.g. 2 out of 20 instead of the more common 10%) leads to better understanding and better (i.e. more "Bayesian") reasoning. (Jen Lee, Hema Deshmukh and I were able to replicate these results using a typical analytic problem, so I believe this effect is important in the context of intelligence as well.)
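To make the natural-frequency effect concrete, here is a quick sketch of the same Bayesian calculation expressed both ways. The base rate, hit rate and false alarm rate are numbers I made up for illustration; they are not Hoffrage and Gigerenzer's.

    # Problem: a condition has a 1% base rate. A test catches 80% of the
    # people who have it but also flags 10% of the people who do not.
    # Given a positive result, how likely is it the person has the condition?

    # Probability format: the textbook Bayes' theorem calculation.
    prior, hit_rate, false_alarm = 0.01, 0.80, 0.10
    posterior = (prior * hit_rate) / (prior * hit_rate + (1 - prior) * false_alarm)
    print(f"Probability format: {posterior:.3f}")  # ~0.075

    # Natural frequency format: the same arithmetic with whole people.
    # Out of 1,000 people, 10 have the condition and 8 of them test positive;
    # of the 990 who do not, 99 test positive anyway.
    true_pos, false_pos = 8, 99
    print(f"Frequency format:   {true_pos / (true_pos + false_pos):.3f}")  # ~0.075

The answer is identical, but in the frequency version you can simply see 8 real cases among 107 positive results -- no formula required. That, in essence, is the effect Hoffrage and Gigerenzer measured.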
The second piece of research is from the University of Michigan's Institute for Social Research and is still in pre-publication review. In what appears to be a very cleverly designed study, researchers looked at 200 telephone interviewers (100 male and 100 female).
They found that interviewers who spoke moderately fast, with lower-pitched voices (if male) and with 4 to 5 natural pauses per minute, were the most effective at getting people to listen to them.
Combining the results of these studies, it is easy to imagine that the most powerful presentation would be one using icons combined with a proficient speaker. The opposite (as demonstrated in the story that started this post) could reasonably be expected to perform less well -- even if the information were exactly the same.
As I have said before, like it or not, it is not enough to have good info; you have to be able to communicate it effectively as well. The flip side of this coin is equally important for intelligence professionals -- we may well be hard-wired to be biased towards high-quality forms of communication, even if the quality of the content is second-rate.
Posted by Kristan J. Wheaton at 2:01 PM 2 comments
Labels: Business Intelligence, communication, content, deception, form, presentation