Thursday, November 4, 2010
Is The US's COIN Doctrine Fighting The "Last War"? (Original Research)
Posted by Kristan J. Wheaton at 11:20 AM | 2 comments
Labels: Afghanistan, COIN, counterinsurgency, Mercyhurst, original research, Somalia, thesis, United States, US Army Field Manual
Friday, August 13, 2010
Does Analysis Of Competing Hypotheses Really Work? (Thesis Months)
Forecasting Accuracy and Cognitive Bias in the Analysis of Competing Hypotheses
Posted by Kristan J. Wheaton at 9:00 AM | 2 comments
Labels: ACH, analysis of competing hypotheses, Business Intelligence, Experiment, intelligence analysis, Mercyhurst, thesis
Wednesday, August 4, 2010
Where Do Terrorists Want To Go To College? (Thesis Months)
How to choose?
Not every college or university has a nuclear engineering program and, to the best of my knowledge, none of them offer a course in "Nuclear Weapons Building 101". Even if you find the right program with the right teachers and labs, etc., you probably also want to make sure you pick a place where you will not excite the interests of the local security services too much.
While it may be easier to imagine a terrorist group recruiting (or hiring) someone who already has the requisite knowledge to build a WMD, the attacks on the World Trade Center prove that Al Qaeda, at least, had (and may still have) the patience to execute a long-term plan involving the education of one of their best and brightest.
Cyndi Lee, in her thesis, Deadly Education: Evaluating Which Universities Are Attractive To International Terrorists, explores this issue in some depth.
Specifically, Cyndi used German universities as the testbed for her case study. (The German university system provides most of the data, including indications of quality, that Cyndi needed to demonstrate her concept; the same approach could be applied to any country where the data is available, however.)
Her goal was, first, to eliminate any college or university that did not offer the possibility of getting the appropriate education and, second, to rank order the remaining universities according to their perceived attractiveness to the international terrorist (in terms of program quality and ability to remain anonymous).
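To make that two-step logic concrete, here is a minimal Python sketch of the filter-then-rank idea. The field names, weights, and sample data are all invented for illustration; they are not drawn from the thesis.

```python
# Hypothetical illustration of a filter-then-rank approach (not Cyndi's actual data).
universities = [
    {"name": "Uni A", "has_nuclear_program": True,  "program_quality": 0.9, "anonymity": 0.4},
    {"name": "Uni B", "has_nuclear_program": False, "program_quality": 0.7, "anonymity": 0.8},
    {"name": "Uni C", "has_nuclear_program": True,  "program_quality": 0.6, "anonymity": 0.9},
]

# Step 1: eliminate schools that cannot provide the relevant education.
candidates = [u for u in universities if u["has_nuclear_program"]]

# Step 2: rank the survivors by a weighted "attractiveness" score combining
# program quality and the ability to remain anonymous (weights are assumptions).
def attractiveness(u, w_quality=0.5, w_anonymity=0.5):
    return w_quality * u["program_quality"] + w_anonymity * u["anonymity"]

for u in sorted(candidates, key=attractiveness, reverse=True):
    print(f"{u['name']}: {attractiveness(u):.2f}")
```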
While Cyndi is careful to acknowledge the limitations of her study (not the least of which would be the unacceptably high false positive rate should her work be taken too literally), it does suggest that, in a resource-constrained environment, there may well be ways to reduce uncertainty about possible terrorist courses of action and, in turn, allocate what resources do exist more efficiently.
You can see the full text of the thesis below or download it directly from here.
Deadly Education -- Evaluating Which Universities Are Attractive To International Terrorists
Posted by Kristan J. Wheaton at 1:27 PM | 2 comments
Labels: Germany, intelligence, intelligence analysis, September 11 2001, September 11 attacks, Terrorism, thesis, World Trade Center
Monday, August 2, 2010
Multi-Criteria Intelligence Matrices: A Promising New Method (Thesis Months)
"Although the experimental group indicated a lower level of knowledge in regards to the topic (Russia’s relationship to OPEC) and expressed a lower level of interest with the topic, both of which were found to be statistically significant, the experimental group was able to arrive at a broader range of possible Courses Of Action (COAs)."
"The average completion time for the control group was 70 minutes and the average time for the experimental group was 58.6 minutes. Therefore, when looking at the big picture, although the experimental group seemed less knowledgeable and less interested, they were able to arrive at a more complete list of relevant possible COAs, and they completed their analysis in less time."
"While a few students in the control group provided one or two alternative COAs, the majority of the student-analysts merely provided one COA with few comparisons to any alternatives, thus not providing any insight to whether or not alternative solutions were considered. In the experimental group, the student-analysts, who used MCIM, provided a list of all possible COAs, and identified the importance of specific criterion or various factors to those COAs."
In the end, the study suggests the method has promise and, with Lindsey's results in hand, it has more evidence to back it than many other, more widely taught, methods. I have embedded the full text below or you can download it here.
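For readers unfamiliar with the method, a multi-criteria matrix is, at its core, a weighted scoring table: possible COAs on one axis, criteria on the other, Consumer Reports-style. Here is a minimal Python sketch of that core idea; the criteria, weights, and scores are invented placeholders, not Lindsey's actual instrument.

```python
# Hypothetical multi-criteria matrix for the Russia/OPEC question (invented values).
criteria_weights = {"economic benefit": 0.4, "political cost": 0.3, "feasibility": 0.3}

# Scores (0-1) for each COA against each criterion.
coas = {
    "Join OPEC":             {"economic benefit": 0.8, "political cost": 0.3, "feasibility": 0.4},
    "Informal cooperation":  {"economic benefit": 0.6, "political cost": 0.7, "feasibility": 0.9},
    "Maintain independence": {"economic benefit": 0.4, "political cost": 0.8, "feasibility": 0.8},
}

# Each COA's overall score is the weighted sum of its criterion scores.
def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

for coa, scores in sorted(coas.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{coa}: {weighted_score(scores):.2f}")
```

Part of the method's appeal is visible even in this toy version: the analyst is forced to enumerate every COA and to state explicitly which criteria matter and how much, rather than anchoring on a single favored outcome.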
Related Posts:
Top 5 Intelligence Analysis Methods
The Effectiveness of Multi-Criteria Intelligence Matrices In Intelligence Analysis
Posted by Kristan J. Wheaton at 3:06 PM | 0 comments
Labels: Consumer Reports, Experiment, intelligence, intelligence analysis, multi-criteria intelligence matrix, Scientific control, Social Sciences, thesis
Wednesday, May 12, 2010
A Brilliant Failure (Thesis Months)
Researchers rarely like to publish their failures. Want some proof? Next time you pick up a journal, check to see how many of the authors are reporting experimental results that do not tend to confirm their hypotheses.
Sometimes, however, failures are so unexpected and so complete that they force you to re-think your fundamental understanding of a topic.
Think about it: It is not unreasonable to assume that a 50 lb cannonball and a 5 lb cannonball dropped from the Leaning Tower of Pisa will hit the earth at different times. For more than 1,000 years, this Aristotelian view of the way the world worked dominated.
The first time someone tested this idea (and, apparently, it wasn't Galileo, though he typically gets the credit) and the objects hit the ground at the same time, people were forced to reconsider how gravity works.
Shannon Ferrucci's thesis, "Explicit Conceptual Models: Synthesizing Divergent And Convergent Thinking", is precisely this type of brilliant failure.
Shannon starts with a constructivist vision of how the mind works. She suggests that when an intelligence analyst receives a requirement, it activates a mental model of what is known about the target and what the analyst needs to know in order to properly answer the question. Such a model obviously grows and changes as new information comes in and is never really complete, but it is equally obvious that such a model informs the analytic process.
For example, consider the question that was undoubtedly asked of a number of intel analysts last week: What is the likely outcome of the elections in the UK?
Now, imagine an analyst that was rather new to the problem. The model in that person's head might have included a general notion about the parliamentary system in the UK, some information on the major parties, perhaps, and little more. This analyst would (or should) know that he or she needs to have a better grasp of the issues, personalities and electoral system in the UK before hazarding anything more than a personal opinion.
Imagine a second, similar, analyst but imagine that person with a significantly different model with respect to a crucial aspect of the election (For example, the first analyst believes that the elections can end in a hung parliament and the second analyst does not believe this to be the case).
Shannon argues that making these models explicit, that is getting them out of the analyst's head and onto paper, should improve intelligence analysis in a number of ways.
In the first place, making the models explicit highlights where different analysts disagree about how to think about a problem. At this early stage in the process, though, the disagreement simply becomes a collection requirement rather than the knock-down, drag-out fight it might evolve into in the later stages of a project.
Second, comparing these conceptual models among analysts allows all analysts to benefit from the good ideas and knowledge of others. I may be an expert in the parliamentary process and you may be an expert in the personalities prominent in the elections. Our joint mental model of the election should be more complete than either of us would produce on our own.
Third, making the model explicit should help analysts better assess the appropriate level of confidence they should have in their analysis. If you thought you needed to know five things in order to make a good analysis and you know all five and your sources are reliable, etc., you should arguably be more confident in your analysis than if you only knew two of those things and the sources were poor. Making the model explicit and updating it throughout the analytic process should allow this sort of assessment as well (a rough sketch of this idea follows the last point below).
Finally, after the fact, these explicit models provide a unique sort of audit trail. Examining how the analysts on a project thought about the requirement may go a long way towards identifying the root causes of intelligence success or failure.
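As promised, here is a minimal Python sketch of the confidence idea from the third point: if the explicit model lists what the analyst needs to know, then coverage of that list, weighted by source reliability, gives a rough confidence indicator. The requirement items and reliability numbers below are invented for illustration; they are not from Shannon's thesis.

```python
# Hypothetical explicit model for the UK election question (invented entries).
# Each requirement records whether it is known and how reliable the source is (0-1).
model = {
    "electoral system mechanics":      {"known": True,  "source_reliability": 0.9},
    "party platforms":                 {"known": True,  "source_reliability": 0.8},
    "key personalities":               {"known": False, "source_reliability": 0.0},
    "polling data":                    {"known": True,  "source_reliability": 0.6},
    "coalition/hung-parliament rules": {"known": False, "source_reliability": 0.0},
}

# Crude confidence score: average reliability across ALL requirements,
# so unknown items drag the score down toward zero.
covered = [v["source_reliability"] for v in model.values() if v["known"]]
confidence = sum(covered) / len(model)
print(f"Rough confidence: {confidence:.0%} ({len(covered)}/{len(model)} items known)")
```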
Of course, the ultimate test of an improvement to the analytic process is forecasting accuracy. While determining accuracy is fraught with difficulty, if this approach doesn't actually improve the analyst's ability to forecast more accurately, conducting these explicit modeling exercises might not be worth the time or resources.
So, it is a question worth asking: Does making the mental model explicit improve forecasting accuracy or not? Shannon clearly expected that it would.
She designed a clever experiment that asked a control group to forecast the winner of the elections in Zambia in October 2008. With the experimental group, however, she took them through an exercise that required students to create, at both the individual and group levels, robust concept maps of the issue. Crunched for time, her experiment focused primarily on capturing as many good ideas and the relationships between them as possible in the conceptual models the students designed (Remember this -- it turns out to be important).
Her results? Not what she expected...
[Chart from the thesis: forecasting accuracy of the experimental vs. control groups]
In case you are missing it, the guys who explicitly modeled their problem did statistically significantly worse -- way worse -- than those that did not.
It took several weeks of picking through her results and examining her experimental design before she came up with an extremely important conclusion: Convergent thinking is as important as divergent thinking in intelligence analysis.
If that doesn't seem that dramatic to you, think about it for a minute. When was the last time you attended a "critical thinking" course which spent as much time on convergent methods as divergent ones? How many times have you heard that, in order to fix intelligence, "We need to connect more dots" or "We have to think outside the box" -- i.e. we need more divergent thinking? Off the top of your head, how many convergent thinking techniques can you even name?
Shannon's experiment, due to her time restrictions, focused almost exclusively on divergent thinking but, as Shannon wrote in her conclusion, "The generation of a multitude of ideas seemed to do little more than confuse and overwhelm experimental group participants."
Once she knew what to look for, additional supporting evidence was easy to find. Iyengar and Lepper's famous "jam experiment" and Tetlock's work refuting the value of scenario generating exercises both track closely to Shannon's results. There have even been anecdotal references to this phenomenon within the intelligence literature.
But never has there been experimental evidence using a realistic intelligence problem to suggest that, as Shannon puts it, "Divergent thinking on its own appears to be a handicap, without some form of convergent thinking to counterbalance it."
Interesting reading; I recommend it.
Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking
Posted by Kristan J. Wheaton at 1:02 PM | 2 comments
Labels: Critical thinking, Experiment, intelligence, intelligence analysis, Mercyhurst, thesis
Monday, April 12, 2010
How Large Is The World's Intelligence Industry? Now We Know... (Thesis Month)
One of the questions that has really bothered me over the years concerns the size of the national security intelligence "industry" worldwide. When you add it all up, how much money do the states of the world spend on intelligence and how many people are involved in government intelligence work?
These questions are important. There is a popular impression in much of the world that "intelligence is everywhere", that it is both all powerful and omnipresent. Creating or encouraging this impression in dictatorial countries might even be part of the system of repression. Knowing the answers to these questions could help reformers more accurately assess their risks.
Even in democratic countries, however, it makes sense to understand the resource limits of the national security intelligence apparatus at the broadest possible level, where citizens' need to know how their money is being spent can be appropriately balanced against the legitimate operational concerns of working intelligence professionals.
From a more provincial standpoint, it also seems important for educational institutions to have some sort of a feel for the need for trained professionals in intelligence work if the university model is ever going to supplant government training as the primary way into the intelligence communities of the world.
The answer to this question, however, is obviously difficult to uncover. Most countries do not want to discuss how much they spend on intel each year. Oftentimes, it is even difficult to figure out which organizations within a country are actively engaged in intelligence work.
It is with great pleasure, then, that I announce the final results:
The national security intelligence industry accounts for about $106 billion a year and employs about a million people worldwide. These are the numbers generated by Chris Hippner in his interesting and exhaustive thesis titled, A Study Into The Size Of The World's Intelligence Industry.
While Chris has done a good (extraordinary, really) job of collecting as many facts and figures as he could regarding the intel budgets of every country on the planet, he had to rely on estimates for many of them.
These estimates are based on GDP and on the spending patterns of countries where the data is available, a method which Chris readily admits is fraught with some difficulty (I note with some interest, though, that Chris has posted a note to his online thesis encouraging people to send him more accurate figures. It will be interesting to see how many people take him up on the offer...).
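To illustrate the kind of extrapolation involved, here is a minimal Python sketch of a GDP-ratio estimate: take countries where intel spending is roughly known, compute spending as a share of GDP, and apply the average ratio to a country with no published figures. All numbers below are invented placeholders, not Chris's data.

```python
# Hypothetical known cases: GDP and intel spending in $ billions (invented).
known = {
    "Country A": {"gdp_usd_bn": 2000, "intel_spend_bn": 8.0},
    "Country B": {"gdp_usd_bn": 500,  "intel_spend_bn": 1.5},
}

# Average intel-spending-to-GDP ratio across the known cases.
avg_ratio = sum(c["intel_spend_bn"] / c["gdp_usd_bn"] for c in known.values()) / len(known)

# Extrapolate to a country that publishes no figures.
unknown_gdp_bn = 750
estimate_bn = avg_ratio * unknown_gdp_bn
print(f"Estimated intel budget: ${estimate_bn:.2f}bn (ratio {avg_ratio:.2%} of GDP)")
```

The obvious weakness, which Chris acknowledges, is that spending patterns from data-rich countries may not transfer to data-poor ones; hence his standing invitation for corrections.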
I am also sure that Chris has missed some organizations. It is virtually certain that there are organizations out there, well known to people living in a particular country to be wholly controlled by that country's intelligence apparatus, for which Chris has not accounted. Such errors are essentially unavoidable given the global scope of his thesis work.
Likewise, Chris simply did not have time to examine the growing presence of intelligence units in either law enforcement or business (my own guess is that this would approximately double the total value of the industry).
All that said, this thesis does exactly what needed to be done -- give us all a starting point for further research and refinements.
A full copy of the thesis is located below or you can go to Chris's site on Scribd.com for other viewing and download options.
A Study Into the Size of the World's Intelligence Industry
Posted by Kristan J. Wheaton at 12:18 PM | 0 comments
Labels: Government, Gross domestic product, intelligence, Intelligence agency, Intelligence Community, national security, size, thesis
Thursday, April 16, 2009
Lessons In HUMINT Reliability For Intelligence Professionals (Thesis)
It is that special time of year again, when theses begin to bloom! That's right, as usual, our grad students here at Mercyhurst are busily writing and defending a whole new crop of interesting theses on a variety of topics relevant to intelligence and intelligence analysis.
The first up this season is one by George P. (Pat) Noble titled, Diagnosing Distortion In Source Reporting: Lessons For HUMINT Reliability From Other Fields. Pat states, in his abstract, that his intent is to explore "how source reporting can be distorted at each stage of the human intelligence (HUMINT) process within the United States Intelligence Community (USIC) and how that distortion may impact perceptions of source reliability."
Pat takes a hard look at not only the distortions but also how practitioners in fields as diverse as journalism and medicine, jurisprudence and anthropology, correct for these issues. He pushes his analysis one step further, though, and seeks to generalize these lessons learned for the intelligence collector, analyst, editor and consumer.
Pat is an intelligence analyst for the FBI with a lengthy resume and quite a bit of experience in a variety of fields. He came to us from the FBI on a sabbatical a few years ago to get his Masters and is now back with the FBI. He says he can be found on Intellipedia and A-Space for anyone who is interested.
Posted by Kristan J. Wheaton at 12:40 PM | 0 comments
Labels: intelligence, intelligence analysis, Pat Noble, Resource, source reliability, thesis
Thursday, October 2, 2008
Fast And Frugal Conflict Early Warning In Sub-Saharan Africa (Original Research)
With this as background, one of Mercyhurst's graduate students, Bradley Perry, recently took a stab at trying to come up with such a system in his thesis, "Fast and Frugal Conflict Early Warning in Sub-Saharan Africa: The Role of Intelligence Analysis."
Bradley was in a unique position to write this thesis. In the first place, he came to us a bit later in life than most grad students, having spent a number of years in Ghana. In addition, while technically enrolled here, he completed his strategic intelligence project (on local reactions to a planned expansion of a national park) from a tent in Malawi. He was in Kenya at about the same time as the recent upheavals there and is now working for iJet (where he was a member of the iJet team that recently took a share of the prize at the ODNI's Open Source Conference).
Riffing on Gerd Gigerenzer's research (outlined in his fascinating book, Gut Feelings) regarding "fast and frugal" evaluative systems, Bradley went looking for "good enough" indicators of potential conflict that he could chain together to form a predictive model. He found three promising indicators in the literature (political freedom, ethnic homogeneity, and income inequality) and proceeded to build exactly what he wanted to build -- a fast and frugal model for conflict prediction for Sub-Saharan Africa.
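For the curious, here is a minimal Python sketch of what a Gigerenzer-style fast and frugal tree chaining those three indicators might look like. The thresholds, cue order, and scaling are invented for illustration; the thesis defines its own cut-offs.

```python
# Hypothetical fast-and-frugal tree over Bradley's three indicators
# (thresholds and ordering are assumptions, not the thesis's values).
def conflict_warning(political_freedom, ethnic_homogeneity, income_inequality):
    """Each cue gets one look, in order; the first 'bad' cue exits early."""
    if political_freedom < 0.3:    # heavily repressive state: flag immediately
        return "warn"
    if ethnic_homogeneity < 0.4:   # highly fractured society: flag
        return "warn"
    if income_inequality > 0.6:    # severe inequality (e.g., a Gini-like index): flag
        return "warn"
    return "no warning"

print(conflict_warning(political_freedom=0.2, ethnic_homogeneity=0.7, income_inequality=0.4))
# -> "warn" (exits on the first cue without consulting the others)
```

The appeal of this structure is exactly what the name promises: it is fast (one pass through at most three cues) and frugal (no data-hungry statistical model), which matters in a region where reliable data is scarce.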
He tested the model on previous conflicts and got reasonably good results. The model tended to overpredict conflict in some cases but never failed to predict conflict where one ultimately occurred. He was also able to rank order the potential conflicts in Sub-Saharan Africa by likelihood. Using recent data and plugging it into his model, he believes that, from most likely to least likely, Swaziland, Somalia, Equatorial Guinea, Angola, Congo (Brazzaville), Congo (Kinshasa), Zimbabwe, Rwanda, Cameroon, Cote D'Ivoire, Eritrea, Chad, Guinea and Sudan will see violent conflict (See the map below from the thesis).
[Map from the thesis: predicted likelihood of conflict across Sub-Saharan Africa]
Beyond the model and the predictions it makes, the literature review on other early warning systems concerning Africa and on the validity of various indicators that predict conflict is definitely worth the read. It is an excellent work and, if interested, you can download the entire thesis here.
Related Posts:
Non-State Actors In Sub-Saharan Africa
Security Sector Reform In Sub-Saharan Africa
Posted by Kristan J. Wheaton at 9:54 AM | 2 comments
Labels: Africa, AFRICOM, Bradley Perry, ISN, Resource, Sub-Saharan Africa, thesis, warning