
Monday, January 27, 2014

Strawman - Or How I Read A Gamebook (That's Right - A Gamebook) And Became A Less Biased Analyst

http://www.amazon.com/Strawman-Kristan-J-Wheaton-ebook/dp/B00HG3XN6W

"A madman is on the loose! Can you play his mind games and win? Can you catch him before he strikes again?"

That is how my co-author, Mel Richey, and I introduce our new gamebook, Strawman (now on sale at Amazon).  It's part adventure novel, part game, and part teaching tool.  

Let me break that down for you a bit...

Inspired by our work on the Intelligence Advanced Research Projects Activity's (IARPA's) SIRIUS project (you know, the one where they are trying to build video games to teach analysts about cognitive biases?) and our colleagues on the Boeing team, Mel and I decided to try to do something similar.  

We didn't have the kind of money necessary to design a video game so we decided to write a "gamebook".  What's a gamebook, you ask?  Well, some people call it "interactive fiction" but most people remember it as the old "choose your own adventure" style book.  This format forces readers to make decisions and enjoy success or suffer the consequences as they move through the story.  Strawman, for example, has eight possible endings, depending on how well you do in each scenario.

Recently made popular again by the phenomenal success of Ryan North's To Be Or Not To Be gamebook, this format is also perfect for a guided learning experience. 

That's right!  Wrapped up in the middle of this adventure story filled with spies and terrorists and mad bombers are lessons about how cognitive biases affect our judgment and decision-making.  In fact, understanding these lessons is crucial to the reader's success. Each scenario hinges on the reader's ability to spot the bias and to take corrective action in order to successfully move the story forward.

Don't get me wrong - there is nothing artificial about our scenarios.  We built each of them around real world incidents where bias was the cause of intelligence failure or around experiments where bias was successfully elicited.  

Strawman only covers three of IARPA's "Big Six" cognitive biases:  Projection, Representativeness and Anchoring.  The other three biases will have to wait for volume 2...

In addition to teaching readers how to recognize these three biases "in the wild", however, Strawman also teaches a way to mitigate their effects.  Through a series of guided exercises, we try to teach readers to put themselves into someone else's shoes - to see the situation from the perspective of the other guy.  While this approach will not make up for facts that are missing from an analysis, we believe that it will help analysts weigh the evidence they do have more accurately.

We tried to write Strawman at an advanced high school or early college level but we have been pleasantly surprised at how many of our reviewers on both sides of 20 really enjoyed the book. Here are a few of their comments:
“Recognition of cognitive bias in one’s own thinking as well as in others is a key skill for effective analysis. New and imaginative methods for teaching this skill, such as Strawman, are badly needed.” -- Richards J. Heuer, Jr., Former CIA Analyst, author of The Psychology of Intelligence Analysis 
"What I like the most about Wheaton and Richey’s Strawman is that, even though it’s billed as a choose-your-own-adventure style gamebook, it actually feels a bit like a videogame, as it introduces readers (players?) to three psychological biases through a set of early missions that lead into the main story, like a good set of tutorial levels for a videogame. Also like a good videogame, Strawman’s story ingeniously provides an in-game piece of hardware to help scaffold player learning, helping readers see situations from different perspectives. This crutch is taken away for later missions, making for a nicely-designed difficulty curve. Meanwhile, readers are drawn into a compelling story arc that builds steam and brings it all together with a satisfying final mission that’s straight out of NCIS or 24." -- Mark Chen, Author of Leet Noobs: The Life And Death Of An Expert Player Group In World Of Warcraft 
“An innovative, engaging read. With its unusual format and accessible writing style it’s perfect for high school or college crowds all the way up to professionals in the field. If you’re interested in cognitive bias, Strawman will teach you how to identify and how to eliminate it.” --Josh Klein, hacker, author of Reputation Economics: Why Who You Know Is Worth More Than What You Have and host of NatGeo's The Link 
"Strawman is a must read for all entry level intelligence analysts, in any area - military, government or industry. Mel Richey and Kris Wheaton have produced a very interesting and eminently sensible approach to learning about the perils of cognitive biases and the adverse effects they can have on decision-making. Moreover, those who teach intelligence will find that Strawman helps them bring new and profitable excitement to any class." -- James S. Cox Ph.D. Brigadier-General (Ret'd), Vice President, Academic Programs, Canadian Military Intelligence Association
Strawman is currently available for download at Amazon's Kindle Store.  Don't have a Kindle?  No worries!  Amazon has free Kindle Reader Apps for almost every device, including PCs, smartphones and tablets.  

We hope you enjoy Strawman!

Wednesday, June 12, 2013

Teaching People To Overcome Biases With Games At Origins, Global Intelligence Forum

Inspired by the announcement of the Intelligence Advanced Research Projects Activity's SIRIUS Program a couple of years ago, I set out to design a tabletop (i.e. card) game that would help people learn more about cognitive biases and hopefully learn to limit the effects of some of the worst of them.

My first two attempts were ... OK ... but I couldn't quite get them to work.  Either they took too long to play or playtesting suggested that the learning effects were too small. 

One day, though, it hit me - a design that was manageable in terms of time and that had good evidence to suggest it would teach people not only how to identify bias situations in real life but also how to apply effective strategies for mitigating the effects of those biases!  In short, I had a good game with proven mechanics and a testable hypothesis -- I was off to the races!

This summer (finally), I am taking my best design, The Mind's Lie, on the road to actually test it.  First up is the Origins Game Fair this week in Columbus, Ohio.  I need participants to test the game and I figured where better to go than one of the world's largest tabletop game fairs?

We have a booth and will be recruiting potential participants for an experiment to see if the game actually works (we are also recruiting new students, so if you are in the Columbus area and are interested in learning more about our program for you or your son or daughter, do not hesitate to drop by). 

We will be playing the same game at the Global Intelligence Forum in Ireland in early July.  GIF is unquestionably my favorite conference (and not only because Mercyhurst sponsors it...). 

It is the only place I know where intel professionals from all over the world and from across all three major intelligence sub-disciplines - national security, law enforcement and business - meet to talk about how to improve the practice of intelligence.  It is exciting intellectually, in a beautiful town on the coast of Ireland, and is still small enough to actually get to know some people (some pretty interesting people, actually...) instead of just bumping into them.

This year, if The Mind's Lie works like I think it will, participants will also walk away with a better ability to evaluate evidence in an unbiased manner - worth the price of admission, I think!

If you are in the Columbus area this weekend, drop by.  We will be showcasing The Mind's Lie and all our other games for intelligence analysts in booth 745 in the exhibit hall.  If you haven't made plans to go to the Global Intelligence Forum, there is still time to register - hope to see you there!

Friday, October 22, 2010

"Effective Intelligence Analysis Is A Concept-driven Activity Rather Than A Data-driven One" (DTIC)

"The most telling result of the research is the clear implication that intelligence analysis is conceptually driven as opposed to data driven. What is critical is not just the data collected, but also what is added to those data in interpreting them via conceptual models in the analyst's store of knowledge."
In other words, how you think is more important than what you know.  This is one of the big take-aways from Philip Tetlock's wonderful Expert Political Judgment and you would be forgiven if you thought I was just touting his 2005 book again.

No, the quote above is from a 1979(!) INSCOM-sponsored study into cognitive processes in intelligence analysis (called, amazingly enough, Cognitive Processes In Intelligence Analysis:  A Descriptive Model And Review Of The Literature).  It, and its companion piece, Human Processes In Intelligence Analysis:  Phase 1 Overview, are available through DTIC or you can download them here and here from Scribd.  (I wish I could say that I found them on my own, but they come to me courtesy of Dalene Duvenage, who teaches intel analysis in South Africa, and the always useful IAFIE mailing list.)

While much of the content in these two papers is probably more easily accessed by way of Dick Heuer's Psychology of Intelligence Analysis, there are some new gems here. One of my favorites is the reason for the studies.  

According to Dr. Joseph Zeidner, Chief Psychologist for the US Army in 1979, and MG William Rolya, the Commander of INSCOM at that time, "Intelligence collection systems have proliferated over the past several years, increasing in complexity and in volume of output.  However, there has been no corresponding improvement in the ability of intelligence personnel to analyze this flood of data."  Sound familiar?

Another interesting tidbit comes from the Human Processes paper, which lays out the personality attributes of the ideal analyst.  These include:
  • Is a technologist
  • Is either a specialist or a generalist but not both
  • Is an "information entrepreneur"
  • Is comfortable with changing roles
  • Can communicate (oral and written)
  • Is a detective
  • Is imaginative
  • Is self-starting
  • Has a profession (Intelligence analysis)
These criteria seem to strike a chord as well.  All in all, both papers are worth a look, if only because they seem to prove that the more things change, the more they stay the same...

Thursday, May 27, 2010

The Effects Of Labels On Analysis (Thesis Months)

(Note: At the risk of making this an all-Jeff-Welgan blog, I thought this week I would cover Jeff's thesis work on the effects of labels on analysis right on the heels of last week's discussion of his work embedded in the new book, Hyperformance).

Does a name matter? Shakespeare says, "No, a rose by any other name would smell as sweet," but most psychologists would disagree. The well-known "framing effect" shows that the way a question is asked can determine how people will answer it. Likewise, psychological campaigns aimed at dehumanizing an enemy often accompany wars.

Jeff Welgan, in his thesis, The Effects Of Labels On Analysis, tests these ideas in the realm of intelligence analysis. Some of you may remember taking Jeff's survey last year. In it, he presented a fictitious scenario set in the Horn of Africa. Each participant was asked to read an identical report of an activity. The only thing that changed was the word used to describe the group conducting the activity. Specifically, Jeff tested the words "group", "insurgent", "rebel", "militia" and "terrorist". He hypothesized that the specific word used would affect the analytic conclusions that participants would draw.

Jeff did not aim his study at a random sample of the general population, however. He took pains to engage analysts in the national security realm, in law enforcement or in business. The results in the image to the right are self-reported (the inevitable cost of a web-based survey...) but he was fairly careful in his approach to getting participants. In all, some 233 of you participated in the experiment (Many thanks!).

Despite his hypothesis, it was unclear what he would actually find. These psychological biases are deep-seated and robust but, on the other hand, there is good research to suggest that credible evidence helps overcome framing issues, and intel analysts are typically trained to be on the lookout for sources of bias. As Jeff stated, "My thesis will examine to what extent the quality of analysis is at risk, if it is indeed at risk, as the differing connotations of these labels would suggest."

In the end, the labels wound up making little difference for trained intel analysts. As Jeff bluntly stated, "My hypothesis that these particular labels have significant meaning, and many individuals have a preconceived idea, or cognitive biases, regarding the kinds of actions each of these particular groups conduct must be rejected at this time due to an overall lack in statistical significance across the labels."

This is clearly good news for the intel community at large. It certainly suggests that at least some of the training to defeat at least some of the cognitive biases is working.
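
For readers curious about what such a test looks like in practice, here is a minimal sketch of the standard way to check whether a categorical factor (the label shown) is associated with a categorical outcome (the conclusion drawn) - a chi-square test of independence. This is an illustration only, not Jeff's actual analysis: the response categories and all of the counts below are invented.

    # Sketch: is the label a respondent saw associated with the threat level they assigned?
    # The five labels come from the thesis; the response categories and counts are
    # hypothetical, for illustration only.
    from scipy.stats import chi2_contingency

    labels = ["group", "insurgent", "rebel", "militia", "terrorist"]
    responses = ["low threat", "moderate threat", "high threat"]  # column order of counts below

    # Hypothetical counts of respondents in each (label, response) cell.
    counts = [
        [18, 20,  9],   # group
        [14, 21, 12],   # insurgent
        [15, 22, 10],   # rebel
        [16, 19, 11],   # militia
        [12, 20, 14],   # terrorist
    ]

    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")

    # A p-value above the conventional 0.05 threshold means we cannot reject the null
    # hypothesis that the label makes no difference - the pattern of result Jeff reports.

If the differences across labels were real and large, the p-value would be small; a non-significant result like Jeff's means the observed differences are consistent with chance.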

The full text of the thesis is below or can be downloaded from Scribd.com.

The Effect of Labels on Analysis


Monday, January 4, 2010

Heuer: How To Fix Intelligence Analysis With Structured Methods (NationalAcademies.org)

Richards Heuer (of Psychology Of Intelligence Analysis fame...) spoke last month at the National Academy of Sciences regarding his thoughts on how to improve intelligence analysis through the increased use of structured methods.

In the aftermath of the attempted bombing of Flight 253 on Christmas Day, it is worth reading Dick's words on how to improve the analytic side of the intelligence equation. I don't agree with everything he says (and say so in italicized parenthetical comments below) but he has been thinking clearly about these kinds of things for far longer than most of us. If you are concerned at all with reforming the way we do analysis, then this is a much better place to start than with all the noise being generated by the talking heads on TV.

I have embedded the whole document below this post or you can go to the National Academies site and listen to Dick's speech yourself. For those of you with too much to do and too little time, I have tried to pull out some of the highlights from Dick's paper below. I am not going to do it justice though, so, if you have the time, please read the entire document.

  • "If there is one thing you take away from my presentation to day, please let it be that structured analytic techniques are enablers of collaboration. They are the process by which effective collaboration occurs. Structured techniques and collaboration fit together like hand in glove, and they need to be promoted and developed together." (This is very consistent with what we see with our students here at Mercyhurst. Dick reports some anecdotal evidence to support his claim and it is exactly the same kinds of things we see with our young analysts).
  • "Unfortunately, the DNI leadership has not recognized this. For example, the DNI’s National Intelligence Strategy, Enterprise Objective 4 on improving integration and sharing, makes no mention of improving analytic methods."
  • "CIA, not the DNI, is the agency that has been pushing structured analysis. One important innovation at CIA is the development in various analytic offices of what are called tradecraft cells. These are small groups of analysts whose job it is to help other analysts decide which techniques are most appropriate, help guide the use of such techniques by inexperienced analysts, and often serve as facilitators of group processes. These tradecraft cells are a very helpful innovation that should spread to the other agencies." (Interesting. We called these "analytic coaches" and tried to get funding for them in our contract work for the government in 2005 -- and failed).
  • "I understand you are all concerned about evaluating whether these structured techniques actually work. So am I. I’d love to see our methods tested, especially the structured analytic techniques Randy and I have written about. The only testing the Intelligence Community has done is through the experience of using them, and I think we all agree that’s not adequate." (I suppose this is the comment that bothers me at the deepest level. It implies, to me, at least, that the IC doesn't know if any of its analytic methods work. What other 75 billion dollar a year enterprise can say that? What other 75 billion dollar a year enterprise wants to say that?)
  • "Some of you have emphasized the need to test the accuracy of these techniques. That would certainly be the ideal, but ideals are not always achievable." (Here I have to disagree with Dick. Philip Tetlock and Bruce Bueno De Mesquita have both made progress in this area and there is every reason to think that, with proper funding, such an effort would ultimately be successful. Tetlock recommended as much in a recent article. The key is to get started. The amount of money necessary to conduct this research is trivial compared to the amount spent on intel overall. Likewise, the payoff is enormous. As an investment it is a no-brainer, but until you try, you will not know)
  • "Unfortunately, there are major difficulties in testing structured techniques for accuracy, (for an outline of some of these, see my series of posts on evaluating intelligence) and the chances of such an approach having a significant favorable impact on how analysis is done are not very good. I see four reasons for this."
  • "1. Testing for accuracy is difficult because it assumes that the accuracy of intelligence judgments can be measured."
  • "2. There is a subset of analytic problems such as elections, when a definitive answer will be known in 6 or 12 months. Even in these cases there is a problem in measuring accuracy, because intelligence judgments are almost always probabilistic."
  • "3. A third reason why a major effort to evaluate the accuracy of structured analytic techniques may not be feasible stems from our experience that these techniques are most effective when used as part of a group process."
  • "4. If you are trying to change analysts’ behavior, which has to be the goal of such research, you are starting with at least one strike against you, as much of your target audience already has a firm opinion, based on their personal experience that they believe is more trustworthy than your research." (Here I have to disagree with Dick again. I think the goal of this research has to be to improve forecasting accuracy. If you can show analysts a method that has been demonstrated to improve forecasting accuracy -- to improve the analyst's "batting average" -- in real world conditions, I don't think you will have any problem changing their behavior.)
  • "As with the other examples, however, the Intel Community has no organizational unit that is funded and qualified to do that sort of testing."
  • "It (a referenced Wall St. Journal article) suggested that instead of estimating the likelihood that their plans will work, financial analysts should estimate the probability they might fail. That’s a good idea that could also be applied to intelligence analysis." (I am not sure why we can't do both. We currently teach at Mercyhurst that a "complete" estimate consists of both a statement of probability (i.e. the likelihood that X will or will not happen) and a statement of analytic confidence (i.e. how likely is that you, the analyst, are wrong in your estimate.)
  • "The kind of research I just talked about can and should be done in-house with the assistance of those who are directly responsible for implementing the findings." (I think that Dick is correct, that, at some point, it has to be done in-house. I do think, however, that the preliminary testing could be effectively done by colleges, universities and other research institutions. This has three big benefits. First, it means that many methods could be tested quickly and that only the most promising would move forward. Second, it would likely be less expensive to do the first stage testing in the open community than in the IC. Third, it allows the IC to extend its partnering and engagement activities with colleges, universities and research institutions.)
  • "Our forthcoming book has two major recommendations for DNI actions that we believe are needed to achieve the analytic transformation we would all like to see."
  • "1. The DNI needs to require that the National Intelligence Council set an example about the importance of analytic tradecraft. NIC projects are exactly the kind of projects for which structured techniques should always be used, and this is not happening now."
  • "2. The second recommendation is that the DNI should create what might be called a center for analytic tradecraft."


Complete text below:

The Evolution of Structured Analytic Techniques -- Richards Heuer -- 8 DEC 2009