Saturday, May 15, 2010

Surreal Saturday: Flaming Pants Walking (YouTube)

I have no idea what this means, but it seems entirely appropriate for finals week...

Wednesday, May 12, 2010

A Brilliant Failure (Thesis Months)

(Note: Due to circumstances entirely within my control, I have been pretty lax about getting these theses -- particularly this one, which is very cool -- out the door in a timely manner. No worries, though. "Thesis Month" is now "Thesis Months").

Researchers rarely like to publish their failures. Want some proof? The next time you pick up a journal, check how many of the authors report experimental results that fail to confirm their hypotheses.

Sometimes, however, failures are so unexpected and so complete that they force you to re-think your fundamental understanding of a topic.

Think about it: It is not unreasonable to assume that a 50 lb cannonball and a 5 lb cannonball dropped from the Leaning Tower of Pisa will hit the earth at different times. For more than 1000 years, this Aristotelian view of the way the world worked dominated.

The first time someone tested this idea (and, apparently, it wasn't Galileo, though he typically gets the credit) and the objects hit the ground at the same time, people were forced to reconsider how gravity works.
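
The physics, for the record, bears this out: in free fall (ignoring air resistance), the time to reach the ground depends only on the drop height and gravity, not on mass. Taking the tower's height as roughly 56 m:

\[
t = \sqrt{\frac{2h}{g}} \approx \sqrt{\frac{2 \times 56\ \mathrm{m}}{9.8\ \mathrm{m/s^2}}} \approx 3.4\ \mathrm{s}
\]

The 50 lb ball and the 5 lb ball both take about 3.4 seconds.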

Shannon Ferrucci's thesis, "Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking", is precisely this type of brilliant failure.

Shannon starts with a constructivist vision of how the mind works. She suggests that when an intelligence analyst receives a requirement, it activates a mental model of what is known about the target and what the analyst needs to know in order to properly answer the question. Such a model obviously grows and changes as new information comes in and is never really complete, but it is equally obvious that such a model informs the analytic process.

For example, consider the question that was undoubtedly asked of a number of intel analysts last week: What is the likely outcome of the elections in the UK?

Now, imagine an analyst who is rather new to the problem. The model in that person's head might include a general notion of the parliamentary system in the UK, perhaps some information on the major parties, and little more. This analyst would (or should) know that he or she needs a better grasp of the issues, personalities and electoral system in the UK before hazarding anything more than a personal opinion.

Imagine a second, similar analyst, but one with a significantly different model with respect to a crucial aspect of the election (for example, the first analyst believes the election can end in a hung parliament while the second does not).

Shannon argues that making these models explicit, that is, getting them out of the analyst's head and onto paper, should improve intelligence analysis in a number of ways.

In the first place, making the models explicit highlights where different analysts disagree about how to think about a problem. At this early stage in the process, though, the disagreement simply becomes a collection requirement rather than the knock-down, drag-out fight it might evolve into in the later stages of a project.

Second, comparing these conceptual models allows all analysts to benefit from the good ideas and knowledge of others. I may be an expert on the parliamentary process and you may be an expert on the personalities prominent in the elections. Our joint mental model of the election should be more complete than anything either of us would produce on our own.

Third, making the model explicit should help analysts better assess the appropriate level of confidence to have in their analysis. If you thought you needed to know five things in order to make a good analysis, you know all five, and your sources are reliable, you should arguably be more confident in your analysis than if you knew only two of those things and the sources were poor. Making the model explicit and updating it throughout the analytic process should allow exactly this sort of assessment.
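
To make that intuition concrete, here is a deliberately naive sketch -- my own illustration, not anything from the thesis -- that scores confidence as coverage of the model's information requirements, weighted by average source reliability. The requirement names and reliability numbers are invented:

```python
# Purely illustrative: a naive confidence score, NOT a method from the
# thesis. Each requirement maps to a source reliability in [0, 1], or
# None if the analyst does not yet know it.

def confidence(requirements):
    known = [r for r in requirements.values() if r is not None]
    if not requirements:
        return 0.0
    coverage = len(known) / len(requirements)          # how much of the model is filled in
    reliability = sum(known) / len(known) if known else 0.0
    return coverage * reliability

# All five requirements known, from reliable sources:
print(confidence({
    "UK electoral system":       0.90,
    "major party platforms":     0.85,
    "key personalities":         0.80,
    "recent polling":            0.90,
    "hung parliament scenarios": 0.85,
}))  # ~0.86

# Only two requirements known, from poor sources:
print(confidence({
    "UK electoral system":       0.30,
    "major party platforms":     0.40,
    "key personalities":         None,
    "recent polling":            None,
    "hung parliament scenarios": None,
}))  # ~0.14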

Finally, after the fact, these explicit models provide a unique sort of audit trail. Examining how the analysts on a project thought about the requirement may go a long way towards identifying the root causes of intelligence success or failure.

Of course, the ultimate test of an improvement to the analytic process is forecasting accuracy. While determining accuracy is fraught with difficulty, if this approach doesn't actually make the analyst's forecasts more accurate, then conducting these explicit modeling exercises might not be worth the time or resources.

So, it is a question worth asking: Does making the mental model explicit improve forecasting accuracy or not? Shannon clearly expected that it would.

She designed a clever experiment that asked a control group to forecast the winner of the elections in Zambia in October 2008. The experimental group, however, first went through an exercise that required the students to create, at both the individual and group levels, robust concept maps of the issue. Because she was crunched for time, the exercise focused primarily on capturing as many good ideas, and as many relationships between them, as possible in the conceptual models the students designed (Remember this -- it turns out to be important).
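
For a sense of what "capturing ideas and the relationships between them" means here, the sketch below renders a toy concept map as a set of labeled links in Python. The specific nodes and relationships are my own illustration of the Zambia problem, not taken from Shannon's thesis:

```python
# Illustrative only: a concept map as (source, relationship, target)
# triples. These nodes and links are invented for the example; they are
# not taken from the thesis.

concept_map = {
    ("Zambia 2008 election", "triggered by", "death of President Mwanawasa"),
    ("Zambia 2008 election", "contested by", "Rupiah Banda (MMD)"),
    ("Zambia 2008 election", "contested by", "Michael Sata (PF)"),
    ("Rupiah Banda (MMD)", "draws support from", "rural provinces"),
    ("Michael Sata (PF)", "draws support from", "urban centers"),
}

def links_from(concept):
    """Every (relationship, target) pair attached to a concept."""
    return [(rel, dst) for src, rel, dst in concept_map if src == concept]

for rel, dst in links_from("Zambia 2008 election"):
    print(f"Zambia 2008 election --{rel}--> {dst}")
```

One nice property of this representation: comparing two analysts' maps reduces to comparing sets of triples, and the links present in one map but absent from the other are exactly the disagreements and gaps described above.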

Her results? Not what she expected...


In case you are missing it: the group that explicitly modeled the problem did statistically significantly worse -- way worse -- than the group that did not.
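
(For readers who want to see what "statistically significant" means in a comparison like this, the snippet below runs Fisher's exact test on invented counts. The numbers are placeholders, not Shannon's data, and her thesis may well have used a different test.)

```python
# Invented counts for illustration -- NOT Shannon's actual data, and the
# thesis may have used a different test entirely.
from scipy.stats import fisher_exact

#               [correct, incorrect] forecasts of the Zambian election
control      = [14, 6]   # no explicit modeling exercise
experimental = [5, 15]   # explicit modeling exercise

odds_ratio, p_value = fisher_exact([control, experimental])
print(f"p = {p_value:.4f}")  # p < 0.05 -> the gap is unlikely to be chance
```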

It took several weeks of picking through her results and examining her experimental design before she came up with an extremely important conclusion: Convergent thinking is as important as divergent thinking in intelligence analysis.

If that doesn't seem that dramatic to you, think about it for a minute. When was the last time you attended a "critical thinking" course which spent as much time on convergent methods as divergent ones? How many times have you heard that, in order to fix intelligence, "We need to connect more dots" or "We have to think outside the box" -- i.e. we need more divergent thinking? Off the top of your head, how many convergent thinking techniques can you even name?

Shannon's experiment, due to her time restrictions, focused almost exclusively on divergent thinking, but, as she wrote in her conclusion, "The generation of a multitude of ideas seemed to do little more than confuse and overwhelm experimental group participants."

Once she knew what to look for, additional supporting evidence was easy to find. Iyengar and Lepper's famous "jam experiment" and Tetlock's work refuting the value of scenario-generating exercises both track closely with Shannon's results. There have even been anecdotal references to this phenomenon within the intelligence literature.

But never has there been experimental evidence using a realistic intelligence problem to suggest that, as Shannon puts it, "Divergent thinking on its own appears to be a handicap, without some form of convergent thinking to counterbalance it."

Interesting reading; I recommend it.

Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking

