Saturday, May 15, 2010

Surreal Saturday: Flaming Pants Walking (YouTube)

I have no idea what this means but it seems entirely appropriate for finals week...

Wednesday, May 12, 2010

A Brilliant Failure (Thesis Months)

(Note: Due to circumstances entirely within my control, I have been pretty lax about getting these theses -- particularly this one, which is very cool -- out the door in a timely manner. No worries, though. "Thesis Month" is now "Thesis Months").

Researchers rarely like to publish their failures. Want some proof? The next time you pick up a journal, check how many of the authors report experimental results that fail to confirm their hypotheses.

Sometimes, however, failures are so unexpected and so complete that they force you to re-think your fundamental understanding of a topic.

Think about it: It is not unreasonable to assume that a 50 lb cannonball and a 5 lb cannonball dropped from the Leaning Tower of Pisa will hit the ground at different times. For more than 1,000 years, this Aristotelian view of how the world worked dominated.

The first time someone tested this idea (and, apparently, it wasn't Galileo, though he typically gets the credit) and the objects hit the ground at the same time, people were forced to reconsider how gravity works.

Shannon Ferrucci's thesis, "Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking", is precisely this type of brilliant failure.

Shannon starts with a constructivist vision of how the mind works. She suggests that when an intelligence analyst receives a requirement, it activates a mental model of what is known about the target and what the analyst needs to know in order to properly answer the question. Such a model obviously grows and changes as new information comes in and is never really complete, but it is equally obvious that it informs the analytic process.

For example, consider the question that was undoubtedly asked of a number of intel analysts last week: What is the likely outcome of the elections in the UK?

Now, imagine an analyst who is rather new to the problem. The model in that person's head might include a general notion of the parliamentary system in the UK, some information on the major parties, perhaps, and little more. This analyst would (or should) know that he or she needs a better grasp of the issues, personalities and electoral system in the UK before hazarding anything more than a personal opinion.

Now imagine a second, similar analyst, but one whose model differs significantly on a crucial aspect of the election (for example, the first analyst believes the elections can end in a hung parliament while the second does not).

Shannon argues that making these models explicit, that is, getting them out of the analyst's head and onto paper, should improve intelligence analysis in a number of ways.

In the first place, making the models explicit highlights where different analysts disagree about how to think about a problem. At this early stage in the process, though, the disagreement simply becomes a collection requirement rather than the knock-down, drag-out fight it might evolve into in the later stages of a project.

Second, comparing these conceptual models among analysts allows all analysts to benefit from the good ideas and knowledge of others. I may be an expert in the parliamentary process and you may be an expert in the personalities prominent in the elections. Our joint mental model of the election should be more complete than either of us would produce on our own.
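For a sense of what comparing explicit models can look like in practice, here is a minimal sketch in Python (an illustration of the general idea, not a technique taken from the thesis): each analyst's conceptual model is written down as a set of subject-relation-object propositions, and a simple set comparison separates the points of agreement from the disagreements that become collection requirements. Every proposition below is invented.

    # Each analyst's explicit model as a set of (subject, relation, object)
    # propositions.  These are invented examples, not from the thesis.
    analyst_a = {
        ("UK general election", "can result in", "hung parliament"),
        ("Liberal Democrats", "could hold balance of power in", "hung parliament"),
        ("first-past-the-post voting", "shapes seat counts in", "UK general election"),
    }

    analyst_b = {
        ("UK general election", "will produce", "single-party majority"),
        ("first-past-the-post voting", "shapes seat counts in", "UK general election"),
    }

    agreed = analyst_a & analyst_b      # where the models already overlap
    disputed = analyst_a ^ analyst_b    # disagreements become collection requirements

    print("Agreed propositions:")
    for subject, relation, obj in sorted(agreed):
        print(f"  {subject} [{relation}] {obj}")

    print("Open questions / collection requirements:")
    for subject, relation, obj in sorted(disputed):
        print(f"  {subject} [{relation}] {obj}")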

Third, making the model explicit should help analysts better assess the appropriate level of confidence to place in their analysis. If you thought you needed to know five things in order to make a good analysis, and you know all five and your sources are reliable, and so on, you should arguably be more confident in your analysis than if you knew only two of those things and the sources were poor. Making the model explicit and updating it throughout the analytic process should allow this sort of assessment as well.

Finally, after the fact, these explicit models provide a unique sort of audit trail. Examining how the analysts on a project thought about the requirement may go a long way towards identifying the root causes of intelligence success or failure.

Of course, the ultimate test of an improvement to the analytic process is forecasting accuracy. While determining accuracy is fraught with difficulty, if this approach does not actually help analysts forecast more accurately, conducting these explicit modeling exercises might not be worth the time or resources.

So, it is a question worth asking: Does making the mental model explicit improve forecasting accuracy or not? Shannon clearly expected that it would.

She designed a clever experiment that asked a control group to forecast the winner of the October 2008 elections in Zambia. The experimental group, however, was first taken through an exercise that required the students to create, at both the individual and group levels, robust concept maps of the issue. Pressed for time, she focused the exercise primarily on capturing as many good ideas, and the relationships between them, as possible in the conceptual models the students designed (remember this; it turns out to be important).

Her results? Not what she expected...


In case you are missing it, the guys who explicitly modeled their problem did statistically significantly worse -- way worse -- than those who did not.
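For readers who want to see what "statistically significantly worse" means mechanically, here is a minimal sketch in Python of the kind of two-proportion test one could run on results like these. The counts are hypothetical, invented purely for illustration; they are not Shannon's actual data.

    import math

    # Hypothetical counts, for illustration only (NOT Shannon's actual data):
    # each participant either forecast the Zambian election correctly or not.
    control_correct, control_n = 14, 20            # no explicit modeling exercise
    experimental_correct, experimental_n = 6, 20   # explicit concept-mapping group

    def two_proportion_z(x1, n1, x2, n2):
        """Two-proportion z-test: did the two groups forecast at different rates?"""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    z, p = two_proportion_z(control_correct, control_n,
                            experimental_correct, experimental_n)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the gap is unlikely to be chance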

It took several weeks of picking through her results and examining her experimental design before she came up with an extremely important conclusion: Convergent thinking is as important as divergent thinking in intelligence analysis.

If that doesn't seem that dramatic to you, think about it for a minute. When was the last time you attended a "critical thinking" course which spent as much time on convergent methods as divergent ones? How many times have you heard that, in order to fix intelligence, "We need to connect more dots" or "We have to think outside the box" -- i.e. we need more divergent thinking? Off the top of your head, how many convergent thinking techniques can you even name?

Shannon's experiment, due to her time restrictions, focused almost exclusively on divergent thinking, but, as she wrote in her conclusion, "The generation of a multitude of ideas seemed to do little more than confuse and overwhelm experimental group participants."

Once she knew what to look for, additional supporting evidence was easy to find. Iyengar and Lepper's famous "jam experiment" and Tetlock's work refuting the value of scenario-generating exercises both track closely with Shannon's results. There have even been anecdotal references to this phenomenon within the intelligence literature.

But never before has there been experimental evidence using a realistic intelligence problem to suggest that, as Shannon puts it, "Divergent thinking on its own appears to be a handicap, without some form of convergent thinking to counterbalance it."

Interesting reading; I recommend it.

Explicit Conceptual Models: Synthesizing Divergent and Convergent Thinking



Saturday, May 1, 2010

Surreal Saturday: Cupcake Cannon (YouTube)

OK. Cupcakes? Check. Cannon that shoots cupcakes with 125 psi pressure? Check. Camera that shoots slow-mo at 700 FPS? Check. Willing idiots? Check. Ready... (via Gizmodo)

Friday, April 23, 2010

Drive-by Reviews Of Analytic Methods (ADVAT.blogspot.com)

Everyone has heard of a drive-by shooting but what about a "drive-by review"?

I am teaching a graduate seminar in Advanced Analytic Techniques this term. The core of the course is a series of student projects that hyperfocus on the application of a particular analytic technique (such as patent analysis or social network analysis) to a discrete topic (such as the political situation in Turkey or the future of oil and gas exploration in the Caspian Sea). The best of these projects wind up in The Analyst's Cookbook.

Each week, however, in addition to diving deep into these individual techniques and topics, we also work as a group to come to some conclusions about a number of other techniques. In preparation, each of the students selects, reads and summarizes a number of articles on whichever technique is under the microscope for the week.

They then post these summaries and links to the full text of the articles on our Advanced Analytic Techniques blog. Each Thursday, we sit down and have a discussion about the readings. We also run a short exercise using the technique. From the combination of discussion and exercise, we try to answer four questions:

  • How do we define this technique?
  • What are the strengths and weaknesses of this technique?
  • How do you do this technique (Step by step)?
  • What was our experience like when we tried to apply this technique?
Once we think we have pretty good answers to these questions, we post what we have developed to the blog in order to capture our collective thinking on the technique in question.

Obviously, this is where the term "drive-by review" comes from. Such an exercise only serves to familiarize the students with the technique under consideration. The blog format, however, permits us to open this series of exercises up to practitioners, academics and intel studies students at other institutions for comment and additional insights -- which is what I am doing with this post.

This year, due to the very large size of the class, we are actually able to do a little comparative analysis. I have divided the class into two halves. We explore the techniques collectively, but each team comes to its own conclusions independently. It is sort of like getting a second opinion after a visit to the doctor.

Last week we took a look at Delphi and this week we are examining Roleplaying. Over the last couple of weeks we have looked at Best Practices, Red Teaming and Imagery Analysis.

Don't hesitate to jump in! We learn from your experience and expertise.

Monday, April 19, 2010

The Whole Of The Cyberthreat In A Single Tweet (Scribd.com)

According to ReadWriteWeb, Raffi Krikorian, a developer at Twitter, posted a complete version of a single "tweet", or 140-character Twitter message, on Scribd.com this weekend.

You can see the results for yourselves below:

[Embedded Scribd document: "map-of-a-tweet"]

In addition to the 140 (or fewer) characters in a tweet, this map shows all of the metadata thrown off by each and every post.
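To give a sense of what rides along with those 140 characters, here is an abbreviated, illustrative sketch written as a Python dictionary. The field names approximate the public Twitter API of the period, the values are invented, and the real map contains far more fields than are shown here.

    # Abbreviated, illustrative sketch of tweet metadata (values invented;
    # field names approximate the Twitter API circa 2010; the real "map of
    # a tweet" contains many more fields than this).
    tweet = {
        "id": 12345678901,
        "text": "Just 140 characters... but look at everything that rides along.",
        "created_at": "Mon Apr 19 14:02:11 +0000 2010",
        "source": "web",                 # client application used to post
        "in_reply_to_status_id": None,
        "geo": None,                     # optional latitude/longitude
        "user": {
            "id": 7553,
            "screen_name": "example_user",
            "created_at": "Tue May 23 06:01:13 +0000 2007",
            "location": "San Francisco",
            "time_zone": "Pacific Time (US & Canada)",
            "followers_count": 1029,
            "friends_count": 312,
            "statuses_count": 5403,
        },
    }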

Some of this stuff is harmless but it is surprising how little metadata it takes to uniquely identify a particular computer. Don't believe me? Check out Panopticlick. Based on their fairly clever method, it only takes about 33 bits of data to uniquely ID a computer.
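To see why roughly 33 bits is enough, here is a minimal back-of-the-envelope sketch in Python. The per-attribute values are made-up illustrations of the kind of surprisal a project like Panopticlick measures, not its actual figures; the point is simply that independent bits of identifying information add up quickly.

    import math

    # Rough, made-up surprisal values (in bits) for a few browser attributes.
    # Panopticlick measures the real values empirically; these are only
    # illustrative of how quickly independent bits add up.
    attribute_bits = {
        "user agent string": 10.0,
        "browser plugins":   15.0,
        "screen resolution":  4.8,
        "time zone":          3.0,
    }

    total_bits = sum(attribute_bits.values())   # independent bits simply add
    print(f"combined identifying information: about {total_bits:.0f} bits")

    # 33 bits distinguishes 2**33 (about 8.6 billion) machines, which is more
    # computers than exist on the planet.
    print(f"2**33 = {2**33:,} distinct fingerprints")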

Note that I said ID the computer, not the user behind it. Likewise, knowing which 33 bits of data one needs to hide or dirty up helps the bad guys hide themselves and makes it difficult, if not impossible, to determine attribution by technical means alone.

More importantly, it leaves the rest of us, who do not know how much personal and identifying data we are providing, at the mercy of those who do. "Those who do" doesn't just include criminals, either. It includes corporations and governments as well.

What to do about all of this is beyond me (though I think Jeff Carr at IntelFusion does some of the best thinking on the subject) but it is charts like this one that, for me, highlight the importance of this issue.