Monday, April 21, 2014

How To Analyze Black Swans

Black swans, we are told, are both rare and dramatic.  They exist, but they are so uncommon that no one would predict that the next swan they see will be black.

The black swan has become a metaphor for the limits of the forecasting sciences.  At its best, it is a warning against overconfidence in intelligence analysis.  At its worst (and far too often it is at its worst), the black swan is an excuse for not having wrung every last bit of uncertainty out of an estimate before we make it.

One thing does seem clear, though:  We can have all the information and structured analytic techniques we want but we can’t do a damn thing in advance about true black swan events.  They are, by definition, unpredictable.

Or are they?

Imagine a single grain of sand falling on a table.

And then another.  And then another.  While it would take quite some time, eventually you would have… well… a pile of sand.

Now, imagine this pile of sand growing higher and higher as each single grain falls.  The grains balance precariously against each other, their uneven edges forming an unsteady network of weight and weaknesses, of strengths and stored energy, a network of near immeasurable complexity.  

Finally, the sandpile reaches a point where every time a single grain falls it triggers an avalanche.  The vast majority of times the avalanches are small, a few grains rolling down the side of the sandpile.  Occasionally, the avalanches are larger, a side of the pile collapsing, shearing off as if it had been cut away with a knife.

Every once in a while – every once in a long while – the falling of a single grain triggers a catastrophe and the entire pile collapses, spilling sand off the edge of the table and onto the floor.

The sandpile analogy is a classic in complexity science but I think it holds some deep lessons for intelligence analysts trying to understand black swan events.
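
The sandpile is not just a rhetorical device.  Complexity scientists usually formalize it as the Bak-Tang-Wiesenfeld model, and it takes only a few lines of code to run.  Below is a minimal Python sketch of that model under my own assumptions (the grid size and the number of grains dropped are arbitrary choices of mine; the rule that a cell topples at four grains is the standard one):

```python
import random

N = 25  # the "table" is an N x N grid of cells
pile = [[0] * N for _ in range(N)]

def drop_grain(pile):
    """Drop one grain on a random cell, relax the pile, and return the
    avalanche size (the total number of topplings that grain triggered)."""
    n = len(pile)
    r, c = random.randrange(n), random.randrange(n)
    pile[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        r, c = unstable.pop()
        while pile[r][c] >= 4:   # a cell holding four or more grains topples,
            pile[r][c] -= 4      # sending one grain to each neighbor; grains
            avalanche += 1       # pushed past the edge spill off the table
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < n and 0 <= nc < n:
                    pile[nr][nc] += 1
                    unstable.append((nr, nc))
    return avalanche

for _ in range(50_000):          # let the pile build toward the critical state
    drop_grain(pile)
sizes = [drop_grain(pile) for _ in range(50_000)]   # then record avalanches
```

Run it and you will see exactly the behavior described above: long stretches of tiny avalanches punctuated, every once in a long while, by an enormous one.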

Just as we cannot predict black swan events, we cannot predict which precise grain of sand will bring the whole sandpile down.

Yet, much of modern intelligence focuses almost exclusively on collecting and analyzing the grains of sand – the information stream that makes up all modern intelligence problems.  In essence, we spend millions, even billions, of dollars examining each grain, each piece of information, in detail, trying to figure out what it will likely do to the pile of sand, the crisis of the day.  We forecast modest changes, increased tensions, countless small avalanches and most of the time we are right (or right enough).

Yet we still missed the fall of the Soviet Union in 1991 and the Arab Spring of 2010 – each a collapse of the sandpile that began with a single grain.

What can we do?  It seems as though intelligence analysts are locked in an intractable cycle, constant victims of the black swan.

What we can do is to move our focus away from the incessant drumbeat of events as they happen (i.e. the grains of sand) and re-focus our attention on the thing we can assess:  The sandpile. 

It turns out that “understanding the sandpile” is something that complexity scientists have been doing for quite some time.  We know more about it than you might think and what is known has real consequences for intelligence.

[Figure: An example of a power law, or long-tail, distribution]
In the first place, for the sandpile to exhibit this bizarre behavior where a single grain of sand can cause it to collapse or a single small incident can trigger a crisis, the pile has to be very steep.  Scientists call this being in the critical state or having a critical point.  More importantly, it is possible to know when a system such as our imaginary sandpile is in this critical state – the avalanches follow something called a “power law distribution”.

Remember how I described the avalanches earlier?  The vast majority were quite small, a few were of moderate size but only rarely, very rarely, did the pile completely collapse.  This is actually a pretty good description of a power law distribution.  
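
You can check this against the sandpile itself.  Tally the avalanche sizes the sketch above recorded and look at them on log-log axes, where a power law shows up as a roughly straight line.  A minimal check, assuming the `sizes` list produced by the earlier sketch:

```python
from collections import Counter
import math

# On log-log axes a power law P(s) ~ s^(-alpha) is a straight line with
# slope -alpha; print (log size, log frequency) pairs to eyeball it.
counts = Counter(s for s in sizes if s > 0)  # skip drops that toppled nothing
for size in sorted(counts):
    print(f"{math.log10(size):6.2f}  {math.log10(counts[size]):6.2f}")
```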

Lots of natural phenomena follow power laws.  Earthquakes are the best example.  There are many small earthquakes every day.  Every once in a while there is a moderate sized tremor but only rarely, fortunately, are there extremely large earthquakes.

The internet follows a power law (many websites with only a few links to them, and only a few, like Google or Amazon, with a vast number).  Wars, if we think about casualties, also follow a power law (there are a thousand Anglo-Zanzibar Wars or Wars of the Jülich Succession for every World War II).  Even acts of terrorism follow a power law.
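
The exponent of a suspected power law can also be estimated directly from data rather than eyeballed from a plot.  Here is a minimal sketch using the standard maximum-likelihood estimator (the continuous approximation from Clauset, Shalizi and Newman's 2009 paper on fitting power laws; the cutoff `x_min` below is illustrative, not prescribed):

```python
import math

def powerlaw_alpha(data, x_min):
    """Maximum-likelihood estimate of the exponent alpha of a power-law
    tail P(x) ~ x^(-alpha) for x >= x_min."""
    tail = [x for x in data if x >= x_min]
    if not tail:
        raise ValueError("no observations at or above x_min")
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# e.g., on the avalanche sizes recorded earlier (x_min is a judgment call):
# print(powerlaw_alpha(sizes, x_min=10))
```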

And the consequences of all this for intelligence analysts?  It fundamentally changes the question we ask ourselves.  It suggests we should focus less on the grains of sand and what impact they will have on the sandpile and spend more resources trying to understand the sandpile itself.

Consider the current crisis in Crimea.  It is tempting to watch each news report as it rolls in and to speculate on the effect of that piece of news on the crisis.  Will it make it worse or better?  And to what degree?  

But what of the sandpile?  Is the Crimean crisis in a critical state or not?  If it is, then it is also in a state where a black swan event could arise but the piece of news (i.e. particular grain of sand) that will cause it to appear is unpredictable.  If not, then perhaps there is more time (and maybe less to worry about).  

We may not be able to tell decisionmakers when the pile will collapse but we might be able to say that the sandpile is so carefully balanced that a single grain of sand will, eventually, cause it to collapse.  Efforts to alleviate the crisis, such as negotiated ceasefires and diplomatic talks, can be seen as ways of trying to take the system out of the critical state, of draining sand from the pile.

Modeling crises in this way puts a premium on context and not just collection.  What is more important is that senior decisionmakers know that this is what they need.  As then-MG Michael Flynn noted in his 2010 report, Fixing Intel:
"Ignorant of local economics and landowners, hazy about who the powerbrokers are and how they might be influenced, incurious about the correlations between various development projects and the levels of cooperation among villagers, and disengaged from people in the best position to find answers – whether aid workers or Afghan soldiers – U.S. intelligence officers and analysts can do little but shrug in response to high level decision-makers seeking the knowledge, analysis, and information they need to wage a successful counterinsurgency."
The bad news is that the science of complexity has not, to the best of my knowledge, been able to successfully model anything as complicated as a real-time political crisis.  That doesn't erase the value of the research so far; it only means that there is more research left to do.

In the meantime, analysts and decisionmakers should start to think more aggressively about what it really means to model the sandpile of real-world intelligence problems, comforted by the idea that there might finally be a useful way to analyze black swans.

6 comments:

Anonymous said...

Sir,

A better historical case study / example may be the 1979 Iranian Revolution, which deposed the Shah, brought religious fundamentalism to power, and thus resulted in the continuing influence of militant Shia movements across the MENA region.

Semper Fidelis,
MJD

Anonymous said...

"Two images, “black swans” and “perfect storms”, have struck the public’s imagination and are used –at times indiscriminately- to describe the unthinkable or the extremely unlikely. These metaphors have been used as excuses to wait for an accident to happen before taking risk management measures, both in industry and government. These two images represent two distinct types of uncertainties (epistemic and aleatory). Existing statistics are often insufficient to support risk management because the sample may be too small and the system may have changed. Rationality as defined by the von Neumann axioms(1) leads to a combination of both types of uncertainties into a single probability measure -Bayesian probability- and accounts only for risk aversion. Yet, the decision maker may also want to be ambiguity-averse. This paper presents an engineering risk analysis perspective on the problem, using all available information in support of proactive risk management decisions and considering both types of uncertainty." - On "Black swans" and "Perfect storms": Risk analysis and management when statistics are not enough, Pate-Cornell, Stanford University, Dec 2011

Even the so-called "Black swan" economic events of 2008 were not true "Black swans." There is ample evidence that the I&W were suppressed due to rampant conflicts of interest introduced into the systems affected, resulting from public policy shaped by "Economic Elite Domination" (Gilens & Page; I also have an old friend on K St who was involved in lobbying for the repeal of Glass-Steagall in the 90s, which was a critical enabler...). Some even made money by correctly predicting and betting on the inevitable collapse of highly leveraged financial products, because they had done the proper analysis and reached logical conclusions. Just because the regulators and the rest of the public did not see the "writing on the wall" does not, in and of itself, support the label "Black swan."

Could the relative "success" of IC Prediction Markets have something more to do with the ability of these markets to navigate around political, ideological, and intellectual biases present in traditional public-sector analysis?

John Hoven said...

Among qualitative researchers, black swans and unknown unknowns are routine and expected. Their investigative methods are explicitly designed to discover answers to the questions we didn't think to ask.

Here are some excerpts from two of the leading methods texts:

"Quantitative methods assume that researchers already know both the key problems and the answer categories; these types of questions ... often missed turning points, subtleties, and cross pressures... We discovered quickly that our initial ideas of what we had to find out were often wrong, so the questions we had planned in advance would miss the mark... In exploratory studies, you listen for unanticipated material... In these cases, the follow-up questions may dominate the discussion..." (Rubin & Rubin. Qualitative Interviewing 2012: 9, 10, 122)

"Random sampling is a gold standard of quantitative research but is used quite minimally in qualitative research... Our sampling tends to be more strategic and purposive... At each step along the evidential trail, we are making sampling decisions to clarify the main patterns, see contrasts, identify exceptions or discrepant instances, and uncover negative instances... Three kinds of instances have great payoff. The first is the apparently “typical” or “representative” instance. If you can find it, try to find another one. The second is the “negative” or “disconfirming” instance; it gives you both the limits of your conclusions and the point of greatest variation. The third is the “exceptional” or “discrepant” instance... Any given finding usually has exceptions. The temptation is to smooth them over, ignore them, or explain them away. But the outlier is your friend... It not only tests the generality of the finding but also protects you against self-selecting biases and may help you build a better explanation." (Miles, Huberman, and SaldaƱa. Qualitative data analysis 2013: 32, 33, 36, 301)

Anonymous said...

The sandpile already collapsed in Ukraine. Successive rounds of NATO expansion, EU expansion, color revolutions, missile defense, and other Western (and some Russian) policies were the grains of sand. The precipitating event/grain of sand was the coup in Kiev. Gordon Hahn

Anonymous said...

The black swan event, as described in the blog post “How To Analyze Black Swans” by Kristan Wheaton, is an incredibly useful metaphor for intelligence analysts attempting to predict what may or may not occur. Wheaton writes that “black swans … exist but are so uncommon that no one would predict that the next swan they would see would be a black one”. The predictability of an event, or the lack of it, is an important aspect of intelligence analysis.
This is discussed in great detail in Mark M. Lowenthal’s book Intelligence: From Secrets to Policy. Lowenthal states that by looking at the “likelihood of an event and its relative importance to national security”, one has a better chance of judging how much attention it deserves. He explains this through the Importance Versus Likelihood model, a method that categorizes events by their level of importance and their level of probability. For example, an event categorized as high importance and high likelihood is of great concern to both analysts and policy makers, while an event categorized as low importance and low likelihood is of less concern. The problem with this model is that black swan events occur unexpectedly and unpredictably, disrupting the model.
The black swan metaphor is important to remember in intelligence analysis because it warns analysts against overconfidence when predicting events which may or may not occur. Greg Fyffe’s response to Wheaton’s blog post states that “when an assessment failure is reviewed we see a sequence of events that did occur, and look at the information that the analyst could have used, and the possibilities that could have been taken more seriously”. Black swan events are a learning tool, as they allow for assessment and review, and this is the consolation for these unexpected events.

Wheaton, Kristan (2014, April 21). How To Analyze Black Swans [Blog post]. Retrieved from http://sourcesandmethods.blogspot.ca/
Lowenthal, Mark M. Intelligence: From Secrets to Policy (5th ed.). Thousand Oaks: CQ Press, 2012. p. 60.
Fyffe, Greg (2014, April 23). Intelligence Assessment and Unpredictability (Follow Up To: How To Analyze Black Swans) [Blog post]. Retrieved from http://sourcesandmethods.blogspot.ca/

Unknown said...

Hi Kris,

I am also working on this problem. Check out my blog entry on a potential technique for bracketing black swans: http://research.ridgway.pitt.edu/blog/category/steve-coultharts-analysis/

In the next month we will be posting an example of this technique.

Best,
Steve