Wednesday, April 23, 2014

Intelligence Assessment And Unpredictability (Follow Up To: How To Analyze Black Swans)

(Note:  I have known Greg Fyffe, former Executive Director of the Intelligence Assessment Secretariat in the Privy Council Office in Canada, for a number of years.  His long experience in intelligence, and specifically in intelligence analysis, places him in a unique position with respect to the Black Swan issue.  He recently took the time to share some of his thoughts with me and others in the International Association For Intelligence Education Forum.  I asked him if he would allow me to re-print his comments here and he agreed.)

By Greg Fyffe
(Executive Director, Intelligence Assessment Secretariat, Privy Council Office, Ottawa, 2000-2008)

Kris Wheaton’s reflection on his class discussion (How to Analyze Black Swans) is useful in understanding the challenge of elaborating future possibilities in intelligence assessments. Mark Lowenthal’s comment on Black Swans also illuminates an important aspect of uncertainty—while a situation may be completely mysterious to some observers, others may be very familiar with every aspect of it. Their knowledge, however, may not be transferable to intelligence analysts. 

While many assessments provide useful contextual background, the most helpful to clients are those which are insightful in forecasting what will happen next. This is obviously not always easy. These are my observations based on eight years as the head of an intelligence analysis unit. 

Some events are high impact and high probability. Analysts may not be alone in seeing the significance of a developing situation, but if they can add evidence to the estimate, that is helpful; additional evidence that an event may well occur justifies more definite action to prepare for it.  

Analysts frequently need to look at possibilities which are low probability but high impact.  There are many ways of doing this, including scenario-building and other speculative techniques, but since there are many significant but low-probability possibilities, there are obvious risks in taking expensive preventative action.

Kris uses the pile of sand metaphor to describe a situation which appears increasingly volatile. For analysts this is a very common analytical challenge. The possible event is high impact and high probability, but also unpredictable. Even though a possible outcome appears highly probable, it may never occur, or it may be triggered by random events, so even though analysts should have seen it coming, they could not. A common example of this is an authoritarian regime. The rulers are unpopular, the economy is distorted, the security services are cruel and hated, and the population is unable to express dissent. If something triggers a revolt, a mass uprising can quickly overturn a regime which seemed in complete control. But sometimes the trigger event never happens, and an authoritarian regime is able to stay in power for a very long period. The fall of the Soviet Union is an example of this. Analysts could describe the vulnerabilities of the regime, but the regime's ability to contain those vulnerabilities was effective for decades. 

Maybe the eventual outcome is truly inevitable in the long run, but if the possible timing ranges over many years, then analysts cannot offer much useful anticipation to clients. In these cases the role of analysis is to provide “building block” assessments: elaborating the reasons why the underlying trends in a political system are potentially explosive, describing possible warning signs for the explosion, and paying close attention to intelligence or diplomatic reporting that may suggest the explosion is an imminent possibility. In the end, the trigger may be something that was not foreseen, because it could not be foreseen. 

A Tunisian vendor burns himself alive, and suddenly becomes the symbol and rallying point for a suppressed population.  

The classic challenge for intelligence systems is the surprise planned by a few. An event seems unlikely or illogical, but a few do know, and intelligence about their knowledge could make a difference. This category could be broken down into the monitored and the unmonitored. In the former, possible perpetrators are being watched (as in 9/11) but their actions are still a surprise. In the unmonitored situation, the actors were not being tracked and their actions were therefore not anticipated. A new terrorist group, in a region not considered at risk, is only seen after its first attack. Events in these two categories are at least potentially knowable, since there are people with knowledge and they are probably communicating. However, intelligence systems, unless blessed by luck, do not have the resources to look everywhere with intensity, and if they do, the amount of information produced may simply lead to another type of assessment dilemma: unmanageable volume. 

Some events with high impact are completely unpredictable. These might include the early death of an important leader, some types of natural disasters, an act by someone who was mentally disturbed, or more frequently, the third and fourth order consequences of an event which were not foreseen because the possible combinations after the initial event were simply too numerous or hidden to comprehend. The possibilities extend beyond imagination. 

What do these variations mean for intelligence systems and particularly for intelligence assessment? 

  • First, attempts to guess, ask “what if” questions, and other speculative techniques, are necessary to identify possibilities, particularly those worth further analysis.
  • Second, background pieces which identify curious combinations of circumstances, inexplicable courses of action by key actors, or updated contextual information, are necessary to track pressures which may be building to a critical point.
  • Third, background and speculative pieces need to be widely shared so that the possibility increases that someone will have additional insight into what they mean.
  • Fourth, there will always be events that were not foreseen. The challenge for the analyst is to quickly assess their significance and describe them vividly. 
  • Fifth, we cannot foresee all the possible consequences of an event, but if analysts are alert to possibilities they can forecast as events develop. As one possibility becomes a reality and others are eliminated, analysts can then predict another set of possibilities. It is very difficult to see third and fourth order consequences, but it is helpful to clients if the running forecast can see the immediate and secondary consequences of a developing story. 
  • Sixth, some high probability events will never occur, despite all the evidence that they should. Analysis typically describes pressures that are building, but also the counter-forces that may contain them. A prediction can be made on the balance of probabilities or apparent trends, but in the end the pressures for change do not produce change. A volatile unstable situation continues to be volatile and unstable. 

We often refer to “noise” in intelligence as the reason for not seeing what was about to happen—there are so many bits of information about future developments that the truly significant pieces are only seen clearly after events have taken place. The other dimension of this dilemma is that the analyst faces a very large number of future possibilities, whatever their level of probability, and whether or not there is supportive evidence. 

When an assessment failure is reviewed, we see the sequence of events that did occur, and look at the information that the analyst could have used and the possibilities that could have been taken more seriously. 


The analyst asking important questions often faces a vast amount of information of marginal utility or reliability, not enough high relevance and high quality intelligence, and a large number of plausible futures. The truly gifted individual, or the truly functional assessment process, is still able to assess likely outcomes often enough to be indispensable to decision-makers. 

Monday, April 21, 2014

How To Analyze Black Swans

Black swans, we are told, are both rare and dramatic.  They exist but they are so uncommon that no one would predict that the next swan they would see would be a black one.  

The black swan has become a metaphor for the limits of the forecasting sciences.  At its best, it is a warning against overconfidence in intelligence analysis.  At its worst (and far too often it is at its worst), the black swan is an excuse for not having wrung every last bit of uncertainty out of an estimate before we make it.

One thing does seem clear, though:  We can have all the information and structured analytic techniques we want but we can’t do a damn thing in advance about true black swan events.  They are, by definition, unpredictable.

Or are they?

Imagine a single grain of sand falling on a table.

And then another.  And then another.  While it would take quite some time, eventually you would have… well… a pile of sand.

Now, imagine this pile of sand growing higher and higher as each single grain falls.  The grains balance precariously against each other, their uneven edges forming an unsteady network of weight and weaknesses, of strengths and stored energy, a network of near immeasurable complexity.  

Finally, the sandpile reaches a point where any single falling grain can trigger an avalanche.  The vast majority of times the avalanches are small, a few grains rolling down the side of the sandpile.  Occasionally, the avalanches are larger, a side of the pile collapsing, shearing off as if it had been cut away with a knife.

Every once in a while – every once in a long while – the falling of a single grain triggers a catastrophe and the entire pile collapses, spilling sand off the edge of the table and onto the floor.

The sandpile analogy is a classic in complexity science but I think it holds some deep lessons for intelligence analysts trying to understand black swan events.
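
To make the metaphor concrete: the sandpile described above is usually formalized as the Bak-Tang-Wiesenfeld model from complexity science, and it is simple enough to simulate in a few dozen lines. The sketch below is my own illustration rather than anything from the original post; the grid size, the toppling threshold of four grains, and the number of grains dropped are arbitrary choices. Grains fall one at a time, any over-full cell topples onto its four neighbors (grains that roll off the edge of the table are lost), and the size of each cascade is recorded.

```python
# Minimal sketch of a Bak-Tang-Wiesenfeld style sandpile (illustrative only).
# SIZE, THRESHOLD, and GRAINS are arbitrary choices, not values from the post.
import random
from collections import deque

SIZE = 20        # side length of the "table" (a SIZE x SIZE grid)
THRESHOLD = 4    # a cell topples once it holds this many grains
GRAINS = 20000   # total grains dropped, one at a time

grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []   # number of topple events triggered by each dropped grain

for _ in range(GRAINS):
    # Drop a single grain on a random cell.
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    grid[r][c] += 1

    # Relax the pile: every over-full cell sheds THRESHOLD grains, one to each
    # neighbor.  Grains pushed past the edge of the table simply disappear.
    unstable = deque([(r, c)]) if grid[r][c] >= THRESHOLD else deque()
    topples = 0
    while unstable:
        i, j = unstable.popleft()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        topples += 1
        if grid[i][j] >= THRESHOLD:          # still over-full, topple again later
            unstable.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    if topples:
        avalanche_sizes.append(topples)

if avalanche_sizes:
    avalanche_sizes.sort()
    print("avalanches recorded:  ", len(avalanche_sizes))
    print("median avalanche size:", avalanche_sizes[len(avalanche_sizes) // 2])
    print("largest avalanche:    ", avalanche_sizes[-1])
```

Run long enough, the pile settles into exactly the behavior the metaphor describes: most drops cause nothing or a slide of a handful of grains, while a rare, outwardly indistinguishable drop sets off a cascade that sweeps across much of the table.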

Just as we cannot predict black swan events, we cannot predict which precise grain of sand will bring the whole sandpile down.

Yet, much of modern intelligence focuses almost exclusively on collecting and analyzing the grains of sand – the information stream that makes up all modern intelligence problems.  In essence, we spend millions, even billions, of dollars examining each grain, each piece of information, in detail, trying to figure out what it will likely do to the pile of sand, the crisis of the day.  We forecast modest changes, increased tensions, countless small avalanches and most of the time we are right (or right enough).

Yet, we still miss the fall of the Soviet Union in 1991, the Arab Spring of 2010, and the collapse of the sandpile that began with a single grain.

What can we do?  It seems as though intelligence analysts are locked in an intractable cycle, constant victim to the black swan.

What we can do is to move our focus away from the incessant drumbeat of events as they happen (i.e. the grains of sand) and re-focus our attention on the thing we can assess:  The sandpile. 

It turns out that “understanding the sandpile” is something that complexity scientists have been doing for quite some time.  We know more about it than you might think and what is known has real consequences for intelligence.

[Figure: An example of a power law, or long-tail, distribution]

In the first place, for the sandpile to exhibit this bizarre behavior where a single grain of sand can cause it to collapse or a single small incident can trigger a crisis, the pile has to be very steep.  Scientists call this being in the critical state or having a critical point.  More importantly, it is possible to know when a system such as our imaginary sandpile is in this critical state – the avalanches follow something called a “power law distribution”.

Remember how I described the avalanches earlier?  The vast majority were quite small, a few were of moderate size but only rarely, very rarely, did the pile completely collapse.  This is actually a pretty good description of a power law distribution.  

Lots of natural phenomena follow power laws.  Earthquakes are the best example.  There are many small earthquakes every day.  Every once in a while there is a moderate sized tremor but only rarely, fortunately, are there extremely large earthquakes.

The internet follows a power law (many websites with few links to them but only a few like Google or Amazon).  Wars, if we think about casualties, also follow a power law (there are a thousand Anglo-Zanzibar Wars or Wars of the Jülich Succession for every World War II).  Even acts of terrorism follow a power law.
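
If it helps to see what "follows a power law" means in numbers, here is a small, self-contained sketch. It is my own illustration, with an arbitrary exponent and sample size rather than any data from the post: it draws synthetic event sizes from a Pareto (power-law) distribution and recovers the tail exponent with the standard maximum-likelihood estimator.

```python
# Illustrative only: synthetic power-law "events" and the usual MLE for the
# tail exponent (alpha_hat = 1 + n / sum(ln(x_i / x_min))).  ALPHA, X_MIN,
# and N are arbitrary choices, not values taken from the post.
import math
import random

random.seed(1)

ALPHA = 2.5      # true tail exponent of the synthetic data
X_MIN = 1.0      # smallest possible event size
N = 100_000      # number of simulated events (avalanches, quakes, wars...)

# Inverse-transform sampling: a uniform draw U gives a Pareto variate.
events = [X_MIN * (1.0 - random.random()) ** (-1.0 / (ALPHA - 1.0))
          for _ in range(N)]

# Maximum-likelihood estimate of the tail exponent.
alpha_hat = 1.0 + N / sum(math.log(x / X_MIN) for x in events)

events.sort()
print(f"estimated exponent: {alpha_hat:.2f}  (true value {ALPHA})")
print(f"median event size:  {events[N // 2]:.1f}")
print(f"99.9th percentile:  {events[int(0.999 * N)]:.1f}")
print(f"largest event:      {events[-1]:.1f}")
```

The printed numbers show the signature shape described above: the median event is tiny, the 99.9th percentile is much larger, and the single largest event is orders of magnitude bigger still, which is why averages and "typical cases" are poor guides in systems like this.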

And the consequences of all this for intelligence analysts?  It fundamentally changes the question we ask ourselves.  It suggests we should focus less on the grains of sand and what impact they will have on the sandpile and spend more resources trying to understand the sandpile itself.

Consider the current crisis in Crimea.  It is tempting to watch each news report as it rolls in and to speculate on the effect of that piece of news on the crisis.  Will it make it worse or better?  And to what degree?  

But what of the sandpile?  Is the Crimean crisis in a critical state or not?  If it is, then it is also in a state where a black swan event could arise but the piece of news (i.e. particular grain of sand) that will cause it to appear is unpredictable.  If not, then perhaps there is more time (and maybe less to worry about).  

We may not be able to tell decisionmakers when the pile will collapse but we might be able to say that the sandpile is so carefully balanced that a single grain of sand will, eventually, cause it to collapse.  Efforts to alleviate the crisis, such as negotiated ceasefires and diplomatic talks, can be seen as ways of trying to take the system out of the critical state, of draining sand from the pile.

Modeling crises in this way puts a premium on context and not just collection.  What is more important is that senior decisionmakers know that this is what they need.  As then-MG Michael Flynn noted in his 2010 report, Fixing Intel:
"Ignorant of local economics and landowners, hazy about who the powerbrokers are and how they might be influenced, incurious about the correlations between various development projects and the levels of cooperation among villagers, and disengaged from people in the best position to find answers – whether aid workers or Afghan soldiers – U.S. intelligence officers and analysts can do little but shrug in response to high level decision-makers seeking the knowledge, analysis, and information they need to wage a successful counterinsurgency."
The bad news is that the science of complexity has not, to the best of my knowledge, been able to successfully model anything as complicated as a real-time political crisis.  That doesn't erase the value of the research so far; it only means that there is more research left to do.

In the meantime, analysts and decisionmakers should start to think more aggressively about what it really means to model the sandpile of real-world intelligence problems, comforted by the idea that there might finally be a useful way to analyze black swans.