Wednesday, April 23, 2014

Intelligence Assessment And Unpredictability (Follow Up To: How To Analyze Black Swans)

(Note:  I have known Greg Fyffe, former Executive Director of the Intelligence Assessment Secretariat in the Privy Council Office in Canada, for a number of years.  His long experience in intelligence, and specifically in intelligence analysis, places him in a unique position with respect to the Black Swan issue.  He took the time to share some of his thoughts with me and others in the International Association For Intelligence Education Forum recently.  I asked him if he would allow me to re-print his comments here and he agreed.)

By Greg Fyffe
(Executive Director, Intelligence Assessment Secretariat, Privy Council Office, Ottawa, 2000-2008)

Kris Wheaton’s reflection on his class discussion (How to Analyze Black Swans) is useful in understanding the challenge of elaborating future possibilities in intelligence assessments. Mark Lowenthal’s comment on Black Swans also illuminates an important aspect of uncertainty—while a situation may be completely mysterious to some observers, others may be very familiar with every aspect of it. Their knowledge, however, may not be transferable to intelligence analysts. 

While many assessments provide useful contextual background, the most helpful to clients are those which are insightful in forecasting what will happen next. This is obviously not always easy. These are my observations based on eight years as the head of an intelligence analysis unit. 

Some events are high impact and high probability. Analysts may not be alone in seeing the significance of a developing situation, but if they can add evidence to the estimate, that is helpful; additional evidence that an event may well occur justifies more definite action to prepare for it.  

Analysts frequently need to look at possibilities which are low probability but high impact.  There are many ways of doing this, including scenario-building, or other speculative techniques, but since there are many significant but low probability possibilities, there are obvious risks in taking expensive preventative action.

Kris uses the pile of sand metaphor to describe a situation which appears increasingly volatile. For analysts this is a very common analytical challenge. The possible event is high impact, high probability but also unpredictable. Even though a possible outcome appears highly probable, it may never occur, or it may be triggered by random events, so even though analysts should have seen it coming, they could not. A common example of this is an authoritarian regime. Rulers are unpopular, the economy is distorted, security services are cruel and hated, and the population is unable to express dissent. If something triggers a revolt, a mass uprising can quickly overturn a regime which seemed in complete control. But sometimes the trigger event never happens and an authoritarian regime is able to stay in power for a very long period. The fall of the Soviet Union is an example of this. Analysts could describe the vulnerabilities of the regime, but the regime's ability to contain those vulnerabilities was effective for decades. 

Maybe the eventual outcome is truly inevitable in the long run, but if the time possibilities range over many years, then analysts cannot offer much useful anticipation to clients. In these cases the role of analysis is “building block” assessments, elaborating the reasons why the underlying trends in a political system are potentially explosive, describing possible warning signs for the explosion, and paying close attention to intelligence or diplomatic reporting that may suggest the explosion is an imminent possibility. In the end, the trigger may be something that was not foreseen, because it could not be foreseen. 

A Tunisian vendor burns himself alive, and suddenly becomes the symbol and rallying point for a suppressed population.  

The classic challenge for intelligence systems is the surprise planned by a few. An event seems unlikely or illogical, but a few do know, and intelligence about their knowledge could make a difference. This category could be broken down into the monitored and unmonitored. In the former, possible perpetrators are being watched (as in 9/11) but their actions are still a surprise. In the unmonitored situation, the actors were not being tracked and their actions were therefore not anticipated. A new terrorist group, in a region not considered at risk, is only seen after its first attack. Events in these two categories are at least potentially knowable, since there are people with knowledge and they are probably communicating. However, intelligence systems, unless blessed by luck, do not have the resources to look everywhere with intensity, and if they do, the amount of information produced may simply lead to another type of assessment dilemma—unmanageable volume. 

Some events with high impact are completely unpredictable. These might include the early death of an important leader, some types of natural disasters, an act by someone who was mentally disturbed, or more frequently, the third and fourth order consequences of an event which were not foreseen because the possible combinations after the initial event were simply too numerous or hidden to comprehend. The possibilities extend beyond imagination. 

What do these variations mean for intelligence systems and particularly for intelligence assessment? 

  • First, attempts to guess, ask “what if” questions, and other speculative techniques, are necessary to identify possibilities, particularly those worth further analysis.
  • Second, background pieces which identify curious combinations of circumstances, inexplicable courses of action by key actors, or updated contextual information, are necessary to track pressures which may be building to a critical point.
  • Third, background and speculative pieces need to be widely shared so that the possibility increases that someone will have additional insight into what they mean.
  • Fourth, there will always be events that were not foreseen. The challenge for the analyst is to quickly assess their significance and describe them vividly. 
  • Fifth, we cannot foresee all the possible consequences of an event, but if analysts are alert to possibilities they can forecast as events develop. As one possibility becomes a reality and others are eliminated, analysts can then predict another set of possibilities. It is very difficult to see third and fourth order consequences, but it is helpful to clients if the running forecast can see the immediate and secondary consequences of a developing story. 
  • Sixth, some high probability events will never occur, despite all the evidence that they should. Analysis typically describes pressures that are building, but also the counter-forces that may contain them. A prediction can be made on the balance of probabilities or apparent trends, but in the end the pressures for change do not produce change. A volatile unstable situation continues to be volatile and unstable. 

We often refer to “noise” in intelligence as the reason for not seeing what was about to happen—there are so many bits of information about future developments that the truly significant pieces are only seen clearly after events have taken place. The other dimension of this dilemma is that the analyst faces a very large number of future possibilities, whatever their level of probability, and whether or not there is supportive evidence. 

When an assessment failure is reviewed we see the sequence of events that did occur, look at the information that the analyst could have used, and consider the possibilities that could have been taken more seriously. 


The analyst asking important questions often faces a vast amount of information of marginal utility or reliability, not enough high relevance and high quality intelligence, and a large number of plausible futures. The truly gifted individual, or the truly functional assessment process, is still able to assess likely outcomes often enough to be indispensable to decision-makers. 

Monday, April 21, 2014

How To Analyze Black Swans

Black swans, we are told, are both rare and dramatic.  They exist but they are so uncommon that no one would predict that the next swan they would see would be a black one.  

The black swan has become a metaphor for the limits of the forecasting sciences.  At its best, it is a warning against overconfidence in intelligence analysis.  At its worst (and far too often it is at its worst), the black swan is an excuse for not having wrung every last bit of uncertainty out of an estimate before we make it.

One thing does seem clear, though:  We can have all the information and structured analytic techniques we want but we can’t do a damn thing in advance about true black swan events.  They are, by definition, unpredictable.

Or are they?

Imagine a single grain of sand falling on a table.

And then another.  And then another.  While it would take quite some time, eventually you would have… well… a pile of sand.

Now, imagine this pile of sand growing higher and higher as each single grain falls.  The grains balance precariously against each other, their uneven edges forming an unsteady network of weight and weaknesses, of strengths and stored energy, a network of near immeasurable complexity.  

Finally, the sandpile reaches a point where every time a single grain falls it triggers an avalanche.  The vast majority of times the avalanches are small, a few grains rolling down the side of the sandpile.  Occasionally, the avalanches are larger, a side of the pile collapsing, shearing off as if it had been cut away with a knife.

Every once in a while – every once in a long while – the falling of a single grain triggers a catastrophe and the entire pile collapses, spilling sand off the edge of the table and onto the floor.

The sandpile analogy is a classic in complexity science but I think it holds some deep lessons for intelligence analysts trying to understand black swan events.

Just as we cannot predict black swan events, we cannot predict which precise grain of sand will bring the whole sandpile down.

Yet, much of modern intelligence focuses almost exclusively on collecting and analyzing the grains of sand – the information stream that makes up all modern intelligence problems.  In essence, we spend millions, even billions, of dollars examining each grain, each piece of information, in detail, trying to figure out what it will likely do to the pile of sand, the crisis of the day.  We forecast modest changes, increased tensions, countless small avalanches and most of the time we are right (or right enough).

Yet, we still miss the fall of the Soviet Union in 1991, the Arab Spring of 2010, and the collapse of the sandpile that began with a single grain.

What can we do?  It seems as though intelligence analysts are locked in an intractable cycle, constant victim to the black swan.

What we can do is to move our focus away from the incessant drumbeat of events as they happen (i.e. the grains of sand) and re-focus our attention on the thing we can assess:  The sandpile. 

It turns out that “understanding the sandpile” is something that complexity scientists have been doing for quite some time.  We know more about it than you might think and what is known has real consequences for intelligence.

[Image: An example of a power law, or long-tail, distribution]
In the first place, for the sandpile to exhibit this bizarre behavior where a single grain of sand can cause it to collapse or a single small incident can trigger a crisis, the pile has to be very steep.  Scientists call this being in the critical state or having a critical point.  More importantly, it is possible to know when a system such as our imaginary sandpile is in this critical state – the avalanches follow something called a “power law distribution”.

Remember how I described the avalanches earlier?  The vast majority were quite small, a few were of moderate size but only rarely, very rarely, did the pile completely collapse.  This is actually a pretty good description of a power law distribution.  
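The avalanche pattern described above can be reproduced with a minimal sketch of the classic Bak-Tang-Wiesenfeld sandpile model (the grid size, toppling threshold, and drop counts below are illustrative choices, not drawn from any particular study): grains are dropped one at a time, any cell holding four or more grains topples one grain to each neighbor, and grains that roll off the edge of the table are lost.

```python
import random

def drop_grain(grid, size):
    """Drop one grain on a random cell, then topple until stable.
    A cell with 4 or more grains topples, sending one grain to each
    neighbor; grains that fall off the edge of the table are lost.
    Returns the avalanche size (total number of topple events)."""
    i, j = random.randrange(size), random.randrange(size)
    grid[i][j] += 1
    topples = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        topples += 1
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size:
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topples

random.seed(42)
SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]

# Let the pile build up into the critical state before measuring.
for _ in range(20000):
    drop_grain(grid, SIZE)

# Now record 20,000 avalanches and compare small vs. large ones.
sizes = [drop_grain(grid, SIZE) for _ in range(20000)]
small = sum(1 for s in sizes if s <= 5)
large = sum(1 for s in sizes if s > 100)
print(f"avalanches of <=5 topplings: {small}, of >100 topplings: {large}")
```

Running it shows the signature of the critical state: thousands of avalanches of a few topplings and, typically, only a rare handful involving hundreds.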

Lots of natural phenomena follow power laws.  Earthquakes are the best example.  There are many small earthquakes every day.  Every once in a while there is a moderate sized tremor but only rarely, fortunately, are there extremely large earthquakes.

The internet follows a power law (many websites with few links to them but only a few like Google or Amazon).  Wars, if we think about casualties, also follow a power law (There are a thousand Anglo-Zanzibar Wars or Wars of Julich Succession for every World War 2).  Even acts of terrorism follow a power law.
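For readers who want to see what "follows a power law" means quantitatively, the sketch below generates synthetic power-law data by inverse-transform sampling and then recovers the exponent with the standard maximum-likelihood (Hill) estimator; the exponent of 2.5, the cutoff of 1.0, and the sample size are arbitrary illustrative choices.

```python
import math
import random

def powerlaw_sample(alpha, xmin, n, rng):
    """Draw n samples from a continuous power law p(x) ~ x^(-alpha),
    x >= xmin, via inverse-transform sampling."""
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def estimate_alpha(xs, xmin):
    """Maximum-likelihood (Hill) estimate of the power-law exponent:
    alpha_hat = 1 + n / sum(ln(x_i / xmin))."""
    return 1 + len(xs) / sum(math.log(x / xmin) for x in xs)

rng = random.Random(0)
data = powerlaw_sample(alpha=2.5, xmin=1.0, n=50000, rng=rng)

# The heavy tail: most observations sit near xmin, a few are enormous.
print(f"largest / median sample: {max(data) / sorted(data)[len(data) // 2]:.0f}")
print(f"estimated exponent: {estimate_alpha(data, 1.0):.2f}")  # close to 2.5
```

The same estimator, applied to real avalanche, earthquake, or casualty data above a chosen cutoff, is how analysts check whether a system's fluctuations are actually power-law distributed.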

And the consequences of all this for intelligence analysts?  It fundamentally changes the question we ask ourselves.  It suggests we should focus less on the grains of sand and what impact they will have on the sandpile and spend more resources trying to understand the sandpile itself.

Consider the current crisis in Crimea.  It is tempting to watch each news report as it rolls in and to speculate on the effect of that piece of news on the crisis.  Will it make it worse or better?  And to what degree?  

But what of the sandpile?  Is the Crimean crisis in a critical state or not?  If it is, then it is also in a state where a black swan event could arise but the piece of news (i.e. particular grain of sand) that will cause it to appear is unpredictable.  If not, then perhaps there is more time (and maybe less to worry about).  

We may not be able to tell decisionmakers when the pile will collapse but we might be able to say that the sandpile is so carefully balanced that a single grain of sand will, eventually, cause it to collapse.  Efforts to alleviate the crisis, such as negotiated ceasefires and diplomatic talks, can be seen as ways of trying to take the system out of the critical state, of draining sand from the pile.

Modeling crises in this way puts a premium on context and not just collection.  What is more important is that senior decisionmakers know that this is what they need.  As then-MG Michael Flynn noted in his 2010 report, Fixing Intel:
"Ignorant of local economics and landowners, hazy about who the powerbrokers are and how they might be influenced, incurious about the correlations between various development projects and the levels of cooperation among villagers, and disengaged from people in the best position to find answers – whether aid workers or Afghan soldiers – U.S. intelligence officers and analysts can do little but shrug in response to high level decision-makers seeking the knowledge, analysis, and information they need to wage a successful counterinsurgency."
The bad news is that the science of complexity has not, to the best of my knowledge, been able to successfully model anything as complicated as a real-time political crisis.  That doesn't erase the value of the research so far, it only means that there is more research left to do.

In the meantime, analysts and decisionmakers should start to think more aggressively about what it really means to model the sandpile of real-world intelligence problems, comforted by the idea that there might finally be a useful way to analyze black swans.

Thursday, April 17, 2014

How To Get An Intel Job Using LinkedIn

Most job seekers are very familiar with job-related search engines such as Monster or Indeed or, among intel professionals, sites such as USAJobs.  One of the most effective and efficient websites for finding a job, however, is LinkedIn, and it is surprising how many job hunters fail to take advantage of its powerful tools for searching and applying for intel-related jobs.

There are, of course, some legitimate security concerns among some sectors of the intelligence profession regarding social media sites like LinkedIn.  However, for the vast majority of job seekers - particularly entry-level job seekers and particularly for those interested in intelligence in business positions - LinkedIn is a must-use site.

Much of the power of LinkedIn comes from the size of an individual's network.  The larger and more diverse the network, the better the results.  Thus, the first step for individuals trying to break into intelligence is to build a network.  The time to do this is not three months before graduation, however.  

Intelligence professionals are a cautious bunch and are highly unlikely to take kindly to spam-y requests to connect from unknown or unverified contacts.  A far better avenue is to start early in your academic career and to build contacts slowly.  Start with professors, alumni or others you know who are already in the business or already have contacts with intelligence professionals.  Join organizations such as SCIP and IACA.  But don't just join them - become as active in your local chapter as you can.

Within LinkedIn, try to identify some specialty groups where active discussions take place in an area of the intelligence field in which you are interested.  There are a number of these groups and joining them, listening to others discuss the issues of the day and, eventually, contributing to them with cogent, well-thought-out observations of your own is a good way to steadily build a reputation within the community.

All this is the undercard, however, to the main event:  The LinkedIn job search feature.

At the top of every LinkedIn page is a Jobs Link.  Clicking on that link takes you to a jobs page with a number of interesting sections.  First is the Jobs You May be Interested In section.  Here, based on your profile, LinkedIn tries to guess which jobs might be right for you.  

LinkedIn also looks for commonalities among employers within your network (under the premise, perhaps, that if all your professional contacts work for Company X, you might be interested in a job there as well) and offers job recommendations along those lines in its Discover Jobs in Your Network section.  If your network is still small (and it may well be if you are looking for an entry level job), this latter section may not be much help.

You can also search for jobs in the search box near the top of the page but this search box defaults to a local search.  If you live in a small town, such a narrow area search is unlikely to reveal many intelligence related jobs.

However, the search bar that always sits at the very top of the page allows you to easily conduct national level searches for jobs.  Getting good results typically depends on which search terms you use (a more difficult question in the world of intelligence in business...) but a good starting place is simply "intelligence analyst".

LinkedIn will typically generate a number of jobs from this kind of search.  If it generates too many or the search needs to be otherwise focused, you can use the filters found on the left hand side of the page.  Two of the most useful are Experience and Relationships.  Experience is easy to understand and LinkedIn helpfully tells you how many jobs there are currently within each experience level.  For example, in the screenshot on the left (taken from my profile), you can see LinkedIn found 127 entry-level jobs.  

The Relationships filter helps you understand who you know that works at the same company or organization as the job advertised.  For example, I have a first level connection to 351 individuals at companies or organizations who are currently offering intelligence analyst jobs.  Knowing that someone in your network works at a company you are considering is very helpful.  You can ping them for insights about what it is like to work at that company or, if they know you well (not always true on social networks like LinkedIn), they might be able to refer you within their own system, giving you somewhat of an edge in the hiring process.

The Job Profile itself offers similar information.  First, a simple click on the People In Your Network link (if any) will let you know who you know who works at a given company or organization.  It will also give you insight into the second level contacts to which you might be able to be introduced. More importantly, perhaps, for entry-level job seekers with smaller networks is the link to Similar Jobs.

Finally, once you have selected a job of interest, be sure to look for the Apply Now button.  This allows you to apply for the job directly through LinkedIn.  Otherwise, you typically have to apply on the company's website.  A final hint is to check LinkedIn jobs often.  In the two days it has taken me to write this post, I have seen a number of jobs pop up and disappear.


This is just a start, however.  There are any number of other tips and tricks to using LinkedIn to get a job.  If you know any, please leave a comment!

Monday, April 14, 2014

Forecasting Recessions: Economists' Record Of Failure "Unblemished"

From:  http://www.voxeu.org/article/predicting-economic-turning-points
"All my economist's say, 'On the one hand or the other.'  What I need is a one-handed economist!" -- Harry Truman.

Sorry, Harry, apparently even that won't help.

A new study out of the Centre for Economic Policy Research in the UK indicates that, for the last thirty years at least, "the record of failure to predict recessions is virtually unblemished."

Ouch!

Take a look at the chart to the right.  It is a little hard to interpret but it starts about 2 years prior to the onset of an average of all the recessions over the last 20 years.  The top blue line represents the "normal" evolution of forecasts regarding GDP growth, that is, in a non-recessionary environment.  On average this is about 3% per year across all of the countries studied.

The forecasts from economists - the red bars - start out pretty close to this norm but begin to drop below the norm at the 8-10 month point.  While, on average, the forecasts continue to decline over the year preceding a recession, they still miss the mark (albeit slightly) even at the end of the year.  In other words, they get less wrong by the end of the year but they are still all - as in all - wrong.  The authors indicate that this paper replicates the results found by a 1990s paper that looked at the same effect over an earlier time period.  The effect is even worse when looking at recessions that develop after banking crises.
Note:  The bottom blue line which shows the actual average GDP growth is positive because, as the authors point out:  "on average, growth is not negative during recessions in advanced economies because the dating of recession episodes is based on the quarterly data and annual growth tends to remains positive during many recessions."  'Nuff said.
The authors also add that there are three schools of thought about why these forecasts are so uniformly incorrect:  Economists don't have enough information, don't have the incentive, or aren't good enough Bayesians (i.e. hold on to their priors too long) to make accurate forecasts.  The jury is still out with regard to the actual reason but the effect seems like the kind of thing an intel analyst would want to account for when using macroeconomic forecasts in other than business analyses.

(Tip of the Hat to Allen T. for the link!)

Friday, April 11, 2014

Another First For Mercyhurst! School Of Intelligence Studies and Information Sciences Announced Today!

[Image: Tom Ridge, former PA Governor and first Secretary of Homeland Security, speaks at the opening of the School of Intelligence Studies and Information Sciences]
Today, Mercyhurst University announced that the Department of Intelligence Studies would be merged with the Department of Math and Computer Science and the Department of Communications to form the seventh school within the University:  The Tom Ridge School of Intelligence Studies and Information Sciences.

Named after former Pennsylvania governor and first Secretary of Homeland Security, Tom Ridge, the new school takes its place among more traditional schools such as the School of Social Sciences and the School of Business...

(Sounds like a damn press release.   If your readers wanted that, they should go here.  You should give them a feel for what this really means...)

This is a big deal.  A really big deal.

In the first place, there is no other University in the country (perhaps in the world) that has a school dedicated to a vision of Intelligence Studies as an applied discipline, that teaches students how to get intelligence done and not just how to talk about it.

Secondly, it is going to allow us to grow our programs exponentially.  First up is a new and complementary masters degree that will focus on data analytics - so-called "big data". My own hope is that we will soon begin to offer a doctorate - but not a PhD - in Applied Intelligence.  I don't know what the new Dean of the School, Dr. Jim Breckenridge, wants it to look like, but I want it to be a professional doctorate, like an MD or a JD, that will focus not only on intelligence analysis but also on the special challenges of leading and managing the intelligence enterprise.

Third, it validates the vision of Bob Heibel, the founder of the Mercyhurst program.  Twenty-two years ago, long before 9/11, before even the first World Trade Center bombing in 1993, Bob had the radical idea that academia could do a pretty good job educating the next generation of intelligence analysts.  Almost 1000 students have graduated from our residential, online degree, or certificate programs since then.  These alumni are today employed throughout the national security, business and law enforcement intelligence communities.

Governor Ridge said today that the nation owes a debt of gratitude to Bob for what he has contributed to the safety and security of the US and, through our international students, of the world.  It is a testament to what one person can do when he really believes in something.