Friday, August 16, 2019

How To Think About The Future (Part 2 - What Do You Control?)


I am writing a series of posts about how to think about the future.  In case you missed Part 1, you can find it here:

How To Think About The Future (Part 1 -- Questions About Questions)

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.


The great Stoic philosopher Epictetus wrote
"Work, therefore to be able to say to every harsh appearance, 'You are but an appearance, and not absolutely the thing you appear to be.' And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you." (Italics mine)
There are good reasons to focus on questions about things you control.  Things you control you can understand or, at least, the data required to understand them is much easier to get.  Things you control you can also change (or change more easily).  Finally, you only get credit for the things you do with the things you control.  Few people get credit for just watching. 

Whole disciplines have been built around improving what you do with what you control.  MBA and Operations Research programs are both good examples of fields of study that focus mostly on improving decisions about how you use the resources under your control.  Indeed, focusing on the things you control is at the center of effectual reasoning, an exciting new take on entrepreneurship and innovation (for example, the entire crowdfunding/startup Quickstarter Project was built on effectuation principles, which are the reason it was as successful as it was).

On the other hand, another great thinker from the ancient world once wrote,
"If you know the enemy and know yourself, you need not fear the result of a hundred battles." Sun Tzu, The Art Of War
Sun Tzu went on to outline the exact impact of not thinking about things you don't control:  
"If you know yourself but not the enemy, for every victory gained you will also suffer a defeat." 
Things outside of your control are much squishier than things under your control.  The data is often incomplete, and what is there is often unclear.  It is pretty normal for the info to be, as Clausewitz would say, "of doubtful character," and it is rarely structured in nice, neat rows with helpfully labelled columns.  Finally, in an adversarial environment at least, you have to assume that at least some of the info you do have is deceptive--that it has been put there intentionally by your enemy or competitor to put you off the track.

People frequently run from questions about things that are outside of their control.  The nature of the info available can make these kinds of questions seem unresolvable, as if no amount of thinking could lead to any greater clarity.

This is a mistake.  

Inevitably, in order to move forward with the things you do control, you have to come to some conclusions about the things you do not control.  A country's military looks very different if it expects the enemy to attack by sea vs. by land.  A company's marketing plan looks very different if it thinks its competitor will be first to market with a new type of product or if it will not.  Your negotiating strategy with a potential buyer of your house depends very much on whether you think the market in your area is hot or not.

The US military has a saying:  "Intelligence leads operations."  This is a shorthand way of driving home the point that your understanding of your environment, of what is happening around you, of the things outside of your control, determines what you do with the things under your control.  Whether you do this analysis in a structured, formal way or just go with your gut instinct, you always come to conclusions about your environment, about the things outside your control, before you act.  

Since you are going to do it anyway, wouldn't it be nice if there were some skills and tools you could learn to do it better?  It turns out that there are.  The last 20-30 years have seen an explosion of research about how to better understand the future for those things outside of our control.

More importantly, learning these skills and tools can probably help you understand things under your control better as well.  Things under your control often come with the same kinds of squishy data normally associated with things outside your control.  The opposite is much less likely to be true.  

Much of the rest of this series will focus on these tools and thinking skills, but first, we need to dig more deeply into the nature of the questions we ask about things outside our control and precisely why those questions are so difficult to answer.

(Next:  Why Are Questions About Things Outside Your Control So Difficult?)

Tuesday, July 30, 2019

How To Think About The Future (Part 1 -- Questions About Questions)

We don't think about the future; we worry about it.

Whether it's killer robots or social media or zero-day exploits, we love to rub our preferred, future-infused worry stone between our thumb and finger until it is either a thing of shining beauty or the death of us all (and sometimes both).  

This is not a useful approach.

Worry is the antithesis of thinking.  Worry is all about jumping to the first and usually the worst possible conclusion.  It induces stress.  It narrows your focus.  It shuts down the very faculties you need to think through a problem.  Worry starts with answers; thinking begins with questions.

What Are Your Questions?
“A prudent question is one-half of wisdom.” (Francis Bacon)
"The art of proposing a question must be held of higher value than solving it.” (Georg Cantor)
“If you do not know how to ask the right question, you discover nothing.” (W. Edwards Deming)
Given the importance of questions (and of asking the "right" ones), you would think that there would be more literature on the subject.  In fact, the question of questions is, in my experience, one of the great understudied areas.  A few years ago, Brian Manning and I took a stab at it and only managed to uncover how little we really know about how to think about, create, and evaluate questions.

For purposes of thinking about the future, however, I start with two broad categories to consider:  Speculative questions and meaningful questions.  

There is nothing wrong with a speculative question.  Wondering about the nature of things, musing on the interconnectedness of life, and even just staring off into space for a bit are time-honored ways to come up with new ideas and new answers.  We should question our assumptions, utilize methods like the Nominal Group Technique to leverage the wisdom of our collective consciousness, and explore all of the other divergent thinking tools in our mental toolkits.

Speculation does not come without risks, however.  For example, how many terrorist groups would like to strike inside the US?  Let's say 10.  How are they planning to do it?  Bombs, guns, drones, viruses, nukes?  Let's say we can come up with 10 ways they can attack.  Where will they strike?  One of the ten largest cities in the US?  Do the math--you already have 1000 possible combinations of who, what, and where.
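The arithmetic above is easy to check for yourself.  Here is a minimal sketch (the group and city names are made-up placeholders, not real threat data):

```python
from itertools import product

# Hypothetical, illustrative categories only.
groups = [f"group_{i}" for i in range(10)]   # 10 notional terrorist groups
methods = ["bomb", "gun", "drone", "virus", "nuke",
           "cyber", "arson", "vehicle", "poison", "hostage"]  # 10 ways to attack
cities = [f"city_{i}" for i in range(10)]    # stand-ins for the 10 largest US cities

# Every who/what/where combination is a distinct scenario.
scenarios = list(product(groups, methods, cities))
print(len(scenarios))  # 10 * 10 * 10 = 1000
```

Add a fourth dimension (when?) and the count multiplies again, which is exactly why unstructured speculation gets out of hand so quickly.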

How do we start to narrow this down?  Without some additional thinking strategies, we likely give in to cognitive biases like vividness and recency to narrow our focus.    Other aspects of the way our minds work--like working memory limitations--also get in the way.  Pretty soon, our minds, which like to be fast and certain even when they should be neither, have turned our 1 in 1000 possibility into a nice, shiny, new worry stone for us to fret over (and, of course, share on Facebook).

Meaningful questions are questions that are important to you--important to your plans, to your (or your organization's) success or failure.  Note that there are two criteria here.  First, meaningful questions are important.  Second, they are yours.  The answers to meaningful questions almost, by definition, have consequences.  The answers to these questions tend to compel decisions or, at least, further study.

It is entirely possible, however, to spend a lot of time on questions that are neither particularly important nor really yours.  The Brits have a lovely word for this: bikeshedding.  It captures our willingness to argue for hours about what color to paint the bikeshed while ignoring much harder and more consequential questions.  Bikeshedding, in short, allows us to distract ourselves from our speculations and our worries and still feel like we are getting something done.

Next:  What do you control?

Thursday, July 25, 2019

Why The Next "Age of Intelligence" Scares The Bejesus Out Of Me

A little over a month ago, I wrote a post titled How To Teach 2500 Years Of Intelligence History In About An Hour.   The goal of that post was to explain how I taught the history of intelligence to new students. Included in that article was the picture below:

I am not going to cover all the details of the "Ages of Intelligence" approach again (you can see those at this link), but the basic idea is that there are four pretty clear ages.  In addition, I made the case that, driven by ever-changing technology as well as corresponding societal changes, the length of these ages is getting logarithmically shorter.

Almost as an afterthought, I noted that the trend line formed by these ever shortening ages was approaching the X-intercept.  In other words, the time between "ages" was approaching zero.  In fact, I noted (glibly and mostly for effect) that we could well be in a new "Age of Intelligence" right now and not know it.
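For the curious, that shrinking trend can be sketched numerically.  The start dates below are my own rough reading of the four ages (the 500 BC figure especially is a loose assumption), and the fit is just an ordinary least-squares line through the logarithm of each age's duration:

```python
import math

# Rough age boundaries in years (my assumptions, not precise claims):
# Concentration (~500 BC), Professionalization (1800),
# Institutionalization (1945), Democratization (1994), "now" (2019).
starts = [-500, 1800, 1945, 1994, 2019]
durations = [b - a for a, b in zip(starts, starts[1:])]  # [2300, 145, 49, 25]

# Least-squares fit of log(duration) against age index.
xs = list(range(len(durations)))
ys = [math.log(d) for d in durations]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate the duration of a hypothetical fifth age.
next_duration = math.exp(intercept + slope * len(durations))
print(round(next_duration, 1))  # only a handful of years
```

Under these assumptions the extrapolated next age lasts just a few years, which is another way of saying the trend line really is closing in on the X-intercept.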

When I publish a piece like the one mentioned above, I usually feel good about it for about ten minutes.  After that, I start to think about all the stuff I could have said or where to go next with the topic.  In this case, the next step was obvious--a little speculative thinking about what comes, well, now.  What I saw was not pretty (and, to be frank, a little frightening).

Looking out 10 years, I see five hypotheses (the base rate for each, therefore, is 20%).  I will indicate what I think are the arguments for and against each hypothesis and then how I would adjust the probability from the base rate.

The Age of Anarchy  
No one knows what is going on, no one knows what to do about it.  Technology just keeps changing and improving at an ever increasing pace, and no one person or even one organization (no matter how large) can keep up with it.  Strategic intelligence is worthless and even tactical intelligence has only limited utility.

Arguments for:  This is certainly what life feels like right now for many people.  Dylan Moran's rant probably captures this hypothesis far better than I could:

Arguments against:   This is a form of the same argument that has been made against every technological advance since the Ancient Greeks (Socrates, for example, was against writing because it "will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing..."  Replace "writing" with "books" or "computers" or "cell phones" and you have another variation on this Luddite theme).  In short, every age has had to adjust to the risks and rewards new technologies bring.  The next age of intelligence is unlikely to be new in this respect.

Probability:  17%

Age of Irrelevance
Artificial intelligence (AI) takes over the world.  The algorithms get so good at understanding and predicting that we increasingly turn over both our intelligence production and our decisionmaking to the computers.  In this hypothesis, there is still a need to know the enemy; there is just no longer a need for us to do all those tedious calculations in our tents.  The collection of intelligence information and the conduct of intelligence analysis become an entirely automated process.

Arguments for:  Even a cursory look at the Progress in Artificial Intelligence article in Wikipedia suggests two things.  First, an increasing number of complex activities where humans used to be the best in the world are falling victim to AI's steady march.  Second, humans almost always underestimate just how quickly machines will catch up to them.  Efforts by the growing number of surveillance states will only serve to increase the pace as they move their populations in the direction of the biases inherent in the programming or the data.  

Arguments against:  AI may be the future, but not now and certainly not in the next ten years.  Four polls of researchers done in 2012-13 indicated that there was only a 50% chance of a technological singularity--where a general AI is as smart as a human--by 2040-2050.  The technology gurus at Gartner also estimated in 2018 that general artificial intelligence is just now beginning to climb the "hype cycle" of emerging technologies and is likely more than 10 years away.  The odds that this hypothesis becomes reality go up after ten years, however.

Probability:  7%

Age of Oligarchy
Zuckerberg, Gates, Nadella, Li, Bezos, Musk, Ma--their names are already household words.  Regular Joes and Janes (like you and me) get run over, while these savvy technogeeks rule the world.  If you ain't part of this new Illuminati, you ain't $h!t.  Much like the Age of Concentration, intelligence efforts will increasingly focus on these oligarchs and their businesses while traditional state and power issues take a back seat (See Snow Crash).

Arguments for:  92% of all searches go through Google, 47% of all online sales go through Amazon, 88% of all desktop and laptop computers run Windows.  These and other companies maintain almost monopoly-like positions within their industries.  By definition, the oligarchy already exists.

Arguments against:  Desktops and laptops may run on Windows, but the internet and virtually all supercomputers--that is, the future--run on Linux-based systems.  Browsers like Brave and extensions like Privacy Badger will also make it more difficult for these companies to profit from their monopoly positions.  In addition, an increasing public awareness of the privacy issues associated with placing so much power in these companies with so little oversight will expand calls for scrutiny and regulation of these businesses and their leaders.

Probability:  27%

Age of Ubiquity
We start to focus on our digital literacy skills.  We figure out how to spot liars and fakes and how to reward honest news and reviews.   We teach this to our children.  We reinforce and support good journalistic ethics and punish those who abandon these standards.  We all get smart.  We all become--have to become--intelligence analysts.

Arguments for:   Millennials and Gen Z are skeptical about the motives of big business and are abandoning traditional social media platforms in record numbers.  They are already digital natives, unafraid of technology and well aware of its risks and rewards.  These generations will either beat the system or disrupt it with new technologies.

Arguments against:   Human nature.  Hundreds of books and articles have been written in the last decade on how powerful the biases and heuristics hardwired into our brains actually are.  We are programmed to seek the easy way out, to value convenience over truth, and to deceive ourselves.  Those who do happen to figure out how to beat the system or disrupt it are likely to hold onto that info for their own economic gain, not disperse it to the masses.

Probability:  12%

Blindside Hypothesis
Something else, radically different from the approaches above, is going to happen.

Arguments for:   First, this whole darn article is premised on the idea that the "Ages of Intelligence" approach is legit and not just a clever pedagogical trick.  Furthermore, while there are lots of good, thoughtful sources regarding the future, many of them, as you can see above, contradict one another.  Beyond that:

  • This is a complex problem, and I generated this analysis on my own with little consultation with other experts.  
  • Complex problems have "predictive horizons"--places beyond which we cannot see--where we are essentially saying, "There is a 50% chance of x happening, plus or minus 50%."
  • I have been thinking about this on and off for a few weeks but have hardly put in the massive quantities of time I should to be able to make these kinds of broad assessments with any confidence.
  • The lightweight pro v. con form of my discussion adds only a soupçon of structure to my thinking.
  • Finally, humans have a terrible track record of predicting disruption and I am decidedly human.  
Bottom line:  The odds are good that I am missing something.

Arguments against:  What?  What am I missing?  What reasonable hypothesis about the future, broadly defined, doesn't fall into one of the categories above? (Hint:  Leave your answer in the comments!)

Probability:  37% 
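One quick sanity check on the five probabilities above: since the hypotheses are meant to be mutually exclusive and exhaustive, the adjusted numbers should still sum to 100%.  A few lines of code make the check (and each hypothesis's shift from the 20% base rate) explicit:

```python
# The five adjusted probabilities from the post, in percent.
hypotheses = {
    "Age of Anarchy": 17,
    "Age of Irrelevance": 7,
    "Age of Oligarchy": 27,
    "Age of Ubiquity": 12,
    "Blindside Hypothesis": 37,
}

base_rate = 100 / len(hypotheses)  # 20% each before weighing the evidence
assert sum(hypotheses.values()) == 100  # still a coherent distribution

for name, p in hypotheses.items():
    print(f"{name}: {p}% ({p - base_rate:+.0f} vs. base rate)")
```

Keeping the set summing to 100% is what forces an honest trade-off: raising one hypothesis means explicitly lowering another.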

Why This Scares Me
Other than the rather small probability that we all wake up one morning and become the critical information collectors and analysts this most recent age seems to demand of us, there aren't any good outcomes.  I don't really want chaos, computers, or a handful of profit-motivated individuals to control my digital and, as a result, non-digital life.  I also fully realize that, in some sense, this is not a new revelation.  Other writers, far more eloquent and informed than I, have been making some variation of this argument for years.

This time, however, it is more personal.  Intelligence leads operations.  Understanding the world outside your organization's control drives how you use the resources under your control.  My new employer is the US Army, and the US Army will look very different over the next ten years depending on which of these hypotheses becomes fact.

Monday, July 22, 2019

I Made It!

I started my new job as Professor of Strategic Futures at the US Army War College last week.  So far, it has been a fairly predictable, if seemingly unending, series of orientations, mandatory trainings, and security briefings.  I don't mind.  To paraphrase Matthew, "What did I go into the Army to see?  A man running without a PT belt?"

What I have been impressed with is the extraordinary depth of knowledge and genuine collegiality of the faculty.  It is an interesting feeling to be constantly surrounded by world class experts in virtually any domain.

Equally impressive is the emphasis on innovation and experimentation.  I am surrounded by an example of this right now.  I am writing this post on one of a number of open access commercial network machines in the War College library.  In the back of the room, a professor is leading an after action review of an exercise built around Compass Games' South China Sea war game (BTW, if you think it odd that the Army would have students play a scenario which is largely naval in nature, you are missing my point about innovation and experimentation). 

Scattered throughout the rest of the library are recently acquired, odd-shaped pieces of furniture designed to create collaborative spaces, quiet spaces, and resting spaces (among others).  Forms soliciting feedback suggest that the library is working hard to figure out what kind of spaces its patrons want, and what kind of furniture and equipment would best support those needs.  In the very rear of the building, there is a room undergoing a massive reconstruction.  No telling what is about to go in there, but it is clear evidence that the institution is not standing still.  

I will continue to write here on Sources and Methods, of course.  I also hope to get a few things published on the War College's own online journal, The War Room  (Check it out if you haven't.  It's very cool). Other than that, I look forward to pursuing some of my old lines of research and adding a few new ones as well.

For those of you who want to contact me, you can call me in my office at 717-245-4665, email me at kristan dot j dot wheaton dot civ at mail dot mil or, as always, email me at kris dot wheaton at gmail dot com.  You can also message me on LinkedIn.

Monday, June 24, 2019

EPIC 2014: The Best/Worst Forecast Ever Made?

The eight minute film, EPIC 2014, made a huge impact on me when it was released in 2004.  If you have seen it before, it's worth watching it again.  If you haven't, let me set it up for you before you click the play button below.   

Put together by Robin Sloan and Matt Thompson way back in 2004, EPIC 2014 talked about the media landscape in 2014 as if it had already happened.  In other words, they invented a "Museum of Media History", and then pretended, in 2004, to look backward from 2014 as a way of exploring how they thought the media landscape would change from 2004 to 2014.  Watch it now; it will all make sense when you do:

In some ways, this is the worst set of predictions ever made.  Almost none of the point predictions are correct.  Google never merged with Amazon, Microsoft did not buy Friendster, The New York Times did not become a print-only publication for the elderly, and Sony's e-paper is not cheaper than real paper (It costs 700 bucks and gets an average of just 3 stars (on Sony's site!)).

Sloan and Thompson did foresee Google's suite of online software services but did not really anticipate competition from the likes of Facebook, Twitter, LinkedIn, YouTube or any of a host of other social media services that have come to dominate the last 15 years.

None of that seemed particularly important to me, however.  It felt like just a clever way to get my attention (and it worked!).  The important part of the piece was summed up near the end instead.  EPIC, Sloan and Thompson's name for the monopolized media landscape they saw by 2014, is: 
"...at its best and edited for the savviest readers, a summary of the world—deeper, broader and more nuanced than anything ever available before ... but at its worst, and for too many, EPIC is merely a collection of trivia, much of it untrue, all of it narrow, shallow, and sensational.  But EPIC is what we wanted, it is what we chose, and its commercial success preempted any discussions of media and democracy or journalistic ethics."
Switch out the word "EPIC" with the word "internet" and that still seems to me to be one of the best long-range forecasts I've ever seen.   You could throw that paragraph up on almost any slide describing the state of the media landscape today, and most of the audience would likely agree.  The fact that Sloan and Thompson were able to see it coming way back in 2004 deserves mad props.

It also causes me to wonder about the generalizability of the lessons learned from forecasting studies based on resolvable questions.  Resolvable questions (like "Will Google and Amazon merge by December 31, 2014?") are fairly easy to study (easier, anyway).  Questions which don't resolve to binary, yes/no, answers (like "What will the media landscape look like in 2014?") are much harder to study but also seem to be more important.  

We have learned a lot about forecasting and forecasting ability over the last 15 years by studying how people answer resolvable questions.  That's good.  We haven't done that before and we should have.  

Sloan and Thompson seemed to be doing something else, however.  They weren't just adding up the results of a bunch of resolvable questions to see deeper into the future.  There seems to me to be a different process involved.  I'm not sure how to define it.  I am not even sure how to study it.  I do think that, until we can, we should be hesitant to over-apply the results of any study to real world analysis and analytic processes.

Tuesday, June 18, 2019


Apollo 11 in Real-Time is the very definition of cool.
HUMINT, SIGINT, OSINT--the specialized language of intelligence is all ate up with acronyms for the various collection disciplines.  Intel wags have (for at least the last 40 years I have been doing this stuff) come up with a variety of clever (?) plays on this formulation.  For example:  RUMINT = Intelligence founded on rumors alone.  DUMBINT = Intelligence too stupid to believe.

COOLINT is usually reserved for something that is, well, cool but might not be particularly relevant to the question at hand.  You want to show COOLINT to other people.  You KNOW they will be interested in it.  It's the clickbait of the intel world.

A great example of COOLINT is the Apollo 11 In Real-time website (the mobile version is OK, but you will want to look at it on your PC or Mac.  Trust me).  In fact, I used the hashtag "#COOLINT" when I tweeted out this site this morning.  The guys who put this amazing site together have mashed up all of the audio and video, all of the commentary, and all of the pictures into a single website that allows you to follow along with the mission from T - 1 minute to splashdown.  It doesn't really have anything to do with intelligence, but, to a spacegeek like me, it's the sort of site you would find next to the word "cool" in the dictionary.

I intend to argue here, however, that there is a more formal definition of COOLINT, one that is actually useful in analytic reporting.  To do this, I want to first briefly explore the concepts of "relevant" and "interesting."

One of the hallmarks of good intelligence analysis is that it be relevant to the decisionmaker(s) being supported.  ICD 203 makes this mandatory for all US national security intel analysts but, even without the regulation, relevance has long been the standard in intel tradecraft.

"Interesting" is a term which gets significantly less attention in intel circles.  There is no requirement that good intel be interesting.  It would be ridiculous to think that good intel should meet the same standards as a good action movie or even a good documentary.  That said, if I have two pieces of information that convey the same basic, relevant facts and one is "interesting" and the other is not (for example, 500 words of statistical text vs. one chart), I would be a bit of a fool not to use the interesting one.  Intel analysts don't just have a responsibility to perform the analysis; they also have a responsibility to communicate it to the decisionmaker they are supporting.  "Interesting" is clearly less important than "relevant" but, in order to communicate the analysis effectively, it is something that has to be considered.

With all this in mind, it is possible to construct a matrix to help an analyst think about the kinds of information they have available and where it all should go in their analytic reports or briefings:
"Interesting" vs. "Relevant" in analytic reporting
Interesting and relevant information should always be considered for use in a report or brief.  Length or time limits might preclude it, but if it meets both criteria, and particularly if it is a linchpin or a driver of the analysis, this kind of info highly likely belongs in the report.

Relevant information which is not particularly interesting might have to go in the report--it may be too relevant not to include.  However, there are many ways to get this kind of info in the report or brief.  Depending on the info's overall importance to the analysis, it might be possible to include it in a footnote, annex, or backup slide instead of cluttering up the main body of the analysis.

Information that is interesting but not relevant is COOLINT.  It is that neat little historical anecdote that has nothing to do with the problem, or that very cool image that doesn't really explain anything at all.  The temptation to get this stuff into the report or brief is great.  I have seen analysts twist themselves into knots to try to get a particular piece of COOLINT into a briefing or report.  Don't do it.  Put it in a footnote or an annex if you have to, and hope the decisionmaker asks you a question where your answer can start with, "As it so happens..."

Info which is not interesting and not relevant needs to be left out of the report.  I hope this goes without saying.

Three caveats to this way of thinking about info.  First, I have presented this as if the decision is binary--info is either relevant OR irrelevant, interesting OR uninteresting.  That isn't really how it works.  It is probably better to think of these terms as if they were on a scale that weighs both criteria.  It is possible, in other words, to be "kind of interesting" or "really relevant."

The second caveat is that both the terms "interesting" and "relevant" should be defined in terms of the decisionmaker and the intelligence requirement.  Relevancy, in other words, is relevancy to the question; "interesting", on the other hand, is about communication.  What is interesting to one decisionmaker might not be to another.

Finally, if you use this at all, use it as a rule of thumb, not as a law.  There are always exceptions to these kinds of models.  
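With those caveats in mind, here is one way to sketch the matrix as a rule of thumb.  The function and its 0.5 cutoffs are illustrative assumptions on my part, not doctrine:

```python
def triage(relevance: float, interest: float) -> str:
    """Rough rule of thumb for where a piece of info belongs in a report.

    Both scores run 0.0-1.0; the 0.5 cutoffs are arbitrary and illustrative.
    """
    relevant, interesting = relevance >= 0.5, interest >= 0.5
    if relevant and interesting:
        return "main body"           # consider for the report or brief itself
    if relevant:
        return "footnote or annex"   # too relevant to drop; keep it out of the way
    if interesting:
        return "COOLINT: hold back"  # save it for "As it so happens..."
    return "leave it out"

print(triage(0.9, 0.2))  # relevant but dull -> footnote or annex
print(triage(0.1, 0.9))  # COOLINT: hold back
```

Because the inputs are scores rather than yes/no flags, it is easy to honor the first caveat and tune the cutoffs to a particular decisionmaker.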

Monday, June 10, 2019

How To Teach 2500 Years Of Intelligence History In About An Hour

Original version of the Art of War by Sun-Tzu
As with most survey courses, Introduction to Intelligence Studies has a ton of information that it needs to cover--all of it an inch deep and a mile wide.  One of the most difficult parts of the syllabus to teach, however, is intelligence history.

Whether you start with the Bible or, as I do, with Chapter 13 of The Art Of War, you still have 2500 years of history to cover and typically about an hour-long class to do it in.  Don't get me wrong.  I think the history of intelligence ought to be at least a full course in any intelligence studies curriculum.  The truth is, though, you just don't have time to do it justice in a typical Intel 101 course.

I was confronted with this exact problem last year.  I had not taught first-year students for years, and when the time came in the syllabus to introduce these students to intel history, I was at a bit of a loss.  Some professors gloss over ancient history and start with the National Security Act of 1947.  Some compress it even more and focus entirely on post Cold War intelligence history.  Others take a more expansive view and select interesting stories from different periods of time to illustrate the general role of intelligence across history.  

All of these approaches are legitimate given the topic and the time constraints.  I wanted, however, to try to make the history of intel a bit more manageable for students new to the discipline.  I hit on an approach that makes sense to me and seemed to work well with the students.  I call it the Four Ages Of Intelligence.

The first age I call the Age of Concentration.  In ancient times, power and knowledge were concentrated in the hands of a relatively small number of people.  The king or queen, their generals, and the small number of officers and courtiers who could read or write were typically both the originators and targets of intelligence efforts.  These efforts, in turn, were often guided by the most senior people in a government.  Sun Tzu noted, "Hence it is that which none in the whole army are more intimate relations to be maintained than with spies."  George Washington, as well, was famous not only as a general but also as a spymaster.

The Age of Concentration lasted, in my mind, from earliest times to about the early 1800's.  The nature of warfare began to change rapidly after the American and French Revolutions. 
Washington and the capture of the Hessians at Trenton.  
Large citizen armies and significant technological advances (railroads, telegraphs, photography, balloons!) made the process of running spy rings and collating and analyzing the information they collected too large for any one person or even a small group of people to manage.  

Enter the Age of Professionalization.  The 1800's saw the rise of the staff system and the modern civil service to help generals and leaders manage all the things these more modern militaries and governments had to do.  Of course, there had always been courtiers and others to do the king's business but now there was a need for a large number of professionals to deal with the ever-growing complexities of society.  The need for more professionals, in turn, demanded standardized processes that could be taught.  

For me, the Age of Professionalization lasted until the end of World War II when the Age of Institutionalization began.  Governments, particularly the US Government, began to see the need for permanent and relatively large intelligence organizations as a fundamental part of government.   
Logos of the CIA And KGB
Staffs and budgets grew.  Many organizations came (more or less) out of the shadows.  CIA, KGB, MI5 (and 6), ISI, and MSS all became well known abbreviations for intelligence agencies.  The need for intelligence-like collection and analysis of information became obvious in other areas.  Law enforcement agencies, businesses, and even international organizations started to develop "intelligence units" within their organizational structures.  

All of this lasted until about 1994 when, with the advent of the World Wide Web, the Age of Democratization began.   Seven years ago (!), I wrote an article called "Top Five Things Only Spies Used To Do But Everyone Does Now."  I talked about a whole bunch of things, like using sophisticated ciphers to encrypt data and examining detailed satellite photos, that used to be the purview of spies and spies alone.  Since then, it has only gotten worse.  Massive internet-based deception operations and the rise of deepfake technology are turning us all into spymasters, weighing and sorting information wheat from information chaff.  Not only the threats but also the opportunities have grown exponentially.  Savvy users have more good information and a greater ability to connect, to learn, and to understand the things that are critical to their success or failure but outside their control than ever before--and they can do this on a personal rather than institutional level.

There are a couple of additional teaching points worth making here.  First is the role of information technology in all of this.  As the technology for communicating and coordinating activities has improved, the intelligence task has become more and more complicated.  This, in turn, has required the use of more and more people to manage the process, and that has changed how the process is done.  Other disciplines have been forced to evolve in the face of technological change.  It is no surprise, then, that intelligence is also subject to similar evolutionary pressures.

It is also noteworthy, however, that the various ages of intelligence have tended to become shorter with the near-exponential growth in technological capabilities.  In fact, when you map the length of the four ages on a logarithmic scale (see below) and draw a trendline, you can see a pretty good fit.  It also appears that the current age, the Age of Democratization, might be a bit past its sell-by date.  This, of course, raises the question:  What age comes next?  I'm voting for the Age of Anarchy...and I am only half kidding.
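If you want to play with the trendline idea yourself, here is a minimal sketch.  The durations are my own rough, illustrative assumptions (in particular, the start date for the Age of Concentration is essentially arbitrary); the point is only the shape of the fit, not the specific numbers.

```python
import numpy as np

# Rough, illustrative durations (in years) of the three completed ages.
# Starting the Age of Concentration at ~3000 BC is purely an assumption.
ages = {
    "Concentration": 4800,        # ~3000 BC to ~1800
    "Professionalization": 145,   # ~1800 to 1945
    "Institutionalization": 49,   # 1945 to 1994
}

durations = np.array(list(ages.values()), dtype=float)
x = np.arange(len(durations))

# Fit a straight line to the *log* of the durations -- i.e., model the
# length of each age as decaying exponentially.
slope, intercept = np.polyfit(x, np.log10(durations), 1)

# Extrapolate to the fourth age, the Age of Democratization.
predicted = 10 ** (intercept + slope * len(durations))
print(f"Predicted length of the fourth age: ~{predicted:.0f} years")
```

With these (again, made-up) durations, the extrapolated fourth age comes out far shorter than the 25 years the Age of Democratization had already run by 2019--which is exactly the "past its sell-by date" point.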

Is this a perfect way of thinking about the history of intelligence?  No, of course not.  There are many, many exceptions to these broad patterns that I see.  Still, in a survey class, with limited time to cover the topic, I think focusing on the broad patterns that seem to dominate makes some sense.  

Friday, May 10, 2019

I Am Leaving Mercyhurst And...

Look, Ma!  An (almost) clean desk!
... joining the US Army War College!

It has been an honor and a privilege to work with the faculty here in the Intelligence Studies Department at Mercyhurst over the last 16 years.  Having the opportunity to help build a world class program is an experience I will never forget.

As important as my colleagues, however, are the extraordinary students I have had the pleasure to teach and work with.  Whether we were sweating in the halls of the old Wayne Street building or livin' large in our fancy, new digs in the Center for Academic Engagement, getting to work with really smart, super dedicated students was probably the best thing about the job.  Watching them continue to grow and succeed as alumni is even more rewarding.  I am convinced that, one day, the DNI will be a Mercyhurst alum (several Directors of Strategic Intelligence for Fortune 500 companies already are).

As much as I am sorry to leave Mercyhurst, I am very excited about my next position as Professor Of Strategic Futures at the War College.  There are few missions as important as developing strategic leaders and ideas for the US Army and I am proud to be part of the effort.

I expect to be out of my office here by the end of the month, so, if you have any last minute business to attend to, please reach out soon.  After the end of the month, the best way to reach me until I get to Carlisle in July is via gmail (kris dot wheaton at gmail dot com).  Once I have new contact info, I will post it.

I fully expect to continue to publish new thoughts, articles, and anything interesting I run across here on Sources and Methods.  In fact, I expect to be able to write more often.  

Stay tuned!  It's about to get (more) interesting...

Tuesday, March 19, 2019

What's The Relationship Of An Organization's Goals And Resources To The Type Of Intelligence It Needs?

"Don't blame me, blame this!"
I was trying to find some space on the whiteboard in my office and it occurred to me that I really needed to do something with some of these thoughts.

One of the most interesting thoughts (to me, at least) had to do with the relationship between an organization's goals and its resources, coupled with the notion of tactical, operational and strategic intelligence.

There is probably not an entry level course in intelligence anywhere in the world that does not cover the idea of tactical, operational and strategic intelligence.  Diane Chido and I have argued elsewhere that these three categories should be defined by the resources that an organization risks when making a decision associated with the intel.  In other words, decisions that risk few of an organization's resources are tactical, while those that risk many of the organization's resources are strategic.  Thus, within this context, the nature of the intelligence support should reflect the nature of the decision, and the defining characteristic of the decision is the amount of the organization's resources potentially at risk.   

That all seemed well and good, but it seemed to me to be missing something.  Finally (Diane and I wrote our article in 2007, so you can draw your own conclusions...), it hit me!  The model needed to also take into consideration the potential impact on the goals and purposes of the organization.

Here's the handy chart that (hopefully) explains what I mean:

What I realized is that the model that Diane and I had proposed had an assumption embedded in it.  In short, we were assuming that the decisionmaker would understand the relationship between their eventual decision, the resources of the organization, and the impact the decision would have on the organization's goals.  

While there are good reasons to make this assumption (decisionmakers are supposed to make these kinds of calculations, not intel), it is clearly not always the case.  Furthermore, adding this extra bit of nuance to the model makes it more complete.

Let's take a look at some examples.  If the impact on resources of deciding to pursue a particular course of action is low but the pay-off is high, that's a no-brainer (Example:  You don't need the DIRNSA to tell you to have a hard-to-crack password).  Of course you are going to try it!  Even if you fail, it will have cost you little.  Likewise, if the impact on resources is high and the impact on goals is low, then doing whatever it is you are about to do is likely stupid (Example:  Pretty much the whole damn Franklin-Nashville Campaign).
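The two examples above can be sketched as a simple two-by-two lookup.  The quadrant labels below are my own shorthand for the cases just discussed, not the terms from the chart itself:

```python
# A minimal sketch of the resource-impact vs. goal-impact quadrants.
# Labels are illustrative shorthand, not an official taxonomy.
def characterize_decision(resource_impact: str, goal_impact: str) -> str:
    """Map (resource impact, goal impact) -- each 'low' or 'high' --
    to a rough characterization of the decision."""
    quadrants = {
        ("low", "high"): "no-brainer: try it; even failure costs little",
        ("high", "low"): "likely stupid: big risk for a small payoff",
        ("low", "low"): "tactical: routine decision support suffices",
        ("high", "high"): "strategic: demands the most rigorous intel support",
    }
    return quadrants[(resource_impact, goal_impact)]

print(characterize_decision("low", "high"))   # the hard-to-crack password case
print(characterize_decision("high", "low"))   # the Franklin-Nashville case
```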

While many of these elements may only be obvious after the fact, to the extent that these kinds of things are observable before the decision is made, reflecting on them may well help both intelligence professionals and decisionmakers understand what is needed of them when confronted by a particular problem.  

Tuesday, February 12, 2019

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (All 3 Parts)

At the top end of the analytic art sits the estimate.  While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge."  Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.

Estimates are like pizza, though.  There are many different ways to make them and many of those ways are good.  However, with our young analysts, just starting out in the Mercyhurst program, we try to teach them one good, solid, never fail way to write an estimate.  You can sort of think of it as the pepperoni pizza of estimates.

Here's the formula:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I'm going to spend the rest of this article breaking this down.  

Outline of this article (Click on link to see full map)

Good (Best!) WEPs

Let's start with what makes a good Word of Estimative Probability - a WEP.   Note:  Linguistic experts call these Verbal Probability Expressions and if you want to dive into the literature - and there's a lot - you should use this phrase to search for it.  

WEPs should first be distinguished from words of certainty.  Words of certainty, such as "will" and "won't" typically don't belong in intelligence estimates.  These words presume that the analyst has seen the future and can speak with absolute conviction about it.  Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.

Notice I also said "good" WEPs, though.  A good WEP is one that effectively communicates a range of probabilities and a bad WEP is one that doesn't.  Examples?  Sure!  Bad WEPs are easy to spot:  "Possibly", "could", and "might" are all bad WEPs.  They communicate ranges of probability so broad that they are useless in decisionmaking.  They usually only serve to add uncertainty rather than reduce it in the minds of decisionmakers.  You can test this yourself.  Construct an estimate using "possible" such as "It is possible that Turkey will invade Iraq this year."  Then ask people to rank the likelihood of this statement on a scale of 1-100.  Ask enough people and you will get everything from 1 to 100.  This is a bad WEP.

Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities.  Take the WEP "remote" for example.  If I said "There is a remote chance that Turkey will invade Iraq this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.

The Kesselman List
Can we kick this whole WEP thing up a notch?  Yes, we can.  It turns out that there are not only "good" WEPs but there are "best" WEPs.  That is, there are some good WEPs that communicate ranges of probabilities better than others.  Here at Mercyhurst, we use the Kesselman List (see above).  Alumna Rachel Kesselman wrote her thesis on this topic a million years ago (approx.).  She read all of the literature then available and came up with a list of the best-defined words (i.e. those with the tightest ranges of probabilities), based on that literature.  The US National Security Community has its own list but we like Rachel's better.  I have written about this elsewhere and you can even read Rachel's thesis and judge for yourself.  We think the Kesselman List has better evidence to support it.  That's why we use it.  We're just that way.
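The core idea--each WEP tied to an explicit, bounded probability range--is easy to sketch.  To be clear, the ranges below are illustrative placeholders, NOT the actual Kesselman List values; read Rachel's thesis for those:

```python
# Illustrative only: these ranges are NOT the Kesselman List's actual
# numbers.  They just demonstrate tying each WEP to a bounded range.
WEP_RANGES = {
    "remote": (0.01, 0.10),
    "unlikely": (0.10, 0.30),
    "likely": (0.60, 0.85),
    "highly likely": (0.85, 0.95),
}

def interpret(wep: str) -> tuple:
    """Return the (low, high) probability range a WEP is meant to convey."""
    try:
        return WEP_RANGES[wep.lower()]
    except KeyError:
        raise ValueError(f"'{wep}' is not on the list -- likely a bad WEP")

low, high = interpret("remote")
print(f"'Remote' conveys roughly a {low:.0%}-{high:.0%} chance")
```

Note that a bad WEP like "possible" simply isn't on the list: the whole point of a structured vocabulary is that words without a defensible range don't get used.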

Before I finish, let me say a word about numbers.  It is entirely reasonable and, in fact, may well be preferable, to use numbers to communicate a range of probabilities rather than words.  In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities.  Why then, do I consider it the current best practice to use words?  There are four reasons:

  • Tradition.  This is the way the US National Security Community does it.  While we don't ignore theory, the Mercyhurst program is an applied program.  It seems to make sense, then, to start here but to teach the alternatives as well.  That is what we do.  
  • Anchoring bias.  Numbers have a powerful place in our minds.  As soon as you start linking notoriously squishy intelligence estimates to numbers you run the risk of triggering this bias.  Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean.  Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
  • Cost of increasing accuracy vs the benefit of increasing accuracy.  How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening?  Imagine that the issue under discussion is really important to you.  How long?  What if it were 79% vs 83%?  57% vs 83%?  35% vs 83%?  It probably depends on what "really important" means to you and how much time you have.  The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit.  This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors,  fundamentally irreducible.  
  • Buy-in.  Using words, even well defined words, is what is known as a "coarse grading" system.  We are surrounded by these systems.  The traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test.  I have just begun to dig into the literature on coarse grading but one of the more interesting things I have found is that it seems to encourage buy-in.  We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on.  This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.  
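Coarse grading in miniature: collapse a precise probability into a word band.  The band boundaries below are my own illustrative assumptions, not an official scale:

```python
# Map a precise probability to a coarse word band.  Boundaries are
# illustrative assumptions, not an official or Kesselman-derived scale.
def coarse_grade(p: float) -> str:
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    bands = [
        (0.10, "remote"),
        (0.30, "unlikely"),
        (0.60, "roughly even chance"),
        (0.85, "likely"),
        (1.00, "highly likely"),
    ]
    for upper, word in bands:
        if p <= upper:
            return word

# The 81%-vs-83% argument dissolves: both land in the same band.
print(coarse_grade(0.81), coarse_grade(0.83))
```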

WEPs are important but they clearly aren't the only thing.  What adds value to an estimate is its level of nuance.

Let me give you an example of what I mean:  
  • The GDP of Yougaria is likely to grow.
  • The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
Both of these are estimates and both of these use good WEPs but one is obviously better than the other.  Why?  Nuance.

Mercyhurst Alum Mike Lyden made a stab at defining what we mean by "nuance" in his 2007 thesis, The Efficacy of Accelerated Analysis in Strategic Level Intelligence Estimates.  There he defined it as how many of the basic journalistic questions (Who, What, When, Why, Where, and How) the estimate addressed.  

For example, Mike would likely give the first estimate above a nuance score of 1.  It really only answers the "What" question.  I think he would give the second estimate a 3 as it appears to answer not only the "What" question but also the "When" and "How (or how much)" questions as well.  It's not a perfect system but it makes the point.
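Lyden-style nuance scoring is simple enough to sketch: the analyst flags which of the six journalistic questions the estimate answers, and the score is the count.  (Deciding *whether* a question is answered is still human judgment--this is a checklist, not text analysis.)

```python
# A sketch of nuance scoring per Lyden's thesis: count how many of the
# six journalistic questions an estimate addresses.
JOURNALISTIC = {"who", "what", "when", "where", "why", "how"}

def nuance_score(answered: set) -> int:
    unknown = answered - JOURNALISTIC
    if unknown:
        raise ValueError(f"not journalistic questions: {unknown}")
    return len(answered)

# "The GDP of Yougaria is likely to grow."
print(nuance_score({"what"}))
# "...likely to grow by 3-4% over the next 12 months."
print(nuance_score({"what", "when", "how"}))
```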

In general, I think it is obvious that more nuance is better than less.  A more nuanced estimate is more likely to be useful and it is less likely to be misinterpreted.  There are some issues that crop up and need to be addressed, however - nuances to the nuance rule, if you will.
  • What if I don't have the evidence to support a more nuanced estimate?  Look at the second estimate above.  What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth?  I get it.  You wouldn't feel comfortable putting numbers and dates to this growth.  What would you feel comfortable with?  Would you be more comfortable with an adverb ("grow moderately")?  Would you be more comfortable with a date range ("over the next 6 to 18 months")?  Is there a way to add more nuance in any form with which you can still be comfortable as an analyst?  The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind.  If, in the end, all you are comfortable with is "The economy is likely to grow" then say that.  I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
  • What if the requirement does not demand a nuanced estimate?  What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow?  He/She doesn't need to know any more to make his/her decision.  In fact, spending time and effort to add nuance would actually be counterproductive.  In this case, there is no need to add nuance.  Answer the question and move on.  That said, my experience suggests that this condition is rather more rare than not.  Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced.  Whether this is the case or not is something that should be worked out in the requirements process.  
  • What if all this nuance makes my estimate sound clunky?  So, yeah.  An estimate with six clauses in it is going to be technically accurate and very nuanced but sound as clunky and awkward as a sentence can sound.  Well-written estimates fall at the intersection of good estimative practice and good grammar.  You can't sacrifice either, which is why they can be very hard to craft.  The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences.  In the next section, on "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.

Due to's And Despite's

Consider again the estimate from above:  "The GDP of Yougaria is likely to grow 3-4% over the next 12 months."  Why?  Why do you, the analyst, think this is the case?  

Typically, there are a few key facts and some accompanying logic that are acting as drivers of these kinds of estimates.  It may have something to do with trade, for example, or with new economic opportunities opening up in the country.  It may be more about where the country is in the business cycle than anything else.  For whatever reason, these are the critical facts and logic that underpin your entire estimate.  If you are wrong about these drivers, because of incomplete collection, poor analysis or deliberate deception, your estimate is likely wrong as well.

I call these factors "due to's" because you can easily see them as a "due to" clause added to the estimate:  
"Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months."
If "due to's" are driving your faith in your estimate, "despite" clauses are the ones undermining it.  In any non-trivial exercise in estimation there are likely many facts which undermine your estimate.  In the example above, yes, there was an uptick in trade and the oil reserves are great but what about the slight increase in unemployment last month?  Or the reduction in consumer confidence?  

Much more than mere procatalepsis (gosh, I love that word...), the true intent behind the "despite" clause is to be intellectually honest with the decisionmaker you are supporting as an intelligence professional.  In short, you are saying two things to that DM.  First, "I recognize that not all of the facts available support my estimate"  and, second, "despite this, I still believe my estimate is accurate."  

How might that play itself out in our example?  
Despite recent increases in unemployment, the GDP of Yougaria is likely to grow 3-4% over the next 12 months.  Increases in trade have been strong and the recently discovered oil reserves in the northern part of the country will likely drive significant growth over the next year.
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months.  While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary.
These are just examples, of course, and the actual formulation depends on the facts at hand.  The goal remains the same in all cases - here's what I think, here's why, here's why not and here's why the "why nots" don't matter. 

Analytic Confidence

If the estimate is what the analyst thinks is likely or unlikely to happen, then analytic confidence can most easily be thought of as the odds that the analyst is wrong.  Imagine two analysts in two different parts of the world have been asked to assess Yougaria's economy for the next year.  One is a beginner with no real experience or contacts in Yougaria.  His sources are weak and he is under considerable time pressure to produce.  The other analyst, operating wholly independently of the first, is a trained economist with many years' experience with Yougaria.  His sources are excellent and he has a proven track record of estimating Yougaria's economic performance.  

Now, imagine that both of them just so happen to come to the exact same estimative conclusion - Yougaria's GDP is likely to grow 3-4% over the next 12 months.  Both report their estimative conclusions to their respective decisionmakers.  

It is not too difficult to see that the decisionmaker of the first analyst might be justifiably hesitant to commit significant resources based on this estimate of Yougaria's economic performance.  Absent additional analysis, it is quite obvious that there are a number of good reasons why the analyst in this case might be wrong.  

The decisionmaker supported by the second analyst is in exactly the opposite position.  Here there are very good reasons to trust the analyst's estimate and to commit to courses of action that are premised on its accuracy.  In the first case we could say that the analytic confidence is low while in the second case we could say it is high.  

What are the factors that suggest whether an analyst is more likely to be right or wrong?  Some of the earliest research on this was done by a Mercyhurst alum, Josh Peterson.  In his thesis he went out and looked for research-based reasons why a particular analyst is more likely to be right or wrong.  He managed to identify seven:
  • How good are your sources?
  • How well do your independent sources corroborate each other?
  • Are you a subject matter expert?  (This is less important than you might think, however.)
  • Did you collaborate with other analysts and exactly how did you do that? (Some methods are counterproductive.) 
  • Did you structure your thinking in a way proven to improve your forecasting accuracy? (A number of commonly taught techniques don't work particularly well BTW.)
  • How complex did you perceive the task to be?
  • How much time pressure were you under?
Josh would be the first person to tell you the flaws in his research.  For a start, he doesn't know if this list is complete nor does he know how much weight each factor should receive.  In general, then, there is a lot more research to be done on the concept of analytic confidence.  That said, we do know some things and it would be intellectually dishonest not to give decisionmakers some sense of our level of confidence when we make our estimates.

What does this look like in practice?  Well, I tend to think the best we can do right now is to divide the concept of confidence into three levels.  Humans are usually pretty good at intuitively spotting the very best or the very worst but not so good with rank ordering things in the middle.  I teach students that this means that the most common assessment of analytic confidence is likely moderate with high and low reserved for those situations where the seven factors are either largely present or largely absent.  
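One way to make that three-level scheme concrete is to score the seven factors as simple favorable/unfavorable judgments.  Keep the hedge firmly in mind: equal weighting is purely my assumption here--as noted above, the research does not yet tell us how much weight each factor should receive--and the cutoffs for "largely present" and "largely absent" are illustrative:

```python
# Peterson's seven factors, scored as booleans ("is this factor
# favorable?").  Equal weighting and the 6/1 cutoffs are assumptions.
FACTORS = {
    "reliable sources",
    "independent corroboration",
    "subject matter expertise",
    "effective collaboration",
    "proven structured method",
    "low perceived task complexity",
    "low time pressure",
}

def analytic_confidence(favorable: set) -> str:
    """Return low/moderate/high, reserving high and low for the cases
    where the factors are largely present or largely absent."""
    count = len(favorable & FACTORS)
    if count >= 6:
        return "high"
    if count <= 1:
        return "low"
    return "moderate"

print(analytic_confidence({"independent corroboration",
                           "effective collaboration",
                           "low time pressure"}))
```

As expected, most mixed cases land on "moderate", matching the intuition that high and low should be comparatively rare.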

What then would our Yougarian estimate look like with analytic confidence added to the mix?
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months.  While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary. 
Analytic confidence in this estimate is moderate.  The analyst had adequate time and the task was not particularly complex.  However, the reliability of the sources available on this topic was average with no high quality sources available for the estimate.  The sources available did tend to corroborate each other however, and analyst collaboration was very strong.
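Pulling the whole formula together--WEP + nuance + due to's + despite's + statement of analytic confidence--you can even sketch it as a fill-in-the-blank template.  A template like this is only a scaffold for beginners, of course; real estimates need the wordsmithing discussed above:

```python
# Assemble the dogmatic estimate formula from its parts.  The template
# is a teaching scaffold, not a substitute for careful writing.
def build_estimate(due_to: str, estimate: str, despite: str,
                   confidence: str, reasons: str) -> str:
    return (
        f"Due to {due_to}, {estimate}.  "
        f"While {despite}.  "
        f"Analytic confidence in this estimate is {confidence}.  {reasons}"
    )

text = build_estimate(
    due_to="a substantial increase in trade and newly discovered oil reserves",
    estimate=("the GDP of Yougaria will likely increase 3-4% "
              "over the next 12 months"),
    despite=("unemployment recently ticked upward, this is likely due to "
             "seasonal factors and is only temporary"),
    confidence="moderate",
    reasons=("Sources were of average reliability but tended to corroborate "
             "each other, and analyst collaboration was very strong."),
)
print(text)
```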
Final Thoughts

This is not the only way to write an effective estimate.  There are other formulations that likely offer equal or even greater clarity.  There is clearly a need for additional research in virtually all of the elements outlined here.  There is also room for more creative solutions that convey the degree of uncertainty with more precision, encourage analyst buy-in, and communicate all of that more effectively to the decisionmakers supported.

The overly dogmatic "formula" discussed here is, however, a place to start.   Particularly useful with entry-level analysts who may be unused to the rigor necessary in intelligence analysis, this approach helps them create "good enough" analysis in a relatively short time while providing a sound basis for more advanced formulations.