Monday, November 18, 2019

Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data

"Stop it, Barsdale!  You're introducing confounds into my experiment!"
A little over a month ago, I wrote a post that asked whether the form of an estimative statement matters to how well its content--specifically, its statement of analytic confidence--gets communicated.  I asked people to determine which of the following was "more clear" in response to the question, "Do you think the Patriots will win this week?":
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
I posted this as an informal survey and 72 people kindly took the time to take it.  Here are the results:



At first glance, the results appear to be less than robust.  The difference measured here is unlikely to be statistically significant.  Even if it is, the effect size does not appear to be that large.  The one thing that seems clear is that there is no clear preference.

Or is there?


Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!


I think I finally found something which soothes my aching ego a bit.  The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely.  Indeed, this sort of A/B testing is done precisely because the asker does not know which one the client/customer/etc. will prefer.

This assumption might not hold in this case.  Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years).  When they are included, however, they are almost always included at the end of the estimate.  Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."

How might the comfort of this familiarity change the results?  On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates).  On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the familiarity heuristic, or our preference for the familiar over the unfamiliar.  In experiments, the kind of thing that can throw your results off is known as a confound.

More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound:  BLUF.

Bottomline Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing.  "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter).  Indeed, the single most common type of comment from those who preferred the version with the statement of analytic confidence at the end was, as this one survey taker said, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."
Note:  Ellipses seem important these days and the ones in the sentence above mark where I took out the word "first."  I randomized the two statements in the survey so that they did not always come up in the same order.  Thus, this particular respondent saw the second statement above (the one with the statement of analytic confidence at the end) first.
If the base rate of the two answers is not 50-50 but rather 40-60 (or worse in favor of the more familiar, more BLUFy answer) then these results could easily become very significant.  It would be like winning a football game you were expected to lose by 35 points!
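To see how much the base rate matters, here is a minimal sketch of the arithmetic in Python.  The 41-31 split is purely hypothetical (the real counts are in the chart above), and the point is only directional: a result that is nowhere near significant against a 50-50 null can become quite significant once the null shifts to 40-60.

    # A minimal sketch of the base-rate argument.  The 41-31 split is
    # hypothetical -- substitute the real counts from the chart above.
    from scipy.stats import binomtest

    n = 72   # survey respondents
    k = 41   # hypothetical count preferring "confidence first"

    # Null #1: the usual A/B assumption -- both forms equally preferred.
    print(binomtest(k, n, p=0.5).pvalue)   # p > 0.2 -- not significant

    # Null #2: familiarity and BLUF tilt the base rate to 40-60 against
    # the unfamiliar "confidence first" form.
    print(binomtest(k, n, p=0.4).pvalue)   # p < 0.01 -- significant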

Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."

Joking aside, it is an important topic.  As you likely know, it is not enough to just make an estimate.  It is also important to include a statement of analytic confidence.  To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis.  I don't think anyone would disagree that form can have a significant impact on how content is received.  The real questions are how does form impact content and to what degree?  Getting at those questions in the all-important area of formal estimative writing is truly something well worth additional study.

Tuesday, October 1, 2019

Analytic Confidence And The New England Patriots: A Hypothesis

"Don't try to stop me!  I'm having a thought!"  (Image Source)
I was driving to work this morning, thinking about analytic confidence (as one does), and I had a thought.  An interesting thought.  Before I tell you what it was, you need to take the one-question survey at the link below to see if my thought has any merit (I will post the results as a follow-up to this post):

Which statement seems more clear to you?

Did you take the survey?  If not, go back and take it!

And now?  

OK!  Thanks!

People are often confused by the difference between an estimate and confidence in that estimate.  This confusion is driven, in very large part, by the way the terms are often (mis)used in formal analytic writing.  It is not uncommon to see someone talk about their confidence when they are really making an estimate or, less commonly, to use estimative language to convey confidence.

The two concepts, however, are very different.  The estimate communicates what you think is likely (or unlikely) to happen in the future.  Confidence speaks to the likelihood that something is mucking up the process used to establish that estimate.  

This is where the New England Patriots come in.  For example, I think it is very likely that the New England Patriots will win their next game (Note:  I am using the term "very likely" here the same way the DNI does).  I watch football but am by no means an expert.  I don't even know who the Patriots are playing next week.  I just know that they are usually a good team, and that they usually win a lot of games.  So, while I still think it is very likely that the Patriots will win, my confidence in that estimate is low.  The process I used for deriving that estimate was so weak, I won't be surprised to find out that they have a bye next week.
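For those who don't follow the link, the DNI standard maps each estimative term to a probability band.  Here is that mapping as a simple lookup table--a sketch of my own, with band values as I recall them from ICD 203, so verify against the linked standard before relying on them:

    # Estimative terms and their probability bands, as I recall them
    # from ICD 203 -- verify against the linked standard before use.
    ESTIMATIVE_BANDS = {
        "almost no chance":    (0.01, 0.05),
        "very unlikely":       (0.05, 0.20),
        "unlikely":            (0.20, 0.45),
        "roughly even chance": (0.45, 0.55),
        "likely":              (0.55, 0.80),
        "very likely":         (0.80, 0.95),
        "almost certain":      (0.95, 0.99),
    }

    def term_for(p):
        """Return the first estimative term whose band contains p."""
        for term, (lo, hi) in ESTIMATIVE_BANDS.items():
            if lo <= p <= hi:
                return term
        return "no standard term"

    print(term_for(0.85))   # -> very likely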

On the other hand, it is easy to imagine a forecaster who is steeped in football lore.  This hypothetical forecaster has an excellent track record derived in large part from a highly structured and efficient process for determining the odds of a victory.  This forecaster might say exactly the same thing I did--the Patriots are very likely to win their next game--but, because of a superior process, this forecaster has high confidence in their estimate.

It is important to convey both--the estimate itself and analytic confidence--when communicating the results of analysis to a decisionmaker.  To do anything less runs the risk of the decisionmaker misinterpreting the findings or assuming things about the process that are not true.  

It is also important to note that the "analytic confidence" mentioned here differs significantly from the far more commonly discussed notion of psychological confidence.  Psychological confidence is a statement about how one "feels" and can often be caused by cognitive bias or environmental factors.  There is no reliable relationship between forecasting accuracy and psychological confidence.  

Analytic confidence, on the other hand, is based on legitimate reasons why the analysis is more likely to be correct.  For example, analysis derived from facts presented by reliable sources is more likely to be correct than analysis derived from sketchy or disreputable sources.  In fact, there are a number of legitimate reasons for more rather than less analytic confidence (you can read about them here).

It is, of course, possible for analytic and psychological notions of confidence to be consistent, at least in the context of an individual forecast.  I, for example, "feel" that I have no reason to be confident in my estimate about the Patriots.  I also know, as I go down the list of elements responsible for legitimate analytic confidence, that very few are present.  "Low" applies to both the psychological and analytic variants of confidence in this case.

That is not typical, however.  Overconfidence bias more commonly causes feelings of confidence to outpace a more rational assessment of the quality of the analytic process.  Underconfidence, on the other hand, is typically caused by over-thinking a problem and is more common among experts than you might think.

Now to my thought.  Finally.

One of the big problems with analytic confidence is communicating it to decisionmakers in an intuitive way.  Part of this problem occurs, no doubt, because of the different meanings the word "confidence" can have.  Most people, when they hear the word "confidence" used in casual conversation, assume you mean the psychological kind.  Adding the word "analytic" in front of "confidence" doesn't seem to help much, as most people don't really have a notion of what analytic confidence is or how it differs from the more commonly used, psychological type of confidence (They don't want to know, either.  They have enough to remember).

The classic solution has been to ignore analytic confidence completely.  This is wrong for all the reasons discussed above.  Occasionally, however, analysts elect to include a statement of analytic confidence, typically at the end of the analysis.  Part of this is due to the "Bottomline Up Front (BLUF)" style of writing that is common to analysis.  The logic here is that the most important thing is the estimate.  That becomes the bottomline and, therefore, the first thing mentioned in the paper or briefing.

What if we flip that on its head?  What if we go, at least in casual conversation, with the analytic confidence first?  

Thus you have my two formulations:
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
These two statements say exactly the same thing in terms of content.  However, I think the form of the first statement better communicates what the analyst actually intends.  In other words, I think the first statement establishes a slightly different context.  Furthermore, I think this context will likely help the listener interpret my use of the word "confidence" correctly.  That is, the first statement is better than the second at suggesting that I am using confidence as a way to highlight the process I used to derive the estimate and not just how I feel about it.  

Another reason I think the second statement is inferior is that it sounds confusing to the casual listener.  It is theoretically better (the bottomline is definitely up front) but, unless you are steeped in the arcana of analytic writing, it is hard to interpret correctly.

That's the reason for the quick poll.  I just wanted to see what you thought--to see, in the words of Gertrude Stein, if there was any there there.  

Thanks and I will post what I found (and my inevitably shocked reaction to it) in a later post.

Monday, September 9, 2019

What Is A "Gray Rhino" And How Do I Tackle One? (+ That Time I Died For 7 Seconds)

A perfectly ordinary gray rhino.  You still wouldn't want to be surprised by it.
By Krish Dulal - Own work, CC BY-SA 3.0,
https://commons.wikimedia.org/w/index.php?curid=12888627
I am taking a break today from my series on How To Think About The Future to talk about a new term I just heard:  The Gray Rhino.

A Gray Rhino is basically the opposite of a Black Swan.  It is a high-impact, high-probability event that not enough people are paying attention to.

A good example of this may be the recent advances in the biological sciences.  When I began my current job, I asked 20 of the best thinkers I know, "What is the most under-hyped, under-rated technology or trend?"  I wanted to understand what I might be missing, what I should be examining more carefully.

I was surprised at the number of people who came back and said, in one form or another, "Biology."  Whether it is the prospects (and horrors) of gene editing, immunotherapies, mycorrhizal networks, bacterial manipulation, our understanding of the brain, or our ability to create whole new brains from scratch (!), advances in the biological sciences do seem poised to revolutionize our lives, yet they do not seem to get as much attention as other trends like artificial intelligence.  This is a Gray Rhino:  something that is almost certain to happen, that will have a massive impact when it does, but that is not getting the attention it deserves.

Not everything is either a Black Swan or a Gray Rhino, however.  A good example may be Hurricane Dorian, which recently leveled the Bahamas before causing all sorts of havoc up the east coast of the US.  The forecasting models did a good job of estimating where the hurricane would go and when it would get there.  Likewise, the sheer size of the thing communicated just how devastating it was going to be.  While there are always people who cannot afford to leave the path of a hurricane (or have nowhere to go) or those foolish few who choose to ride it out for the hell of it, most people gave the storm the attention it deserved and did what they could to take appropriate precautions.

As I think about the problem of how to deal with true Gray Rhinos, though, it seems to me that this is not primarily a problem of collection or analysis.  Researchers have enough info in these situations, and they understand it well enough, at least, to raise the issue(s).

It appears to me to be, instead, a problem of production or, more accurately, communication.  Specifically, I think it is related to the Confidence Heuristic.  A heuristic is a fancy word for a rule of thumb, but one with a slight difference.  A rule of thumb is often learned (see the video below for an example).



A heuristic, on the other hand, has developed over evolutionary time scales and is hardwired into the architecture of the brain.  The Confidence Heuristic says that, all other things being equal, we tend to accept the logic/reasoning/forecasts of other people who are confident in their logic/reasoning/forecasts.  We are biologically predisposed to believe those who are confident in their own beliefs.  More importantly, studies have shown that this is not necessarily a bad rule.  People who are genuinely confident are often right.

For example, I remember the afternoon I died for seven seconds (it was less dramatic than that sounds...).  Fortunately, I was in one of the best possible places to die for a brief period of time--a hospital.  I had suffered several dizzy spells the day before, had been admitted for observation, and had been hooked up to a portable EKG.  When my heart stopped due to sick sinus syndrome, the docs were able to see exactly what had happened.  Shortly after I came around, a cardio surgeon (whom I had never met) walked in with the readout, showed it to me, and said, "This buys you a pacemaker."

As they wheeled me to the OR, I remember asking the doctor, "How many of these have you done?"  She said, with absolute confidence, "Hundreds," and then she looked me dead in the eye and told me, "This is a piece of cake."

Her confidence in her skills was infectious.  I believed her, and because I did, I went into surgery with no worries and came out of it successfully.  She was correct to be confident as well.  She had, in fact, done hundreds of these surgeries, and for the last five years, this little piece of biotech (with its eight-year battery!) has kept me alive without any real issues.

Politicians, TV hucksters, and other con artists, on the other hand, may not know about the Confidence Heuristic but they sure know how to use it!  Speaking confidently and in absolute rather than nuanced terms is the hallmark of almost every political speech and all of the hours of editorial commentary masquerading as news shows.  Nuance is used to cast doubt on the other side's position while confidence is required to promote your own position.  
(Note:  This, coupled with Confirmation Bias and the Dunning-Kruger Effect, explains much of the internet.)
In other words, Gray Rhinos likely exist because of the way Gray Rhino communities of interest choose to talk about Gray Rhinos.  Measured tones, nuanced forecasts, and managed expectations are the language of science and (much of) academia.  Hyperbole, bold predictions, and showmanship generate the buzz, however.

What to do if you find yourself working on a Gray Rhino problem?  Hiring a frontman to hype your rhino is likely excessive and can get you into real trouble (see Theranos and MIT Media Lab for a few cautionary tales).  That said, developing a relationship with the press, being able to explain your research in layman's terms, and celebrating the genuine "wins" in your field as they come along all seem to make sense.

Finally, if you do decide to go the frontman route (and remember, I don't recommend it), at least get a guy like this:

Monday, August 26, 2019

How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)

I am writing a series of posts about how to think about the future.  In case you missed the first two parts, you can find them here:

Part 1--Questions About Questions
Part 2--What Do You Control

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

*******************

Former Director of the CIA, Mike Hayden, likes to tell this story:

"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." (Italics mine)
I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment.  No.  The #1 reason was the difficulty of the questions. 

Understanding why some questions are more difficult than others is incredibly important.  Difficult questions typically demand more resources--and have more consequences.  What makes it particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why.  I have written about this elsewhere (here and here and here, for example), and may have become a bit like the man in the  "What makes soup, soup?" video below...




No one, however, to my knowledge, has solved the problem of reliably categorizing questions by difficulty.

I have a hypothesis, however.

I think that the AI guys might have taken a big step towards cracking the code.  When I first heard about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there.  That was way back in 2011, though.  As I went looking for updates for this series of posts, I got really excited.  There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig in their book, Artificial Intelligence:  A Modern Approach, may have gotten even closer to what is, essentially, a working definition of question difficulty.

Let me be clear here.  The AI community did not set out to figure out why some questions are more difficult than others.  They were looking to categorize AI tasks by difficulty.  My sense, however, is that, in so doing, they have inadvertently shone a light on the more general question of question difficulty.  Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine; a rough scoring sketch follows the list):
  • Fully observable vs. partially observable -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.
  • Single agent vs. multi-agent -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.
  • Competitive vs. cooperative -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question.  Questions about enemies are inherently harder to answer than questions about allies.
  • Deterministic vs. stochastic -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)?  How much randomness is in the environment?
  • Episodic vs. sequential -- Questions about things that happen over time are more difficult than questions about things that happen once.
  • Static vs. dynamic -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.
  • Discrete vs. continuous -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world," spaces.
  • Known vs. unknown -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work.  
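Here is the rough scoring sketch promised above--my own toy, not anything from Russell and Norvig--which treats each criterion as a flag and counts how many land on the hard end:

    # A rough scorecard for question difficulty -- my own sketch built
    # on Russell and Norvig's eight criteria, not anything they wrote.
    from dataclasses import dataclass, fields

    @dataclass
    class Question:
        partially_observable: bool   # hidden vs. fully visible
        multi_agent: bool            # many actors vs. one
        competitive: bool            # an adversary muddying the water
        stochastic: bool             # randomness vs. fixed rules
        sequential: bool             # unfolds over time vs. one-shot
        dynamic: bool                # everything moves as you look
        continuous: bool             # unbounded "open world" space
        unknown: bool                # you don't know how things work

        def difficulty(self):
            """Count of criteria on the hard end (0 = easy, 8 = hard)."""
            return sum(getattr(self, f.name) for f in fields(self))

    # "Will the Patriots win next week?" as scored by a casual fan:
    q = Question(True, True, True, True, False, True, False, True)
    print(q.difficulty())   # 6 of 8 -- a hard slider on the corner

Crude as it is, a count like this at least makes "this question is hard" something you can argue about criterion by criterion.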
Why is this important to questions about the future?  Two reasons.  First, it is worth noting that most questions about the future, particularly those about things that are outside our control, fall at the harder rather than easier end of each of these criteria.  Second, understanding the specific reasons why these questions are hard also gives clues as to how to make them easier to answer.  

There is one more important reason why questions can be difficult.  It doesn't come from AI research.  It comes from the person (or organization) asking the question.  All too often, people either don't ask the "real" question they want answered or are incredibly unclear in the way they phrase their questions.  If you want some solutions to these problems, I suggest you look here, here and here.  

I was a big kid who grew up in a small town.  I only played Little League ball one year, but I had a .700 batting average.  Even when I was at my best physical condition as an adult, however, I doubt that I could hit a foul tip off a major league pitcher.  Hayden is right.  Meaningful questions about things outside your control are Major League questions, hard sliders on the corner of the plate.  Understanding that, and understanding what makes these questions so challenging, is a necessary precondition to taking the next step--answering them.

Next:  How Should We Think About Answers?  

Friday, August 16, 2019

How To Think About The Future (Part 2 - What Do You Control?)

Click on the image above to see the full mindmap.

I am writing a series of posts about how to think about the future.  In case you missed Part 1, you can find it here:

How To Think About The Future (Part 1 -- Questions About Questions)

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

****************

The great Stoic philosopher Epictetus wrote:
"Work, therefore to be able to say to every harsh appearance, 'You are but an appearance, and not absolutely the thing you appear to be.' And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you." (Italics mine)
There are good reasons to focus on questions about things you control.  Things you control you can understand or, at least, the data required to understand them is much easier to get.  Things you control you can also change (or change more easily).  Finally, you only get credit for the things you do with the things you control.  Few people get credit for just watching. 

Whole disciplines have been built around improving what you do with what you control.  MBA and Operations Research programs are both good examples of fields of study that focus mostly on improving decisions about how you use the resources under your control.  Indeed, focusing on the things you control is at the center of effectual reasoning, an exciting new take on entrepreneurship and innovation (for example, the entire crowdfunding/startup Quickstarter Project was built on effectuation principles, which are the reason it was as successful as it was).

On the other hand, another great thinker from the ancient world once wrote,
"If you know the enemy and know yourself, you need not fear the result of a hundred battles." Sun Tzu, The Art Of War
Sun Tzu went on to outline the exact impact of not thinking about things you don't control:  
"If you know yourself but not the enemy, for every victory gained you will also suffer a defeat." 
Things outside of your control are much more squishy than things under your control.  The data is often incomplete, and what is there is often unclear.  It is pretty normal for the info to be, as Clausewitz would say, "of doubtful character," and it is rarely structured in nice neat rows with data points helpfully organized with labelled columns.  Finally, in an adversarial environment at least, you have to assume that at least some of the info you do have is deceptive--that it has been put there intentionally by your enemy or competitor to put you off the track.

People frequently run from questions about things that are outside of their control.  The nature of the info available can often make these kinds of questions seem unresolvable, as if no amount of thinking could lead to any greater clarity.

This is a mistake.  

Inevitably, in order to move forward with the things you do control, you have to come to some conclusions about the things you do not control.  A country's military looks very different if it expects the enemy to attack by sea vs. by land.  A company's marketing plan looks very different if it thinks its competitor will be first to market with a new type of product or if it will not.  Your negotiating strategy with a potential buyer of your house depends very much on whether you think the market in your area is hot or not.

The US military has a saying:  "Intelligence leads operations."  This is a shorthand way of driving home the point that your understanding of your environment, of what is happening around you, of the things outside of your control, determines what you do with the things under your control.  Whether you do this analysis in a structured, formal way or just go with your gut instinct, you always come to conclusions about your environment, about the things outside your control, before you act.  

Since you are going to do it anyway, wouldn't it be nice if there were some skills and tools you could learn to do it better?  It turns out that there are.  The last 20-30 years have seen an explosion in research about how to better understand the future for those things outside of our control.

More importantly, learning these skills and tools can probably help you understand things under your control better as well.  Things under your control often come with the same kinds of squishy data normally associated with things outside your control.  The opposite is much less likely to be true.  

Much of the rest of this series will focus on these tools and thinking skills, but first, we need to dig more deeply into the nature of the questions we ask about things outside our control and precisely why those questions are so difficult to answer.

(Next:  Why Are Questions About Things Outside Your Control So Difficult?)

Tuesday, July 30, 2019

How To Think About The Future (Part 1 -- Questions About Questions)

We don't think about the future; we worry about it.


Whether it's killer robots or social media or zero-day exploits, we love to rub our preferred, future-infused worry stone between our thumb and finger until it is either a thing of shining beauty or the death of us all (and sometimes both).  

This is not a useful approach.

Worry is the antithesis of thinking.  Worry is all about jumping to the first and usually the worst possible conclusion.  It induces stress.  It narrows your focus.  It shuts down the very faculties you need to think through a problem.  Worry starts with answers; thinking begins with questions.

What Are Your Questions?
“A prudent question is one-half of wisdom.” --Francis Bacon
“The art of proposing a question must be held of higher value than solving it.” --Georg Cantor
“If you do not know how to ask the right question, you discover nothing.” --W. Edwards Deming
Given the importance of questions (and of asking the "right" ones), you would think that there would be more literature on the subject.  In fact, the question of questions is, in my experience, one of the great understudied areas.  A few years ago, Brian Manning and I took a stab at it and only managed to uncover how little we really know about how to think about, create, and evaluate questions.

For purposes of thinking about the future, however, I start with two broad categories to consider:  Speculative questions and meaningful questions.  

There is nothing wrong with a speculative question.  Wondering about the nature of things, musing on the interconnectedness of life, and even just staring off into space for a bit are time-honored ways to come up with new ideas and new answers.  We should question our assumptions, utilize methods like the Nominal Group Technique to leverage the wisdom of our collective consciousness, and explore all of the other divergent thinking tools in our mental toolkits.

Speculation does not come without risks, however.  For example, how many terrorist groups would like to strike inside the US?  Let's say 10.  How are they planning to do it?  Bombs, guns, drones, viruses, nukes?  Let's say we can come up with 10 ways they can attack.  Where will they strike?  One of the ten largest cities in the US?  Do the math--you already have 1000 possible combinations of who, what, and where.
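If you want to watch that explosion happen, a few lines of Python (with placeholder labels, obviously) do the math:

    # The combinatorial explosion in miniature: 10 groups x 10 attack
    # methods x 10 cities (all placeholder labels) is 1,000 scenarios.
    from itertools import product

    groups  = [f"group_{i}"  for i in range(10)]
    methods = [f"method_{i}" for i in range(10)]
    cities  = [f"city_{i}"   for i in range(10)]

    scenarios = list(product(groups, methods, cities))
    print(len(scenarios))   # 1000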

How do we start to narrow this down?  Without some additional thinking strategies, we likely give in to cognitive biases like vividness and recency to narrow our focus.    Other aspects of the way our minds work--like working memory limitations--also get in the way.  Pretty soon, our minds, which like to be fast and certain even when they should be neither, have turned our 1 in 1000 possibility into a nice, shiny, new worry stone for us to fret over (and, of course, share on Facebook).

Meaningful questions are questions that are important to you--important to your plans, to your (or your organization's) success or failure.  Note that there are two criteria here.  First, meaningful questions are important.  Second, they are yours.  The answers to meaningful questions, almost by definition, have consequences.  The answers to these questions tend to compel decisions or, at least, further study.

It is entirely possible, however, to spend a lot of time on questions which are both of dubious relevance to you and not particularly important.  The Brits have a lovely word for this, bikeshedding.  It captures our willingness to argue for hours about what color to paint the bikeshed while ignoring much harder and more consequential questions.  Bikeshedding, in short, allows us to distract ourselves from our speculations and our worries and feel like we are still getting something done.


Next:  What do you control?

Thursday, July 25, 2019

Why The Next "Age of Intelligence" Scares The Bejesus Out Of Me

A little over a month ago, I wrote a post titled How To Teach 2500 Years Of Intelligence History In About An Hour.   The goal of that post was to explain how I taught the history of intelligence to new students. Included in that article was the picture below:


I am not going to cover all the details of the "Ages of Intelligence" approach again (you can see those at this link), but the basic idea is that there are four pretty clear ages.  In addition, I made the case that, driven by ever-changing technology as well as corresponding societal changes, the length of these ages is getting logarithmically shorter. 

Almost as an afterthought, I noted that the trend line formed by these ever shortening ages was approaching the X-intercept.  In other words, the time between "ages" was approaching zero.  In fact, I noted (glibly and mostly for effect) that we could well be in a new "Age of Intelligence" right now and not know it.
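A toy version of that trend line, using made-up age lengths (the real figures are in the earlier post; these placeholders just shrink by a factor of five each time), shows how quickly the intervals collapse:

    # A toy version of the shrinking-ages trend line.  The age lengths
    # are placeholders, NOT the figures from the original post.
    import numpy as np

    age_lengths = np.array([2000.0, 400.0, 80.0, 16.0])   # years (hypothetical)
    ages = np.arange(len(age_lengths))                    # ages 0, 1, 2, 3

    # Fit a line to the log of the lengths (each age here is 1/5 the
    # length of the one before, so the fit is exact)...
    slope, intercept = np.polyfit(ages, np.log(age_lengths), 1)

    # ...and extrapolate: by "age 5" an entire age lasts under a year.
    print(np.exp(slope * 5 + intercept))   # ~0.64 years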

When I publish a piece like the one mentioned above, I usually feel good about it for about ten minutes.  After that, I start to think about all the stuff I could have said or where to go next with the topic.  In this case, the next step was obvious--a little speculative thinking about what comes, well, now.  What I saw was not pretty (and, to be frank, a little frightening).

Looking out 10 years, I see five hypotheses (The base rate, therefore, for each is 20%).  I will indicate what I think are the arguments for and against each hypothesis, and then, how I would adjust the probability from the base rate.  
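As a bit of bookkeeping before diving in (and using the adjusted figures that appear below), note that a set of mutually exclusive, exhaustive hypotheses has to sum to 100% no matter how the individual adjustments go:

    # Five mutually exclusive, exhaustive hypotheses, each starting at
    # the 20% base rate.  The adjusted figures are the ones below.
    base_rate = 1 / 5
    print(f"base rate: {base_rate:.0%}")   # 20%

    adjusted = {
        "Age of Anarchy":     0.17,
        "Age of Irrelevance": 0.07,
        "Age of Oligarchy":   0.27,
        "Age of Ubiquity":    0.12,
        "Blindside":          0.37,
    }
    assert abs(sum(adjusted.values()) - 1.0) < 1e-9   # still sums to 100%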

The Age of Anarchy  
No one knows what is going on, no one knows what to do about it.  Technology just keeps changing and improving at an ever increasing pace, and no one person or even one organization (no matter how large) can keep up with it.  Strategic intelligence is worthless and even tactical intelligence has only limited utility.

Arguments for:  This is certainly what life feels like right now for many people.  Dylan Moran's rant probably captures this hypothesis far better than I could:




Arguments against:   This is a form of the same argument that has been made against every technological advance since the Ancient Greeks (Socrates, for example, was against writing because it "will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing..."  Replace "writing" with "books" or "computers" or "cell phones" and you have another variation on this Luddite theme).  In short, every age has had to adjust to the risks and rewards new technologies bring.  The next age of intelligence is unlikely to be new in this respect.

Probability:  17%

Age of Irrelevance
Artificial intelligence (AI) takes over the world.  The algorithms get so good at understanding and predicting that we increasingly turn over both our intelligence production and our decisionmaking to the computers.  In this hypothesis, there is still a need to know the enemy; there is just no longer a need for us to do all those tedious calculations in our tents.  The collection of intelligence information and the conduct of intelligence analysis become an entirely automated process.

Arguments for:  Even a cursory look at the Progress in Artificial Intelligence article in Wikipedia suggests two things.  First, an increasing number of complex activities where humans used to be the best in the world are falling victim to AI's steady march.  Second, humans almost always underestimate just how quickly machines will catch up to them.  Efforts by the growing number of surveillance states will only serve to increase the pace as they move their populations in the direction of the biases inherent in the programming or the data.  

Arguments against:  AI may be the future, but not now and certainly not in the next ten years.  Four polls of researchers done in 2012-13 indicated that there was only a 50% chance of a technological singularity--where a general AI is as smart as a human--by 2040-2050.  The technology gurus at Gartner also estimated in 2018 that general artificial intelligence is just now beginning to climb the "hype cycle" of emerging technologies and is likely more than 10 years away.  The odds that this hypothesis becomes reality go up after ten years, however.

Probability:  7%

Age of Oligarchy
Zuckerberg, Gates, Nadella, Li, Bezos, Musk, Ma--their names are already household words.  Regular Joes and Janes (like you and me) get run over, while these savvy technogeeks rule the world.  If you ain't part of this new Illuminati, you ain't $h!t.  Much like the Age of Concentration, intelligence efforts will increasingly focus on these oligarchs and their businesses while traditional state and power issues take a back seat (See Snow Crash).

Arguments for:  92% of all searches go through Google, 47% of all online sales go through Amazon, 88% of all desktop and laptop computers run Windows.  These and other companies maintain almost monopoly-like positions within their industries.  By definition, the oligarchy already exists.

Arguments against:  Desktops and laptops may run on Windows, but the internet and virtually all supercomputers--that is, the future--run on Linux-based systems.  Browsers like Brave and extensions like Privacy Badger will also make it more difficult for these companies to profit from their monopoly positions.  In addition, an increasing public awareness of the privacy issues associated with placing so much power in these companies with so little oversight will expand calls for scrutiny and regulation of these businesses and their leaders.

Probability:  27%

Age of Ubiquity
We start to focus on our digital literacy skills.  We figure out how to spot liars and fakes and how to reward honest news and reviews.   We teach this to our children.  We reinforce and support good journalistic ethics and punish those who abandon these standards.  We all get smart.  We all become--have to become--intelligence analysts.

Arguments for:   Millennials and Gen Z are skeptical about the motives of big business and are abandoning traditional social media platforms in record numbers.  They are already digital natives, unafraid of technology and well aware of its risks and rewards.  These generations will either beat the system or disrupt it with new technologies.

Arguments against:   Human nature.  Hundreds of books and articles have been written in the last decade on how powerful the biases and heuristics hardwired into our brains actually are.  We are programmed to seek the easy way out, to value convenience over truth, and to deceive ourselves.  Those who do happen to figure out how to beat the system or disrupt it are likely to hold onto that info for their own economic gain, not disperse it to the masses.

Probability:  12%

Blindside Hypothesis
Something else, radically different from the approaches above, is going to happen.

Arguments for:   First, this whole darn article is premised on the idea that the "Ages of Intelligence" approach is legit and not just a clever pedagogical trick.  Furthermore, while there are lots of good, thoughtful sources regarding the future, many of them, as you can see above, contradict one another.  Beyond that:

  • This is a complex problem, and I generated this analysis on my own with little consultation with other experts.  
  • Complex problems have "predictive horizons"--places beyond which we cannot see--where we are essentially saying, "There is a 50% chance of x happening, plus or minus 50%."
  • I have been thinking about this on and off for a few weeks but have hardly put in the massive quantities of time needed to make these kinds of broad assessments with any confidence.  
  • The lightweight pro v. con form of my discussion adds only a soupçon of structure to my thinking.    
  • Finally, humans have a terrible track record of predicting disruption and I am decidedly human.  
Bottomline:  The odds are good that I am missing something.

Arguments against:  What?  What am I missing?  What reasonable hypothesis about the future, broadly defined, doesn't fall into one of the categories above? (Hint:  Leave your answer in the comments!)

Probability:  37% 

Why This Scares Me
Other than the rather small probability that we all wake up one morning and become the critical information collectors and analysts this most recent age seems to demand of us, there aren't any good outcomes.   I don't really want chaos, computers or a handful of profit-motivated individuals to control my digital and, as a result, non-digital life.  I also fully realize that, in some sense, this is not a new revelation.  Other writers, far more eloquent and informed than I, have been making some variation of this argument for years.  

This time, however, it is more personal.  Intelligence leads operations.  Understanding the world outside your organization's control drives how you use the resources under your control.  My new employer is the US Army and the US Army looks very different in the next ten years depending on which of these hypotheses becomes fact. 

Monday, July 22, 2019

I Made It!

I started my new job as Professor of Strategic Futures at the US Army War College last week.  So far, it has been a fairly predictable, if seemingly unending, series of orientations, mandatory trainings, and security briefings.  I don't mind.  To paraphrase Matthew, "What did I go into the Army to see?  A man running without a PT belt?"

What I have been impressed with is the extraordinary depth of knowledge and genuine collegiality of the faculty.  It is an interesting feeling to be constantly surrounded by world class experts in virtually any domain.

Equally impressive is the emphasis on innovation and experimentation.  I am surrounded by an example of this right now.  I am writing this post on one of a number of open access commercial network machines in the War College library.  In the back of the room, a professor is leading an after action review of an exercise built around Compass Games' South China Sea war game (BTW, if you think it odd that the Army would have students play a scenario which is largely naval in nature, you are missing my point about innovation and experimentation). 

Scattered throughout the rest of the library are recently acquired, odd-shaped pieces of furniture designed to create collaborative spaces, quiet spaces, and resting spaces (among others).  Forms soliciting feedback suggest that the library is working hard to figure out what kind of spaces its patrons want, and what kind of furniture and equipment would best support those needs.  In the very rear of the building, there is a room undergoing a massive reconstruction.  No telling what is about to go in there, but it is clear evidence that the institution is not standing still.  

I will continue to write here on Sources and Methods, of course.  I also hope to get a few things published on the War College's own online journal, The War Room  (Check it out if you haven't.  It's very cool). Other than that, I look forward to pursuing some of my old lines of research and adding a few new ones as well.

For those of you who want to contact me, you can call me in my office at 717-245-4665, email me at kristan dot j dot wheaton dot civ at mail dot mil or, as always, email me at kris dot wheaton at gmail dot com.  You can also message me on LinkedIn.

Monday, June 24, 2019

EPIC 2014: The Best/Worst Forecast Ever Made?

The eight-minute film, EPIC 2014, made a huge impact on me when it was released in 2004.  If you have seen it before, it's worth watching again.  If you haven't, let me set it up for you before you click the play button below.


Put together by Robin Sloan and Matt Thompson way back in 2004, EPIC 2014 talked about the media landscape in 2014 as if it had already happened.  In other words, they invented a "Museum of Media History", and then pretended, in 2004, to look backward from 2014 as a way of exploring how they thought the media landscape would change from 2004 to 2014.  Watch it now; it will all make sense when you do:

 
In some ways, this is the worst set of predictions ever made.  Almost none of the point predictions are correct.  Google never merged with Amazon, Microsoft did not buy Friendster, The New York Times did not become a print-only publication for the elderly, and Sony's e-paper is not cheaper than real paper (It costs 700 bucks and gets an average of just 3 stars (on Sony's site!)).

Sloan and Thompson did foresee Google's suite of online software services but did not really anticipate competition from the likes of Facebook, Twitter, LinkedIn, YouTube or any of a host of other social media services that have come to dominate the last 15 years.

None of that seemed particularly important to me, however.  It felt like just a clever way to get my attention (and it worked!).  The important part of the piece was summed up near the end instead.  EPIC, Sloan and Thompson's name for the monopolized media landscape they saw by 2014, is: 
"...at its best and edited for the savviest readers, a summary of the world—deeper, broader and more nuanced than anything ever available before ... but at its worst, and for too many, EPIC is merely a collection of trivia, much of it untrue, all of it narrow, shallow, and sensational.  But EPIC is what we wanted, it is what we chose, and its commercial success preempted any discussions of media and democracy or journalistic ethics."
Switch out the word "EPIC" with the word "internet" and that still seems to me to be one of the best long-range forecasts I've ever seen.   You could throw that paragraph up on almost any slide describing the state of the media landscape today, and most of the audience would likely agree.  The fact that Sloan and Thompson were able to see it coming way back in 2004 deserves mad props.

It also causes me to wonder about the generalizability of the lessons learned from forecasting studies based on resolvable questions.  Resolvable questions (like "Will Google and Amazon merge by December 31, 2014?") are fairly easy to study (easier, anyway).  Questions which don't resolve to binary, yes/no, answers (like "What will the media landscape look like in 2014?") are much harder to study but also seem to be more important.  

We have learned a lot about forecasting and forecasting ability over the last 15 years by studying how people answer resolvable questions.  That's good.  We haven't done that before and we should have.  

Sloan and Thompson seemed to be doing something else, however.  They weren't just adding up the results of a bunch of resolvable questions to see deeper into the future.  There seems to me to be a different process involved.  I'm not sure how to define it.  I am not even sure how to study it.  I do think that, until we can, we should be hesitant to over-apply the results of any study to real world analysis and analytic processes.