Tuesday, May 26, 2020

Book Review: Burn-In, A Glimpse Into The Future Of Man-Machine Teaming

(Note:  A colleague of mine, Kelly Ivanoff, came to me a few weeks ago with a review--a really well-written review--for the new thriller by Singer and Cole called Burn-In.  I don't have a lot of guest bloggers, but I knew that SAM's audience would be interested in the book, and I told Kelly I would be happy to publish the review.  Over the next couple of weeks, Kelly got me an advance copy of the book, and I have been reading it myself (I knew 12 years of blogging would have to be good for something, someday...).  

So, who is Kelly Ivanoff and what qualifies him to comment on the future of AI, machine learning and robots?  Check this bio out:

Colonel Kelly Ivanoff presently serves at the United States Army War College.  His previous assignment was as the Executive Officer to the Director, Army Capabilities Integration Center (ARCIC), the predecessor of today’s Army Futures Command.  He’s a veteran of three combat deployments and has four years of experience specifically working future force-related efforts including concept development and force design.
Boom.  Mic drop.  Let's get to the review...Oh, and none of this is the official position of the Department of Defense or the Army.  It's all just Kelly, me, and our opinions.  Also, I'll add my two cents on the book after you're done reading what Kelly has to say.)

By Kelly Ivanoff

The United States Army sees great potential in artificial intelligence and robotics to significantly impact outcomes in future combat operations.  Army General John “Mike” Murray was recently quoted in Breaking Defense, “If you’re talking about future ground combat, you’re not talking tens of thousands of sensors…We’ve got that many in Afghanistan, right now. You’re talking hundreds of thousands if not millions of sensors.” Murray later wondered, “How do you make sense of all that data for human soldiers and commanders?”  His answer:  machine learning and artificial intelligence.

Best-selling authors P.W. Singer and August Cole must share the convictions of senior Army leaders.  Their new book, Burn-In, is a riveting work of fiction, set approximately ten to fifteen years in the future, with real-world, present-day implications concerning the great potential of robotics, artificial intelligence, and man-machine teaming.  They offer prophetic examples of how the military might harness and exploit these evolving technologies to improve situational understanding, “make sense of all that data,” and make better decisions.  Importantly, they vividly describe scenarios that stimulate the imagination and allow consideration of challenges similar to those prioritized by General Murray and his team at Army Futures Command.

Burn-In presents the story of FBI agent Lara Keegan, a former United States Marine Corps robot handler, who is tasked to team with a robot partner to test the limits of man-machine teaming; in other words, to conduct a 'burn-in.'  Beginning with a series of controlled experiments and exercises, Keegan attempts to better understand the advanced robot she’s been provided: a TAMS (Tactical Autonomous Mobility System).  The tests are designed to explore the robot’s physical agility and its ability to learn and, as a result, improve its own capability.  The tests also challenge Agent Keegan to expand her imagination for the employment of robots and to build her trust in artificial intelligence and autonomous machine operations.  The tests are halted by a series of seemingly unrelated disasters that inflict great damage and kill thousands of people in the national capital region.  It quickly becomes apparent that the disasters were no accident.  In response, Keegan and TAMS embark on a thrilling, action-packed race to identify, locate, and stop the revenge-motivated murderer behind the destruction.  Through this mentally and environmentally stressful period, Agent Keegan overcomes her biases and comes to embrace man-machine teaming and the use of artificial intelligence in problem solving and decision making.  Ultimately, through this fictional story, Singer and Cole reveal numerous real-world opportunities and challenges surely inherent in our near future.

Burn-In is much more than just a riveting story.  Singer and Cole creatively advance important concepts about the use of robotics and artificial intelligence in defense and security-related professions.  Much can be learned from their work.  Burn-In brilliantly describes example scenarios pertaining to three of the four “initial thrusts” of the Army’s newly established Artificial Intelligence Task Force; those three being Intelligence Support, Automated Threat Recognition, and Predictive Maintenance (the fourth being Human Resources / Talent Management).  The authors also provide examples related to all of the additional Areas of Interest identified in a recent call for white papers issued by the Army Artificial Intelligence Task Force.  Burn-In is important both for its vividly described, problem-centered scenarios and for the conceptual solutions it offers.

Burn-In is an exceptional read, and it should be a centerpiece in the library of aspiring senior military leaders, defense officials, and those involved in military modernization efforts.  Its value lies in its description of the world as it will be.  As the scientist and author Isaac Asimov once argued, “It is change, continuing change, inevitable change, that is the dominant factor in society today.  No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”  For this reason, military leaders and those engaged in the development of military technologies and operational doctrines should read this book.  It will stimulate ideas about the future operational environment and offer conceptual solutions to the inherent challenges.  Beyond the aforementioned professional reasons, read Burn-In for the sheer enjoyment of a well-told story.  It will not disappoint.

My two cents:  I like the book, too!  It reminds me of some of the early work by Tom Clancy or Ralph Peters (my favorites!), and I suspect it will have that same kind of effect on the military and government professionals who read it.

Thursday, March 5, 2020

The Coronavirus Chart That Scares Me The Most


There are lots of sites that track the coronavirus, COVID-19.  One of my favorites is the one put together by Johns Hopkins.  There is lots of data there, but the chart that scares me the most is buried in the bottom right corner of the site.  The default view shows the actual number of cases reported from mainland China, from the rest of the world, and then, more hopefully, the number of people who have fully recovered.  

It's a good chart but not the one that frightens me.  You have to click the little tab that says "logarithmic" to get to the one that makes my hair a little more grey.  If you then turn off the "Mainland China" button and the "Total Recovered" button, you get the chart that sends me running for Purell and a face mask.  You can see what it looks like at the top of the page.

It shows the number of cases worldwide outside of China.  What makes it so frightening is that it is drawn on a logarithmic scale.  That means the Y-axis doesn't climb in equal increments.  Instead, each step up the axis represents a ten-fold increase in whatever you are measuring.  In other words, you aren't counting 1, 2, 3.  You are counting 10, 100, 1000.

If you mouse over the yellow dots you can see the dates certain milestones were hit.  For example, the world hit 100 (10 X 10) cases (plus a few) outside of China on January 29, 2020.  See the picture below:


 About 19 days later, we hit 1000 (10 X 10 X 10) cases (See below):


Then, only 13 days after that, we hit 10,000 cases (10 X 10 X 10 X 10):


Unchecked, this implies that there will likely be 100,000 cases outside of China by about March 17, 2020 and--here's the shocker--a million cases by the end of the month.  You can do the math after that.
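That extrapolation is easy to reproduce.  Below is a minimal sketch using the milestone dates from the chart above; the constant-growth assumption is mine and is obviously a back-of-the-envelope calculation, not a real epidemiological model:

```python
from datetime import date, timedelta
from math import log10

# Milestones read off the Johns Hopkins chart (cases outside mainland China)
milestones = {
    date(2020, 1, 29): 100,     # ~100 cases
    date(2020, 2, 17): 1_000,   # ~19 days later
    date(2020, 3, 1): 10_000,   # ~13 days after that
}

days = sorted(milestones)
span = (days[-1] - days[0]).days  # 32 days for a 100-fold increase
growth_per_day = 10 ** (log10(milestones[days[-1]] / milestones[days[0]]) / span)

# At that average rate, how long until the next ten-fold jump?
days_to_next_decade = 1 / log10(growth_per_day)
projection = days[-1] + timedelta(days=round(days_to_next_decade))
print(f"~{growth_per_day:.2f}x per day; 100,000 cases around {projection}")
```

Run as written, this lands on March 17, 2020 for the 100,000-case mark, which matches the date in the paragraph above.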

Unchecked.  That's the operative word in the last sentence.  China got to about 80,000 cases before it managed to turn the corner, and getting there meant taking extreme measures (like closing down a city larger than New York).

It's hard for me to imagine it getting that bad, that quickly, but that's what scares me--the math don't lie.

Thursday, January 2, 2020

How To Think About The Future: A Graphic Prologue

(Note:  I have been writing bits and pieces of "How To Think About the Future" for some time now and publishing those bits and pieces here for early comments and feedback.  As I have been talking to people about it, it has become clear that there is a fundamental question that needs to be answered first:  Why learn to think about the future?

Most people don't really understand that thinking about the future is a skill that can be learned--and can be improved upon with practice.  More importantly, if you are making strategic decisions, decisions about things that are well outside your experience, or decisions under extreme uncertainty, being skilled at thinking about the future can significantly improve the quality of those decisions.  Finally, being able to think effectively about the future allows you to better communicate your thoughts to others.  You don't come across as someone who "is just guessing."    

I wanted to make this case visually (mostly just to try something new).  Randall Munroe (XKCD) and Jessica Hagy (Indexed) both do it much better of course, but a tip of the hat to them for inspiring the style below.  It is a very long post, but it is a quick read; just keep scrolling!

As always, thanks for reading!  I am very interested in your thoughts on this...)

Monday, November 18, 2019

Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data

"Stop it, Barsdale!  You're introducing confounds into my experiment!"
A little over a month ago, I wrote a post that asked whether the form of an estimative statement matters in communicating analytic confidence.  Specifically, I asked people to determine which of the following was "more clear" in response to the question, "Do you think the Patriots will win this week?":
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
I posted this as an informal survey and 72 people kindly took the time to take it.  Here are the results:



At first glance, the results appear to be less than robust.  The difference measured here is unlikely to be statistically significant.  Even if it is, the effect size does not appear to be that large.  The one thing that seems clear is that there is no clear preference.

Or is there?


Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!


I think I finally found something which soothes my aching ego a bit.  The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely.  Indeed, this sort of A/B testing is done precisely because the asker does not know which one the client/customer/etc. will prefer.

This assumption might not hold in this case.  Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years).  When they are included, however, they are almost always included at the end of the estimate.  Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."

How might the comfort of this familiarity change the results?  On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates).  On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the familiarity heuristic, or our preference for the familiar over the unfamiliar.  In experiments, the kind of thing that can throw your results off is known as a confound.

More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound:  BLUF.

Bottom Line Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing.  "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter).  Indeed, the single most common type of comment from those who preferred the version with the statement of analytic confidence at the end was, as this survey taker put it, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."
Note:  Ellipses seem important these days, and the ones in the sentence above mark where I took out the word "first."  I randomized the two statements in the survey so that they did not always come up in the same order.  Thus, this particular respondent saw the second statement above (the one with the statement of analytic confidence at the end) first.
If the base rate of the two answers is not 50-50 but rather 40-60 (or worse, in favor of the more familiar, more BLUFy answer), then these results could easily become statistically significant.  It would be like winning a football game you were expected to lose by 35 points!
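A quick way to see how much the assumed base rate matters is an exact one-sided binomial test.  The 40-of-72 split below is hypothetical--the actual counts aren't given here--so treat this as a sketch of the reasoning, not an analysis of the real survey:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): exact one-sided tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 72   # survey respondents (from the post)
k = 40   # hypothetical count preferring the confidence-first wording

# Against a 50-50 null, 40 of 72 is unremarkable...
p_even = binom_sf(k, n, 0.5)
# ...but against a 40-60 null (familiarity and BLUF favoring the other
# wording), the very same count is far more surprising.
p_skewed = binom_sf(k, n, 0.4)
print(f"p-value vs 50-50 null: {p_even:.3f}")
print(f"p-value vs 40-60 null: {p_skewed:.4f}")
```

Same data, different null hypothesis, very different conclusion--which is exactly the point about confounds.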

Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."

Joking aside, it is an important topic.  As you likely know, it is not enough just to make an estimate.  It is also important to include a statement of analytic confidence.  To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis.  I don't think anyone would disagree that form can have a significant impact on how content is received.  The real questions are how form impacts content and to what degree.  Getting at those questions in the all-important area of formal estimative writing is truly something well worth additional study.

Tuesday, October 1, 2019

Analytic Confidence And The New England Patriots: A Hypothesis

"Don't try to stop me!  I'm having a thought!"  (Image Source)
I was driving to work this morning, thinking about analytic confidence (as one does), and I had a thought.  An interesting thought.  Before I tell you what it was, you need to take the one question survey at the link below to see if my thought has any merit (I will post the results as a follow-up to this post):

Which statement seems more clear to you?

Did you take the survey?  If not, go back and take it!

And now?  

OK!  Thanks!

People are often confused by the difference between an estimate and confidence in that estimate.  This confusion is driven, in very large part, by the way the terms are often (mis)used in formal analytic writing.  It is not uncommon to see someone talk about their confidence when they are really making an estimate or, less commonly, to use estimative language to convey confidence.

The two concepts, however, are very different.  The estimate communicates what you think is likely (or unlikely) to happen in the future.  Confidence speaks to the likelihood that something is mucking up the process used to establish that estimate.  

This is where the New England Patriots come in.  For example, I think it is very likely that the New England Patriots will win their next game (Note:  I am using the term "very likely" here the same way the DNI does).  I watch football but am by no means an expert.  I don't even know who the Patriots are playing next week.  I just know that they are usually a good team, and that they usually win a lot of games.  So, while I still think it is very likely that the Patriots will win, my confidence in that estimate is low.  The process I used for deriving that estimate was so weak, I won't be surprised to find out that they have a bye next week.

On the other hand, it is easy to imagine a forecaster who is steeped in football lore.  This hypothetical forecaster has an excellent track record derived in large part from a highly structured and efficient process for determining the odds of a victory.  This forecaster might say exactly the same thing I did--the Patriots are very likely to win their next game--but, because of a superior process, this forecaster has high confidence in their estimate.

It is important to convey both--the estimate itself and analytic confidence--when communicating the results of analysis to a decisionmaker.  To do anything less runs the risk of the decisionmaker misinterpreting the findings or assuming things about the process that are not true.  

It is also important to note that the "analytic confidence" mentioned here differs significantly from the far more commonly discussed notion of psychological confidence.  Psychological confidence is a statement about how one "feels" and can often be caused by cognitive bias or environmental factors.  There is no reliable relationship between forecasting accuracy and psychological confidence.  

Analytic confidence, on the other hand, is based on legitimate reasons why the analysis is more likely to be correct.  For example, analysis derived from facts presented by reliable sources is more likely to be correct than analysis derived from sketchy or disreputable sources.  In fact, there are a number of legitimate reasons for more rather than less analytic confidence (you can read about them here).

It is, of course, possible for analytic and psychological notions of confidence to be consistent, at least in the context of an individual forecast.  I, for example, "feel" that I have no reason to be confident in my estimate about the Patriots.  I also know, as I go down the list of elements responsible for legitimate analytic confidence, that very few are present.  In this case, then, "low" applies to both the psychological and the analytic variants of my confidence.

That is not normal.  Overconfidence bias is typically the cause of feelings of confidence outpacing a more rational assessment of the quality of the analytic process.  Underconfidence, on the other hand, is typically caused by over-thinking a problem and is more common among experts than you might think.

Now to my thought.  Finally.

One of the big problems with analytic confidence is communicating it to decisionmakers in an intuitive way.  Part of this problem occurs, no doubt, because of the different meanings the word "confidence" can have.  Most people, when they hear the word "confidence" used in casual conversation, assume you mean the psychological kind.  Adding the word "analytic" in front of "confidence" doesn't seem to help much, as most people don't really have a notion of what analytic confidence is or how it differs from the more commonly used, psychological type of confidence (They don't want to know, either.  They have enough to remember).

The classic solution has been to ignore analytic confidence completely.  This is wrong for all the reasons discussed above.  Occasionally, however, analysts elect to include a statement of analytic confidence, typically at the end of the analysis.  Part of this is due to the "Bottom Line Up Front (BLUF)" style of writing that is common to analysis.  The logic here is that the most important thing is the estimate.  That becomes the bottom line and, therefore, the first thing mentioned in the paper or briefing.

What if we flip that on its head?  What if we go, at least in casual conversation, with the analytic confidence first?  

Thus you had my two formulations:
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
These two statements say exactly the same thing in terms of content.  However, I think the form of the first statement better communicates what the analyst actually intends.  In other words, I think the first statement establishes a slightly different context.  Furthermore, I think this context will likely help the listener interpret my use of the word "confidence" correctly.  That is, the first statement is better than the second at suggesting that I am using confidence as a way to highlight the process I used to derive the estimate and not just how I feel about it.  
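For what it's worth, the distinction is easy to keep straight if the estimate and the analytic confidence live in separate fields and are only rendered in one order or the other.  A purely illustrative sketch (the class and field names are mine, not any agency's format):

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    subject: str        # e.g. "a Patriots win this week"
    likelihood: str     # estimative term ("very likely", etc.)
    confidence: str     # analytic confidence in the process: low/moderate/high

    def confidence_first(self) -> str:
        # Formulation 1: the confidence context comes before the estimate
        return (f"It's a {self.confidence} confidence estimate, "
                f"but {self.subject} is {self.likelihood}.")

    def bluf(self) -> str:
        # Formulation 2: bottom line up front, confidence trailing
        lead = self.subject[0].upper() + self.subject[1:]
        return (f"{lead} is {self.likelihood}. "
                f"This is a {self.confidence} confidence estimate, however.")

call = Assessment("a Patriots win this week", "very likely", "low")
print(call.confidence_first())
print(call.bluf())
```

Either rendering carries the same two facts; only the order--and, I suspect, the listener's interpretation--changes.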

Another reason the second statement is inferior is that it sounds confusing to the casual listener.  It is theoretically better (the bottom line is definitely up front) but, unless you are steeped in the arcana of analytic writing, it cannot be easily interpreted and could lead to confusion.

That's the reason for the quick poll.  I just wanted to see what you thought--to see, in the words of Gertrude Stein, if there was any there there.  

Thanks and I will post what I found (and my inevitably shocked reaction to it) in a later post.