Monday, September 21, 2020

Cyber Teachers! Here's A Cool Resource You Should Know About...

A couple of my colleagues in the cyber department here at the Center for Strategic Leadership at the US Army War College have put together a very handy resource for anyone working in or teaching cyber or cyber-related issues:  The Strategic Cyberspace Operations Guide.

Nothing in the guide should be particularly new to experienced cyber instructors.  It is still extraordinarily useful as it puts everything together in one package.  As the authors said themselves, "It combines U.S. Government Unclassified and Releasable to the Public documents into a single guide."  

The 164-page document contains six chapters:

  • Chapter 1 provides an overview of cyberspace operations, operational design methodology, and joint planning and execution. 
  • Chapter 2 includes a review of operational design doctrine and applies these principles to the cyberspace domain. 
  • Chapter 3 reviews the joint planning process and identifies cyberspace operations planning concerns. 
  • Chapter 4 describes cyberspace operations during the execution of joint operations. 
  • Chapter 5 provides an overview of cyberspace operations in the homeland. 
  • Chapter 6 includes a case study on the Russian-Georgian conflict in 2008 with a focus on cyberspace operations.
I found the entire document to be very well edited and presented.  It was about as easy a read as this sort of thing can be.  Most importantly, it did the really hard work of getting it all into a single package.  Recommended!

(Reader's Note:  As always, the views expressed in this blog are my own random musings and do not represent any official positions.)

Tuesday, May 26, 2020

Book Review: Burn-In, A Glimpse Into The Future Of Man-Machine Teaming

(Note:  A colleague of mine, Kelly Ivanoff, came to me a few weeks ago with a review--a really well-written review--for the new thriller by Singer and Cole called Burn-In.  I don't have a lot of guest bloggers, but I knew that SAM's audience would be interested in the book, and I told Kelly I would be happy to publish the review.  Over the next couple of weeks, Kelly got me an advance copy of the book, and I have been reading it myself (I knew 12 years of blogging would have to be good for something, someday...).  

So, who is Kelly Ivanoff and what qualifies him to comment on the future of AI, machine learning and robots?  Check this bio out:

Colonel Kelly Ivanoff presently serves at the United States Army War College.  His previous assignment was as the Executive Officer to the Director, Army Capabilities Integration Center (ARCIC), the predecessor of today’s Army Futures Command.  He’s a veteran of three combat deployments and has four years of experience specifically working future force-related efforts including concept development and force design.
Boom.  Mic drop.  Let's get to the review...Oh, and none of this is the official position of the Department of Defense or the Army.  It's all just Kelly, me, and our opinions.  Also, I'll add my two cents on the book after you're done reading what Kelly has to say.)

By Kelly Ivanoff

The United States Army sees great potential in artificial intelligence and robotics to significantly impact outcomes in future combat operations.  Army General John “Mike” Murray was recently quoted in Breaking Defense, “If you’re talking about future ground combat, you’re not talking tens of thousands of sensors…We’ve got that many in Afghanistan, right now. You’re talking hundreds of thousands if not millions of sensors.” Murray later wondered, “How do you make sense of all that data for human soldiers and commanders?”  His answer:  machine learning and artificial intelligence.

Best-selling authors P.W. Singer and August Cole must have the same convictions as senior Army leaders.  Their new book, Burn-In, is a riveting work of fiction, set approximately ten to fifteen years in the future, with real world, present-day implications concerning the great potential of robotics, artificial intelligence, and man-machine teaming.  They offer prophetic examples of how the military might harness and exploit the potential of these evolving technologies to improve situational understanding, “make sense of all that data,” and make better decisions.  Importantly, they vividly describe scenarios that stimulate imagination and allow consideration of challenges similar to those prioritized by General Murray and his team at Army Futures Command.

Burn-In presents the story of FBI agent Laura Keegan, a former United States Marine Corps robot handler, who is tasked to team with a robot partner to test the limits of man-machine teaming; in other words, to conduct a “burn-in.”  Beginning with a series of controlled experiments and exercises, Keegan attempts to better understand the advanced robot she’s been provided: a TAMS (tactical autonomous mobility system).  The tests are designed to explore the robot’s physical agility and its ability to learn and, as a result, improve its own capability.  The tests also challenge Agent Keegan to expand her imagination for the employment of robots and build her trust in artificial intelligence and autonomous machine operations.  The tests are halted by a series of seemingly unrelated disasters that inflict great damage and kill thousands of people in the national capital region.  It quickly becomes apparent the disasters were no accident.  In response, Keegan and TAMS embark on a thrilling, action-packed race to identify, locate, and stop the revenge-motivated murderer who caused the destruction.  Through this mentally and environmentally stressful period, Agent Keegan overcomes her biases and comes to embrace man-machine teaming and the use of artificial intelligence in problem solving and decision making.  Ultimately, through this fictional story, Singer and Cole reveal numerous real-world opportunities and challenges surely inherent in our near future.

Burn-In is much more than just a riveting story.  Singer and Cole creatively advance important concepts about the use of robotics and artificial intelligence in defense and security-related professions.  Much can be learned from their work.  Burn-In brilliantly describes example scenarios pertaining to three of the four “initial thrusts” of the Army’s newly established Artificial Intelligence Task Force; those three being Intelligence Support, Automated Threat Recognition, and Predictive Maintenance (the fourth being Human Resources / Talent Management).  The authors also provide examples related to all of the additional Areas of Interest identified in a recent call for whitepapers issued by the Army Artificial Intelligence Task Force.  Burn-In is important for the vividly described problem-centered scenarios and the conceptual solutions offered.  

Burn-In is an exceptional read, and it should be a centerpiece in the library of aspiring senior military leaders, defense officials, and those involved in military modernization efforts.  Its value lies in its description of the world as it will be.  As the scientist and author Isaac Asimov once argued, “It is change, continuing change, inevitable change, that is the dominant factor in society today.  No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be.”  For this reason, military leaders and those engaged in the development of military technologies and operational doctrines should read this book.  It will stimulate ideas about the future operational environment and offer conceptual solutions to the inherent challenges.  Beyond the aforementioned professional reasons, read Burn-In for the sheer enjoyment of a well-told story.  It will not disappoint.

My two cents:  I like the book, too!  It reminds me of some of the early work by Tom Clancy or Ralph Peters (my favorites!), and I suspect it will have that same kind of effect on military and government professionals who read it.

Thursday, March 5, 2020

The Coronavirus Chart That Scares Me The Most


There are lots of sites that track the coronavirus, COVID-19.  One of my favorites is the one put together by Johns Hopkins.  There is lots of data there, but the chart that scares me the most is buried in the bottom right corner of the site.  The default view shows the actual number of cases reported from mainland China, from the rest of the world, and then, more hopefully, the number of people who have fully recovered.  

It's a good chart but not the one that frightens me.  You have to click the little tab that says "logarithmic" to get to the one that makes my hair a little more grey.  If you then turn off the "Mainland China" button and the "Total Recovered" button, you get the chart that sends me running for Purell and a face mask.  You can see what it looks like at the top of the page.

It shows the number of cases worldwide outside of China.  What makes it so frightening is that it uses a logarithmic scale.  That means the Y-axis doesn't increase by equal steps.  Instead, each step represents a ten-fold increase in whatever you are measuring.  In other words, you aren't counting 1, 2, 3.  You are counting 10, 100, 1000.

If you mouse over the yellow dots you can see the dates certain milestones were hit.  For example, the world hit 100 (10 X 10) cases (plus a few) outside of China on January 29, 2020.  See the picture below:


About 19 days later, we hit 1000 (10 X 10 X 10) cases (see below):


Then, only 13 days after that, we hit 10,000 cases (10 X 10 X 10 X 10):


Unchecked, this implies that there will likely be 100,000 cases outside of China by about March 17, 2020, and--here's the shocker--a million cases by the end of the month.  You can do the math after that.
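That back-of-the-envelope projection can be reproduced in a few lines of Python.  This is just a sketch: it fits a straight line to log10(cases) through the three milestones quoted above (the dates are approximate, read off the chart) and extends it, assuming growth continues unchecked:

```python
from datetime import date, timedelta
import math

# Milestones read off the Johns Hopkins chart, as described above
# (cases outside mainland China; dates are approximate)
milestones = [
    (date(2020, 1, 29), 100),
    (date(2020, 2, 17), 1_000),    # "about 19 days later"
    (date(2020, 3, 1), 10_000),    # "only 13 days after that"
]

# On a logarithmic chart, steady exponential growth is a straight line,
# so fit a least-squares line to log10(cases) versus days elapsed.
t0, _ = milestones[0]
days = [(d - t0).days for d, _ in milestones]
logs = [math.log10(c) for _, c in milestones]

mean_t = sum(days) / len(days)
mean_y = sum(logs) / len(logs)
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(days, logs)) / \
        sum((t - mean_t) ** 2 for t in days)

# Project when each power of ten would be reached if growth goes unchecked
projections = {}
for target in (100_000, 1_000_000):
    days_needed = (math.log10(target) - logs[0]) / slope
    projections[target] = t0 + timedelta(days=round(days_needed))
    print(f"{target:>9,} cases: ~{projections[target]}")
```

Fitting through those milestones puts 100,000 cases in mid-March and a million in early April, which matches the eyeball extrapolation.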

Unchecked.  That's the operative word in the last sentence.  China got to about 80,000 cases before they managed to turn the corner.  To get there meant taking extreme measures (like closing down a city larger than New York).

It's hard for me to imagine it getting that bad, that quickly, but that's what scares me--the math don't lie.

Thursday, January 2, 2020

How To Think About The Future: A Graphic Prologue

(Note:  I have been writing bits and pieces of "How To Think About the Future" for some time now and publishing those bits and pieces here for early comments and feedback.  As I have been talking to people about it, it has become clear that there is a fundamental question that needs to be answered first:  Why learn to think about the future?

Most people don't really understand that thinking about the future is a skill that can be learned--and can be improved upon with practice.  More importantly, if you are making strategic decisions, decisions about things that are well outside your experience, or decisions under extreme uncertainty, being skilled at thinking about the future can significantly improve the quality of those decisions.  Finally, being able to think effectively about the future allows you to better communicate your thoughts to others.  You don't come across as someone who "is just guessing."    

I wanted to make this case visually (mostly just to try something new).  Randall Munroe (XKCD) and Jessica Hagy (Indexed) both do it much better of course, but a tip of the hat to them for inspiring the style below.  It is a very long post, but it is a quick read; just keep scrolling!

As always, thanks for reading!  I am very interested in your thoughts on this...)


Monday, November 18, 2019

Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data

"Stop it, Barsdale!  You're introducing confounds into my experiment!"
A little over a month ago, I wrote a post that asked if the form of an estimative statement mattered in terms of how well it communicated analytic confidence.  Specifically, I asked people to determine which of the following was "more clear" in response to the question, "Do you think the Patriots will win this week?":
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
I posted this as an informal survey, and 72 people kindly took the time to complete it.  Here are the results:



At first glance, the results appear to be less than robust.  The difference measured here is unlikely to be statistically significant.  Even if it is, the effect size does not appear to be that large.  The one thing that seems clear is that there is no clear preference.

Or is there?


Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!


I think I finally found something which soothes my aching ego a bit.  The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely.  Indeed, this sort of A/B testing is done precisely because the asker does not know which one the client/customer/etc. will prefer.

This assumption might not hold in this case.  Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years).  When they are included, however, they are almost always included at the end of the estimate.  Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."

How might the comfort of this familiarity change the results?  On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates).  On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the familiarity heuristic, or our preference for the familiar over the unfamiliar.  In experiments, the kind of thing that can throw your results off is known as a confound.

More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound:  BLUF.

Bottom Line Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing.  "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter).  Indeed, the single most common type of comment from those that preferred the version with the statement of analytic confidence at the end was, as this one survey taker said, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."
Note:  Ellipses seem important these days and the ones in the sentence above mark where I took out the word "first."  I randomized the two statements in the survey so that they did not always come up in the same order.  Thus, this particular respondent saw the second statement above (the one with the statement of analytic confidence at the end) first.
If the base rate of the two answers is not 50-50 but rather 40-60 (or even more lopsided in favor of the more familiar, more BLUFy answer), then these results could easily become statistically significant.  It would be like winning a football game you were expected to lose by 35 points!
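The base-rate argument can be made concrete with a quick, standard-library-only sketch.  The split of 34 out of 72 below is purely hypothetical (the survey's actual counts are in the chart above); the point is only that the very same observed split can be unremarkable against a 50-50 null and notable against a 60-40 one:

```python
from math import comb

def binom_cdf(n, k, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n = 72   # number of survey respondents
k = 34   # HYPOTHETICAL count preferring the familiar confidence-last phrasing

# Null hypothesis 1: the usual A/B assumption -- both phrasings equally
# attractive, so a near-even split is exactly what you'd expect.
p_even = binom_cdf(n, k, 0.5)

# Null hypothesis 2: familiarity and BLUF training skew the base rate to
# 60% for the confidence-last phrasing.  Now the same near-even split is
# a surprisingly low showing for the familiar version.
p_familiar = binom_cdf(n, k, 0.6)

print(f"P(X <= {k} | p = 0.5) = {p_even:.3f}")
print(f"P(X <= {k} | p = 0.6) = {p_familiar:.3f}")
```

With SciPy available, `scipy.stats.binomtest` would give a proper two-sided test; the hand-rolled tail above just keeps the sketch dependency-free.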

Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."

Joking aside, it is an important topic.  As you likely know, it is not enough to just make an estimate.  It is also important to include a statement of analytic confidence.  To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis.  I don't think that anyone would disagree that form can have a significant impact on how the content is received.  The real questions are how does form impact content and to what degree?  Getting at those questions in the all-important area of formal estimative writing is truly something well worth additional study.