Thursday, July 19, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (Part 2 - Nuance)

In my last post on this topic, I outlined what I considered to be a pretty good formula for a pretty good estimate:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I also talked about the difference between good WEPs, bad WEPs, and best WEPs; if you are interested in all that, go back and read it.  What I intend to talk about today is the idea of nuance in an estimate.

Outline of the series so far
Let me give you an example of what I mean:  
  • The GDP of Yougaria is likely to grow.
  • The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
Both of these are estimates and both use good WEPs, but one is obviously better than the other.  Why?  Nuance.

Mercyhurst Alum Mike Lyden made a stab at defining what we mean by "nuance" in his 2007 thesis, The Efficacy of Accelerated Analysis in Strategic Level Intelligence Estimates.  There he defined it as how many of the basic journalistic questions (Who, What, When, Why, Where, and How) the estimate addressed.  

For example, Mike would likely give the first estimate above a nuance score of 1.  It really only answers the "What" question.  I think he would give the second estimate a 3, as it appears to answer not only the "What" question but also the "When" and "How (or how much)" questions.  It's not a perfect system, but it makes the point.
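To make the scoring idea concrete, here is a minimal sketch of what counting journalistic questions might look like in code.  The indicator patterns are hypothetical placeholders of my own invention, not Mike's actual rubric, and a real implementation would obviously need a much richer set of cues:

```python
import re

# Hypothetical indicator patterns for each journalistic question.
# These are illustrative placeholders, not Lyden's actual scoring rubric.
INDICATORS = {
    "What":  [r"\bgrow\b", r"\binvade\b", r"\bcollapse\b"],
    "When":  [r"\bnext \d+ months\b", r"\bthis year\b", r"\bby 20\d\d\b"],
    "How":   [r"\b\d+(\.\d+)?\s*-\s*\d+(\.\d+)?\s*%", r"\b\d+(\.\d+)?\s*%"],
    "Where": [r"\bin [A-Z][a-z]+\b"],
    "Who":   [r"\bgovernment\b", r"\bcentral bank\b"],
    "Why":   [r"\bbecause\b", r"\bdue to\b"],
}

def nuance_score(estimate: str) -> int:
    """Count how many of the journalistic questions the estimate appears to address."""
    return sum(
        any(re.search(p, estimate, re.IGNORECASE) for p in patterns)
        for patterns in INDICATORS.values()
    )

print(nuance_score("The GDP of Yougaria is likely to grow."))  # 1
print(nuance_score("The GDP of Yougaria is likely to grow by 3-4% over the next 12 months."))  # 3
```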

In general, I think it is obvious that more nuance is better than less.  A more nuanced estimate is more likely to be useful and it is less likely to be misinterpreted.  There are some issues that crop up and need to be addressed, however - nuances to the nuance rule, if you will.
  • What if I don't have the evidence to support a more nuanced estimate?  Look at the second estimate above.  What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth?  I get it.  You wouldn't feel comfortable putting numbers and dates to this growth.  What would you feel comfortable with?  Would you be more comfortable with an adverb ("grow moderately")?  Would you be more comfortable with a date range ("over the next 6 to 18 months")?  Is there a way to add more nuance in any form with which you can still be comfortable as an analyst?  The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind.  If, in the end, all you are comfortable with is "The economy is likely to grow" then say that.  I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
  • What if the requirement does not demand a nuanced estimate?  What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow?  He/She doesn't need to know any more to make his/her decision.  In fact, spending time and effort to add nuance would actually be counterproductive.  In this case, there is no need to add nuance.  Answer the question and move on.  That said, my experience suggests that this condition is rather rarer than you might think.  Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced.  Whether this is the case or not is something that should be worked out in the requirements process.  I am currently writing a three-part series on this and you can find Part 1 here and Part 2 here.  Part 3 will have to wait until a little later in the summer.
  • What if all this nuance makes my estimate sound clunky?  So, yeah.  An estimate with six clauses in it is going to be technically accurate and very nuanced, but it will also sound as clunky and awkward as a sentence can sound.  Well-written estimates fall at the intersection of good estimative practice and good grammar.  You can't sacrifice either, which is why they can be very hard to craft.  The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences.  In my next post on this, where I will talk about "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.

Monday, July 16, 2018

Farengar Secret-Fire Has A Quest For You! Or What Video Games Can Teach Us About Virtual Intel Requirements

A couple of weeks ago, I wrote a post about the 3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker.  That post was designed for intel professionals who have the luxury of being able to sit down with the decisionmakers they support and have a conversation with them about what it is they want from their intelligence unit.  I also stated that doing this in a virtual environment or on an automated requirements management system like COLISEUM was both more difficult and something I would discuss in the future.

Well, today is your lucky day!  The future is here!


When I think about who does requirements best in a virtual environment, I think about video games.  Particularly, I think about massively multi-player online role-playing games (MMORPGs for short).  Very, very specifically, I think about the questing systems that are standard fare in virtually all of these types of games.  

Quests are the requirements statements of games.  I have included an example of a quest below (it is from a game called Skyrim which is awesome and highly recommended).  These quests differ from intel requirements in that almost all of them are operationally focused (What do we need to do to accomplish our mission?) instead of intelligence focused (What do we need to know about the other guy to accomplish our mission?).  That said, there are still a number of things we can learn from a well-formulated quest that will make intel requirements in a virtual environment easier to craft and to understand.

Retrieved from Immersive Questing
Video game designers know they have to get quests right the first time.  They don't have an opportunity to talk to the players outside the game, so all the necessary information needs to be in the quest itself.  On the other hand, they have to make the quest seem realistic.  Failing to maintain this balance runs the risk of creating an unplayable game.  As a result, video game designers have developed a number of conventions that allow quests to sound real but be complete.  Intel has the "real" part down, so what matters is making sure the requirement is complete.  In this respect, the final version of a good virtual intel requirement bears a remarkable resemblance to the final version of a good quest.  Here are the specifics (with a small sketch of the idea after the list):

  • They both provide background.  Why am I doing this?  What is the context for this quest?  In video games, putting the quest in context allows the story to unfold.  In intelligence work, this context allows the intelligence professional to better understand the decisionmaker's intent.  This, in turn, allows the intelligence professional to have a better understanding of the kinds of information and estimates that will prove most useful.
  • They both define terms.  In the quest above, I am to look for a Dragonstone.  What is a Dragonstone?  The quest defines that for me.  In intelligence work, agreeing on definitions of terms (particularly common terms) is incredibly helpful.  For example, you get a request to do a stability study on Ghana.  What term needs to be defined before you go ahead?  Stability.  We do this exercise every year in our intro classes.  There are multiple definitions of stability out there.  Which one is most appropriate for this decisionmaker is a critical question to ask and answer before proceeding.
  • They both use terms consistently.  If I encounter another quest asking for me to find a Dragonstone, I can count on it being the same thing I am looking for in this quest.  Likewise, in an intelligence requirement, if I define a term in a certain way in one place, I will use that term - not what I think is a synonym, no matter how reasonable it sounds to me - consistently throughout the requirement.  
  • They both often come in standard formats.  All video game players are familiar with a variety of standard quest formats such as the Fetch Quest (like the one above) where the task is to go, get something, and bring it back.  Intelligence requirements also come in more-or-less standard forms such as requests for descriptive or estimative intelligence.  Categorizing requests for intelligence and then studying them for similarities should allow an intelligence unit to develop a list of useful questions to ask based simply on the type of request it is.   
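To pull those four conventions together, here is a minimal sketch of what a "quest-style" requirement template might look like.  The field names, the validation checks, and the Ghana example are my own illustrative choices - this is not a format drawn from COLISEUM or any other real requirements system:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A hypothetical "quest-style" requirement template (illustrative only).
@dataclass
class Requirement:
    question: str                     # the ask itself
    background: str                   # why we are asking (decisionmaker's intent)
    definitions: Dict[str, str] = field(default_factory=dict)  # key terms, defined up front
    request_type: str = "estimative"  # e.g. "descriptive" or "estimative"

    def missing_pieces(self) -> List[str]:
        """Check the requirement against the quest conventions above."""
        gaps = []
        if not self.background.strip():
            gaps.append("no background/context")
        for term in self.definitions:
            # Consistent terms: every defined term should actually appear in the question.
            if term.lower() not in self.question.lower():
                gaps.append(f"defined term '{term}' never used in the question")
        if self.request_type not in ("descriptive", "estimative"):
            gaps.append(f"non-standard request type '{self.request_type}'")
        return gaps

req = Requirement(
    question="How stable is Ghana likely to be over the next 24 months?",
    background="We are considering expanding operations into West Africa.",
    definitions={"stable": "low likelihood of unconstitutional change of government"},
)
print(req.missing_pieces() or "Looks like a well-formed quest.")
```

The point is not the code itself but the checklist: background, definitions, consistent terms, and a recognizable format can all be verified before the requirement ever goes to an analyst.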

Requirements statements, whether managed in person or virtually, are almost always going to start out messy.  Without the advantage of a back-and-forth, personal conversation, the virtual requirements process has a greater potential, however, for breakdown.  Thinking of the requirement as a quest allows intelligence professionals to re-frame the process and focus on the essential elements of the requirement and, perhaps,  anticipate and address predictable points of potential failure in advance.

Look for the final part of this series later this summer when I talk about all the things you need to think about in the middle of a requirements discussion with a decisionmaker!

Tuesday, July 10, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate

At the top end of the analytic art sits the estimate.  While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge."  Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.

How to Write a Mindnumbingly Dogmatic (but Surprisingly Effective) Estimate (Outline)
Estimates are like pizza, though.  There are many different ways to make them and many of those ways are good.  However, with our young analysts, just starting out in the Mercyhurst program, we try to teach them one good, solid, never-fail way to write an estimate.  You can sort of think of it as the pepperoni pizza of estimates.

Here's the formula:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I'm going to spend the next couple of posts breaking this down.  Let's start with what makes a good Word of Estimative Probability - a WEP.   Note:  Linguistic experts call these Verbal Probability Expressions and if you want to dive into the literature - and there's a lot - you should use this phrase to search for it.  

WEPs should first be distinguished from words of certainty.  Words of certainty, such as "will" and "won't", typically don't belong in intelligence estimates.  These words presume that the analyst has seen the future and can speak with absolute conviction about it.  Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.


Notice I also said "good" WEPs, though.  A good WEP is one that effectively communicates a range of probabilities and a bad WEP is one that doesn't.  Examples?  Sure!  Bad WEPs are easy to spot:  "Possibly", "could", and "might" are all bad WEPs.  They communicate ranges of probability so broad that they are useless in decisionmaking.  They usually only serve to add uncertainty rather than reduce it in the minds of decisionmakers.  You can test this yourself.  Construct an estimate using "possible" such as "It is possible that Turkey will invade Syria this year."  Then ask people to rank the likelihood of this statement on a scale of 1-100.  Ask enough people and you will get everything from 1 to 100.  This is a bad WEP.
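If you want to run that test a little more formally, here is a minimal sketch.  The responses are made-up numbers for illustration, and the 50-point threshold is an arbitrary assumption, not a published standard:

```python
# Illustrative only: made-up responses to "It is possible that Turkey will
# invade Syria this year," each on a 1-100 likelihood scale.
responses = [3, 12, 25, 40, 55, 61, 72, 88, 97]

spread = max(responses) - min(responses)
print(f"Responses range from {min(responses)} to {max(responses)} (spread of {spread} points).")

# A crude test: if reasonable people spread their answers across most of the
# scale, the WEP is doing nothing to reduce the decisionmaker's uncertainty.
if spread > 50:
    print("Bad WEP: the word communicates no usable range of probability.")
else:
    print("The word at least bounds the range of probabilities.")
```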


Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities.  Take the WEP "remote" for example.  If I said "There is a remote chance that Turkey will invade Syria this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.



The Kesselman List
Can we kick this whole WEP thing up a notch?  Yes, we can.  It turns out that there are not only "good" WEPs but there are "best" WEPs.  That is, there are some good WEPs that communicate ranges of probabilities better than others.  Here at Mercyhurst, we use the Kesselman List (see above).  Alumna Rachel Kesselman wrote her thesis on this topic a million years ago (approx.).  She read all of the literature then available and came up with a list of the words that were best defined (i.e., had the tightest ranges of probabilities).  The US National Security Community has its own list but we like Rachel's better.  I have written about this elsewhere and you can even read Rachel's thesis and judge for yourself.  We think the Kesselman List has better evidence to support it.  That's why we use it.  We're just that way.
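In practice, a list like Rachel's boils down to a lookup table from words to probability ranges.  The sketch below shows the mechanics only - the words and ranges here are illustrative placeholders, not the actual Kesselman List values (read her thesis for those):

```python
# Illustrative word-to-range table. These ranges are placeholders to show the
# mechanics, NOT the actual Kesselman List values.
WEP_RANGES = {
    "remote":        (0.01, 0.10),
    "unlikely":      (0.10, 0.30),
    "even chance":   (0.40, 0.60),
    "likely":        (0.60, 0.85),
    "highly likely": (0.85, 0.99),
}

def describe(wep: str) -> str:
    """Spell out the range of probabilities a given WEP is meant to communicate."""
    low, high = WEP_RANGES[wep]
    return f"'{wep}' communicates roughly a {low:.0%}-{high:.0%} chance."

print(describe("likely"))  # 'likely' communicates roughly a 60%-85% chance.
print(describe("remote"))  # 'remote' communicates roughly a 1%-10% chance.
```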

Before I finish, let me say a word about numbers.  It is entirely reasonable, and may in fact be preferable, to use numbers to communicate a range of probabilities rather than words.  In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities.  Why, then, do I consider it the current best practice to use words?  There are four reasons:

  • Tradition.  This is the way the US National Security Community does it.  While we don't ignore theory, the Mercyhurst program is an applied program.  It seems to make sense, then, to start here but to teach the alternatives as well.  That is what we do.  
  • Anchoring bias.  Numbers have a powerful place in our minds.  As soon as you start linking notoriously squishy intelligence estimates to numbers you run the risk of triggering this bias.  Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean.  Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
  • Cost of increasing accuracy vs the benefit of increasing accuracy.  How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening?  Imagine that the issue under discussion is really important to you.  How long?  What if it were 79% vs 83%?  57% vs 83%?  35% vs 83%?  It probably depends on what "really important" means to you and how much time you have.  The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit.  This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors,  fundamentally irreducible.  
  • Buy-in.  Using words, even well-defined words, is what is known as a "coarse grading" system.  We are surrounded with these systems.  The traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test.  I have just begun to dig into the literature on coarse grading but one of the more interesting things I have found is that it seems to encourage buy-in.  We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on (see the sketch below).  This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.
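Here is a minimal sketch of that buy-in effect: two analysts who cannot agree on an exact number can still land in the same coarse band.  Again, the bands are placeholders, not the actual Kesselman List values:

```python
# Placeholder probability bands, not the actual Kesselman List values.
BANDS = [
    (0.00, 0.10, "remote"),
    (0.10, 0.40, "unlikely"),
    (0.40, 0.60, "even chance"),
    (0.60, 0.85, "likely"),
    (0.85, 1.00, "highly likely"),
]

def coarse_grade(p: float) -> str:
    """Map a numeric probability onto a coarse verbal band."""
    for low, high, word in BANDS:
        if low <= p <= high:
            return word
    raise ValueError("probability must be between 0 and 1")

# Two analysts who disagree on the exact number...
analyst_a, analyst_b = 0.81, 0.83
# ...still land on the same word, and the team can move on.
print(coarse_grade(analyst_a), coarse_grade(analyst_b))  # likely likely
```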
I'll talk about the rest of the "formula" later in the summer!

Monday, July 2, 2018

3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker

One of the most important skills for virtually all intelligence professionals is the ability to sit down with their organization's decisionmakers and get meaningful intelligence requirements from them.  Requirements that are too vague or poorly designed make the intelligence professional's life more difficult.  More importantly, bad requirements often lead to analysis that fails to meet the decisionmaker's real needs and can, in turn, lead to the organization's failure.

All that makes perfect sense, right?  Getting a good answer to a question implies that the question is clear, that I understand the question and that I have the ability to answer it.  If I ask you, "What time is the movie?" then you are well within your rights to ask me, "Which movie?"  Good requirements emerge from a conversation; they aren't dictated through a megaphone.


Outline of this post (Trying something new here.  Let me know in the comments if you like it!)

Having this kind of requirements discussion is much more difficult in the context of intelligence, however, and not only because the questions are usually much more complicated.  There are a number of reasons for these challenges:
  • Chain of command.  Typically, intel officers work for the decisionmaker.  Even with the best of DMs, there is often a real reluctance to poke at the requirement, to make suggestions about how to make it better, or to question whether it is worth addressing at all.  While it is true that pushing the DM for clarity on his/her requirements statements is "just part of the job", it does not make the situation any less challenging.
  • Lack of understanding about intel.  Most decisionmakers rise up through operational channels.  This means that decisionmakers are usually much more comfortable with operational questions (i.e., What are we going to do with the resources under our control?) than with intelligence questions (i.e., What is happening that is critical to our success or failure but outside of our control?).  Even in the national security realm, where the intelligence function is typically much better understood than in law enforcement or corporations, there is often a lack of understanding or even a misunderstanding of how intelligence supports the organization's decisionmaking process.
  • Ops/Intel Conflation.  While there are good reasons to keep many operational discussions and intelligence discussions separate, that is not the way the decisionmaker is likely to think.  Responsible for integrating intelligence analysis with operational capabilities and constraints, decisionmakers are likely to conflate the two as they talk about requirements.  It is up to intelligence professionals to untangle them in such a way that they have a clear statement of their requirements. 
  • Lack of decisionmaker clarity.  Decisionmakers don't know what they don't know and good decisionmakers worry about that - a lot.  Even when decisionmakers fully understand intel, it is possible for them to have only a vague notion of what they want or need.  Particularly with strategic-level concerns, good DMs will be constantly asking themselves, "What questions should I be asking right now?" and worrying about wasting time and energy chasing an irrelevant question down a rabbit hole.
With this as background there are three essential questions that intelligence professionals should ask and answer before they begin a discussion about requirements:

  1. What does the organization do?  At first glance this seems ridiculous.  How could you work for an organization and not know what it does?  You'd be surprised.  Even small organizations often appear to do one thing but actually spend much of their time or make most of their money doing something entirely different.  When I was younger, for example, I worked for a company called Hargrove's Office Supplies.  You would be excused for thinking that we made our money selling office supplies.  In fact, Hargrove's made most of its money in those days selling and servicing business machines - a very different kind of business.  This problem becomes much more acute in large organizations with many moving parts.  It is worth the intelligence professional's time to get to know the organization they are supporting in some detail - everything from strategic plans to tactical practices.  While the intelligence professional will never be as knowledgeable as the operators running the organization, the more intel professionals know about the goals and purposes of an organization, the more productive the requirements process will be.
  2. What is the current strategy and situation of the organization?  If the first question is "What does the organization do?" then the second question should be "How does it do it?"  All organizations have a strategy (even if it is only an implicit one) and it is worth taking the time to consider what that strategy might be.  It is also worth thinking about the current situation in which the organization finds itself.  Is the organization winning or losing?  Successful and growing or failing and losing ground against its competitors?  While the situation of the organization should not matter in terms of the analysis - it is what it is - understanding how an organization is doing helps you understand where a requirement is coming from and gives insight into how to focus the answer.
  3. Who is the decisionmaker?  This is another simple question with a complicated answer.  It is tempting to believe that the person or organization asking the question is the one who wants the answer.  That is not always the case.  Oftentimes, the real decisionmaker is one or more levels removed from the person asking the question of the intelligence unit.  In this case, it makes sense for intelligence professionals to ask themselves what the real decisionmaker wants.  Given the accelerating pace of the intel world, it is entirely possible that the requirement has gone through an elaborate version of the kids' game Telephone and now bears no relationship to what the real decisionmaker wants.  Even if it does, it is still worth thinking about the kind of answer that will meet the needs of not only the gatekeeper but also of the decisionmaker behind the gate.  Finally, even if there is no gatekeeper, it is worth thinking about others who might not have asked the question but will be able to see the answer.  Almost nothing gets done in a vacuum.  Even the most siloed of programs often have multiple members with different intelligence needs.  It is important, therefore, to consider who these second- and third-level audiences might be before crafting the requirement in order to provide clarity and prevent confusion and mission creep.
All this advice is great for when intel professionals have the luxury of actually meeting with the decisionmakers they support.  How do you deal with a situation that is entirely virtual or managed through an automated requirements management system like COLISEUM?  Don't worry, we will get to all of that later in the summer!

Thursday, June 21, 2018

What Do You Want In A Cyber Self Defense Course?

Your company, agency, whatever has hired an intern from the Mercyhurst intel program who has just completed their freshman year.  What do you want them to know about cyber?


That is one of the questions I will be wrestling with this summer.  I am teaching a new course in the fall called "Cyber Self Defense".  Nobody told me I had to teach this course.  Nope!  I volunteered (!) to teach this course.

You see, we have consistently noted that many of our first-year students come to us with a pretty poor understanding of cyber-related risks and how to minimize them.  The intent of this course is not to turn them all into white hat hackers.  All I really hope to do in the time I have is to make them into knowledgeable users.
It's like the old joke about the two guys and the bear.  The first guy says, "We will never outrun that bear!"  And the second guy goes, "I don't have to outrun the bear.  I just have to outrun you!"  I want to create users who can, at least, outrun the other guy.
We wanted to teach this class at the freshman level because that is where we think it would be most useful.  It gives the students three more years to build on, or at least use, these skills, and an educated user base will only help make our own network more secure.  If this first class goes well, I think I would recommend that it become a requirement for all intel students.

As the obvious wonderfulness of this offering became increasingly apparent, the question naturally arose, "Who will teach this magical, extraordinary course?"  Those of you of a certain age will remember the old Life cereal commercial lovingly preserved by YouTube (above).  Suffice it to say, I get to play the role of "Mikey" in the 2018 remake...

So I throw it out to you, Gentle Readers: what skills would you expect, what abilities would you want to see in that 18-year-old intern you just hired for the summer?  I am looking for tools, tips, tricks, websites, sources, absolutely-must-cover topics, don't-waste-your-time topics, and everything in between.  Free software and resources will be most appreciated but making students pay to get something that gives a big bang for the buck is also OK.

Here are a few details about the class to help you think through the problem.  It is an MWF class; each session lasts 50 minutes, and the course runs for 15 weeks.  I have access to a computer lab but I think I want the class to mostly be about their own devices - specifically cell phones and laptops (which virtually all students have).  We don't have a standard when it comes to these devices so we will likely have a mix of Apple and Windows, Android and iOS (with Windows and Android machines likely in the majority).

Here are my initial thoughts:
  • First couple of weeks:  Focus on cleaning up and maintaining their own devices.  My assumption is that at least some of these students will come in with malware or viruses already on their systems.  Almost all will come in with some sort of factory-installed bloatware, and I doubt any of their browser caches have ever been emptied.  The goal here would be to clean all of this up and to teach them how to maintain their devices.
  • Next couple of weeks:  Focus on likely attack profiles and how to deal with situations where some sort of hack is more likely (e.g. coffee shops and airports).  Things like phishing and social engineering would get covered here.
  • Mid-course:  Focus on privacy.  Talk about how info on the web gets passed around and used.  Talk about how to protect yourself from oversharing and what to do if you do get hacked.
  • Next couple of weeks:  Focus on advanced topics (e.g. proxy servers, VPNs, Linux, etc.).  Should they build their own computer?
  • Final couple of weeks:  Talk about how to diagnose/help others with problems.  One of the most powerful tests of learning is seeing if the student can transfer their knowledge to new situations.  I want this kind of thing to be part of the final exam somehow.
I want this to be a project-based course that gives students lots of hands-on time with their own devices but also gives them enough conceptual knowledge to be able to integrate new stuff as it comes along.

I have a bunch of other half-formed thoughts but I welcome your input and feedback first.  You can either drop it in the comments below (or in any of the social media where this will be posted) or you can just send me a note at kwheaton at mercyhurst dot edu.

Many thanks, hive mind!  Many thanks!