Tuesday, March 19, 2019

What's The Relationship Of An Organization's Goals And Resources To The Type Of Intelligence It Needs?

"Don't blame me, blame this!"
I was trying to find some space on the whiteboard in my office and it occurred to me that I really needed to do something with some of these thoughts.

One of the most interesting (to me, at least) had to do with the relationship between an organization's goals and its resources coupled with the notion of tactical, operational and strategic intelligence.

There is probably not an entry-level course in intelligence anywhere in the world that does not cover the idea of tactical, operational and strategic intelligence.  Diane Chido and I have argued elsewhere that these three categories should be defined by the resources an organization risks when making a decision associated with the intel.  In other words, decisions that risk few of an organization's resources are tactical while those that risk many of an organization's resources are strategic.  Thus, within this context, the nature of the intelligence support should reflect the nature of the decision, and the defining characteristic of the decision is the amount of the organization's resources potentially at risk.

That all seemed well and good, but it seemed to me to be missing something.  Finally (Diane and I wrote our article in 2007, so you can draw your own conclusions...), it hit me!  The model needed to also take into consideration the potential impact on the goals and purposes of the organization.

Here's the handy chart that (hopefully) explains what I mean:


What I realized is that the model that Diane and I had proposed had an assumption embedded in it.  In short, we were assuming that the decisionmaker would understand the relationship between their eventual decision, the resources of the organization, and the impact the decision would have on the organization's goals.  

While there are good reasons to make this assumption (decisionmakers are supposed to make these kinds of calculations, not intel), it is clearly not always the case.  Furthermore, adding this extra bit of nuance to the model makes it more complete.

Let's take a look at some examples.  If the impact on resources of deciding to pursue a particular course of action is low but the pay-off is high, that's a no-brainer (Example:  You don't need the DIRNSA to tell you to have a hard-to-crack password).  Of course you are going to try it!  Even if you fail, it will have cost you little.  Likewise, if the impact on resources is high and the impact on goals is low, then doing whatever it is you are about to do is likely stupid (Example:  Pretty much the whole damn Franklin-Nashville Campaign).
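For those who like their models executable, here is a minimal sketch of that matrix in code.  The 0-1 scales, the 0.5 threshold, and the category labels are my illustrative assumptions, not part of the published model:

```python
# Illustrative sketch of the resources-vs-goals matrix described above.
# The 0-1 scales, the 0.5 threshold, and the labels are assumptions made
# for illustration, not part of the published model.

def characterize_decision(resource_impact: float, goal_impact: float) -> str:
    """Classify a decision by the share of the organization's resources
    at risk and the potential impact on the organization's goals."""
    high_resources = resource_impact >= 0.5
    high_goals = goal_impact >= 0.5
    if not high_resources and high_goals:
        return "No-brainer: costs little even if it fails, big payoff if it works"
    if high_resources and not high_goals:
        return "Likely stupid: risks much for little bearing on goals"
    if high_resources and high_goals:
        return "Strategic: demands deliberate, well-supported intelligence"
    return "Tactical: low stakes either way"

print(characterize_decision(0.1, 0.9))  # the hard-to-crack password case
print(characterize_decision(0.9, 0.1))  # the Franklin-Nashville case
```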

While many of these elements may only be obvious after the fact, to the extent that these kinds of things are observable before the decision is made, reflecting on them may well help both intelligence professionals and decisionmakers understand what is needed of them when confronted by a particular problem.  

Tuesday, February 12, 2019

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (All 3 Parts)

At the top end of the analytic art sits the estimate.  While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge."  Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.

Estimates are like pizza, though.  There are many different ways to make them and many of those ways are good.  With our young analysts just starting out in the Mercyhurst program, however, we teach one good, solid, never-fail way to write an estimate.  You can think of it as the pepperoni pizza of estimates.

Here's the formula:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I'm going to spend the rest of this article breaking this down.  


Good (Best!) WEPs

Let's start with what makes a good Word of Estimative Probability - a WEP.   Note:  Linguistic experts call these Verbal Probability Expressions and if you want to dive into the literature - and there's a lot - you should use this phrase to search for it.  

WEPs should first be distinguished from words of certainty.  Words of certainty, such as "will" and "won't" typically don't belong in intelligence estimates.  These words presume that the analyst has seen the future and can speak with absolute conviction about it.  Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.


Notice I also said "good" WEPs, though.  A good WEP is one that effectively communicates a range of probabilities and a bad WEP is one that doesn't.  Examples?  Sure!  Bad WEPs are easy to spot:  "possibly", "could", and "might" are all bad WEPs.  They communicate ranges of probability so broad that they are useless in decisionmaking.  They usually serve only to add uncertainty rather than reduce it in the minds of decisionmakers.  You can test this yourself.  Construct an estimate using "possible", such as "It is possible that Turkey will invade Iraq this year."  Then ask people to rank the likelihood of this statement on a scale of 1-100.  Ask enough people and you will get everything from 1 to 100.  This is a bad WEP.


Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities.  Take the WEP "remote" for example.  If I said "There is a remote chance that Turkey will invade Iraq this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.
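If you wanted to run that 1-to-100 test at any scale, a few lines of code will do.  This is a sketch only; the response data and the 40-point cutoff for "too broad" are invented for illustration:

```python
# A sketch of the informal test described above: ask many people to put a
# number (1-100) on an estimate that uses a given WEP, then treat a wide
# spread of answers as the mark of a bad WEP. The response data and the
# 40-point cutoff are invented for illustration.

responses = {
    "possible": [3, 15, 40, 55, 72, 90, 98],
    "remote":   [2, 5, 8, 10, 12],
}

for wep, scores in responses.items():
    spread = max(scores) - min(scores)
    verdict = "bad (too broad to be useful)" if spread > 40 else "good (bounded)"
    print(f"{wep!r}: interpreted as {min(scores)}-{max(scores)} -> {verdict}")
```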


The Kesselman List
Can we kick this whole WEP thing up a notch?  Yes, we can.  It turns out that there are not only "good" WEPs but "best" WEPs.  That is, some good WEPs communicate ranges of probabilities better than others.  Here at Mercyhurst, we use the Kesselman List (see above).  Alumna Rachel Kesselman wrote her thesis on this topic a million years ago (approx.).  She read all of the literature then available and came up with a list of words, based on that literature, that were best defined (i.e., had the tightest ranges of probabilities).  The US National Security Community has its own list but we like Rachel's better.  I have written about this elsewhere and you can even read Rachel's thesis and judge for yourself.  We think the Kesselman List has better evidence to support it.  That's why we use it.  We're just that way.
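To make the idea of a structured WEP list concrete, here is a sketch of how such a list might be used as an analyst's self-check.  The numeric ranges below are placeholders, not Kesselman's actual numbers; read the thesis for those:

```python
# A sketch of how a structured WEP list ties words to numeric ranges and
# how an analyst might use it as a self-check. These ranges are
# placeholders, not Kesselman's actual numbers; read the thesis for those.

WEP_RANGES = {
    "remote":        (0.01, 0.05),
    "unlikely":      (0.05, 0.25),
    "likely":        (0.60, 0.80),
    "highly likely": (0.80, 0.95),
}

def wep_matches_belief(wep: str, analyst_probability: float) -> bool:
    """True if the analyst's own probability falls inside the range
    the chosen WEP communicates to readers."""
    low, high = WEP_RANGES[wep]
    return low <= analyst_probability <= high

print(wep_matches_belief("likely", 0.70))  # True: word matches belief
print(wep_matches_belief("likely", 0.30))  # False: choose a different WEP
```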

Before I finish, let me say a word about numbers.  It is entirely reasonable, and may well be preferable, to use numbers to communicate a range of probabilities rather than words.  In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities.  Why, then, do I consider it the current best practice to use words?  There are four reasons:

  • Tradition.  This is the way the US National Security Community does it.  While we don't ignore theory, the Mercyhurst program is an applied program.  It seems to make sense, then, to start here but to teach the alternatives as well.  That is what we do.  
  • Anchoring bias.  Numbers have a powerful place in our minds.  As soon as you start linking notoriously squishy intelligence estimates to numbers you run the risk of triggering this bias.  Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean.  Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
  • Cost of increasing accuracy vs the benefit of increasing accuracy.  How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening?  Imagine that the issue under discussion is really important to you.  How long?  What if it were 79% vs 83%?  57% vs 83%?  35% vs 83%?  It probably depends on what "really important" means to you and how much time you have.  The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit.  This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors,  fundamentally irreducible.  
  • Buy-in.  Using words, even well-defined words, is what is known as a "coarse grading" system.  We are surrounded by these systems.  The traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test.  I have just begun to dig into the literature on coarse grading but one of the more interesting things I have found is that it seems to encourage buy-in.  We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on.  This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.  A minimal sketch of this binning idea follows this list.
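Here is the minimal binning sketch promised above.  The bin edges are illustrative assumptions; the point is only that two analysts who disagree on the exact number can still land in the same coarse grade:

```python
# A sketch of "coarse grading" applied to probabilities: collapse a precise
# number into one of a few agreed bins so the team can agree and move on.
# The bin edges here are illustrative assumptions.

def coarse_grade(p: float) -> str:
    if p < 0.20:
        return "unlikely"
    if p < 0.55:
        return "roughly even chance"
    if p < 0.85:
        return "likely"
    return "highly likely"

# Two analysts who cannot agree on 81% vs. 83% land in the same bin:
print(coarse_grade(0.81), coarse_grade(0.83))  # likely likely
```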
Nuance

WEPs are important but they clearly aren't the only thing.  What adds value to an estimate is its level of nuance.

Let me give you an example of what I mean:  
  • The GDP of Yougaria is likely to grow.
  • The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
Both of these are estimates and both of these use good WEPs but one is obviously better than the other.  Why?  Nuance.

Mercyhurst Alum Mike Lyden made a stab at defining what we mean by "nuance" in his 2007 thesis, The Efficacy of Accelerated Analysis in Strategic Level Intelligence Estimates.  There he defined it as how many of the basic journalistic questions (Who, What, When, Why, Where, and How) the estimate addressed.  

For example, Mike would likely give the first estimate above a nuance score of 1.  It really only answers the "What" question.  I think he would give the second estimate a 3, as it appears to answer not only the "What" question but also the "When" and "How (or how much)" questions.  It's not a perfect system but it makes the point.
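As a sketch, Mike's scoring scheme reduces to a simple checklist.  The function below assumes a human analyst has already judged which questions an estimate answers; the code just counts them:

```python
# A checklist sketch of Lyden-style nuance scoring. It assumes a human has
# already judged which journalistic questions the estimate answers; the
# code just counts them.

JOURNALISTIC_QUESTIONS = {"who", "what", "when", "where", "why", "how"}

def nuance_score(answered: set[str]) -> int:
    """Count how many of the six journalistic questions are addressed."""
    return len(answered & JOURNALISTIC_QUESTIONS)

# "The GDP of Yougaria is likely to grow."
print(nuance_score({"what"}))                 # 1
# "The GDP of Yougaria is likely to grow by 3-4% over the next 12 months."
print(nuance_score({"what", "when", "how"}))  # 3
```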

In general, I think it is obvious that more nuance is better than less.  A more nuanced estimate is more likely to be useful and it is less likely to be misinterpreted.  There are some issues that crop up and need to be addressed, however - nuances to the nuance rule, if you will.
  • What if I don't have the evidence to support a more nuanced estimate?  Look at the second estimate above.  What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth?  I get it.  You wouldn't feel comfortable putting numbers and dates to this growth.  What would you feel comfortable with?  Would you be more comfortable with an adverb ("grow moderately")?  Would you be more comfortable with a date range ("over the next 6 to 18 months")?  Is there a way to add more nuance in any form with which you can still be comfortable as an analyst?  The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind.  If, in the end, all you are comfortable with is "The economy is likely to grow" then say that.  I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
  • What if the requirement does not demand a nuanced estimate?  What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow?  He/she doesn't need to know any more to make his/her decision.  In fact, spending time and effort to add nuance would actually be counterproductive.  In this case, there is no need to add nuance.  Answer the question and move on.  That said, my experience suggests that this condition is rather more rare than not.  Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced.  Whether this is the case or not is something that should be worked out in the requirements process.
  • What if all this nuance makes my estimate sound clunky?  So, yeah.  An estimate with six clauses in it is going to be technically accurate and very nuanced but sound as clunky and awkward as a sentence can sound.  Well-written estimates fall at the intersection of good estimative practice and good grammar.  You can't sacrifice either, which is why they can be very hard to craft.  The solution is, of course, to either refine your single estimative sentence or to break the estimative sentence into several sentences.  In the next section, on "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.

Due to's And Despite's

Consider again the estimate from above:  "The GDP of Yougaria is likely to grow 3-4% over the next 12 months."  Why?  Why do you, the analyst, think this is the case?  

Typically, there are a few key facts and some accompanying logic that are acting as drivers of these kinds of estimates.  It may have something to do with trade, for example, or with new economic opportunities opening up in the country.  It may be more about where the country is in the business cycle than anything else.  For whatever reason, these are the critical facts and logic that underpin your entire estimate.  If you are wrong about these drivers, because of incomplete collection, poor analysis or deliberate deception, your estimate is likely wrong as well.

I call these factors "due to's" because you can easily see them as a "due to" clause added to the estimate:  
"Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months."
If "due to's" are driving your faith in your estimate, "despite" clauses are the ones undermining it.  In any non-trivial exercise in estimation there are likely many facts which undermine your estimate.  In the example above, yes, there was an uptick in trade and the oil reserves are great but what about the slight increase in unemployment last month?  Or the reduction in consumer confidence?  

Much more than mere procatalepsis (gosh, I love that word...), the true intent behind the "despite" clause is to be intellectually honest with the decisionmaker you are supporting as an intelligence professional.  In short, you are saying two things to that DM.  First, "I recognize that not all of the facts available support my estimate"  and, second, "despite this, I still believe my estimate is accurate."  

How might that play itself out in our example?  
Despite recent increases in unemployment, the GDP of Yougaria is likely to grow 3-4% over the next 12 months.  Increases in trade have been strong and the recently discovered oil reserves in the northern part of the country will likely drive significant growth over the next year.
Or
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months.  While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary.
These are just examples, of course, and the actual formulation depends on the facts at hand.  The goal remains the same in all cases - here's what I think, here's why, here's why not and here's why the "why nots" don't matter. 
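For the mechanically minded, the whole pattern can be sketched as a simple template.  The function and parameter names are mine, purely for illustration:

```python
# A template sketch of the full pattern: here's why, here's what I think,
# here's why not, and why the why-nots don't change the call. The function
# and parameter names are mine, purely for illustration.

def build_estimate(due_to: str, claim: str, despite: str, rebuttal: str) -> str:
    return f"Due to {due_to}, {claim}. While {despite}, {rebuttal}."

print(build_estimate(
    due_to="a substantial increase in trade and the discovery of "
           "significant oil reserves in the northern part of the country",
    claim="the GDP of Yougaria will likely increase 3-4% over the next 12 months",
    despite="unemployment recently ticked upward",
    rebuttal="this is likely due to seasonal factors and is only temporary",
))
```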

Analytic Confidence

If the estimate is what the analyst thinks is likely or unlikely to happen, then analytic confidence can most easily be thought of as the odds that the analyst is wrong.  Imagine two analysts in two different parts of the world have been asked to assess Yougaria's economy for the next year.  One is a beginner with no real experience or contacts in Yougaria.  His sources are weak and he is under considerable time pressure to produce.  The other analyst, operating wholly independently of the first, is a trained economist with many years' experience with Yougaria.  His sources are excellent and he has a proven track record of estimating Yougaria's economic performance.

Now, imagine that both of them just so happen to come to the exact same estimative conclusion - Yougaria's GDP is likely to grow 3-4% over the next 12 months.  Both report their estimative conclusions to their respective decisionmakers.  

It is not too difficult to see that the decisionmaker of the first analyst might be justifiably hesitant to commit significant resources based on this estimate of Yougaria's economic performance.  Absent additional analysis, it is quite obvious that there are a number of good reasons why the analyst in this case might be wrong.  

The decisionmaker supported by the second analyst is in exactly the opposite position.  Here there are very good reasons to trust the analyst's estimate and to commit to courses of action that are premised on its accuracy.  In the first case we could say that the analytic confidence is low while in the second case we could say it is high.  

What are the factors that suggest whether an analyst is more likely to be right or wrong?  Some of the earliest research on this was done by a Mercyhurst alum, Josh Peterson.  In his thesis, he went looking for research-based reasons why a particular analyst is more likely to be right or wrong.  He identified seven:
  • How good are your sources?
  • How well do your independent sources corroborate each other?
  • Are you a subject matter expert?  (This is less important than you might think, however.)
  • Did you collaborate with other analysts and exactly how did you do that? (Some methods are counterproductive.) 
  • Did you structure your thinking in a way proven to improve your forecasting accuracy? (A number of commonly taught techniques don't work particularly well BTW.)
  • How complex did you perceive the task to be?
  • How much time pressure were you under?
Josh would be the first person to tell you the flaws in his research.  For a start, he doesn't know if this list is complete nor does he know how much weight each factor should receive.  In general, then, there is a lot more research to be done on the concept of analytic confidence.  That said, we do know some things and it would be intellectually dishonest not to give decisionmakers some sense of our level of confidence when we make our estimates.

What does this look like in practice?  Well, I tend to think the best we can do right now is to divide the concept of confidence into three levels.  Humans are usually pretty good at intuitively spotting the very best or the very worst but not so good with rank ordering things in the middle.  I teach students that this means that the most common assessment of analytic confidence is likely moderate with high and low reserved for those situations where the seven factors are either largely present or largely absent.  
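A sketch of that three-level mapping, using Josh's seven factors as simple present/absent flags, might look like the code below.  Equal weighting and the 6-or-more / 1-or-fewer cutoffs are my simplifications; as noted above, the research has not settled the weights:

```python
# A sketch mapping Josh's seven factors onto the three confidence levels
# described above. Treating each factor as an equal-weight, present/absent
# flag, and the 6-or-more / 1-or-fewer cutoffs, are my simplifications; the
# research has not settled the weights.

FACTORS = {
    "good_sources", "independent_corroboration", "subject_matter_expertise",
    "productive_collaboration", "proven_structured_method",
    "low_perceived_complexity", "low_time_pressure",
}

def analytic_confidence(present: set[str]) -> str:
    score = len(present & FACTORS)
    if score >= 6:
        return "high"      # factors largely present
    if score <= 1:
        return "low"       # factors largely absent
    return "moderate"      # the most common case

# Roughly the situation in the Yougaria example below: adequate time, low
# complexity, corroboration and collaboration, but no high-quality sources.
print(analytic_confidence({"low_time_pressure", "low_perceived_complexity",
                           "independent_corroboration",
                           "productive_collaboration"}))  # moderate
```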

What then would our Yougarian estimate look like with analytic confidence added to the mix?
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months.  While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary. 
Analytic confidence in this estimate is moderate.  The analyst had adequate time and the task was not particularly complex.  However, the reliability of the sources available on this topic was average, with no high-quality sources available for the estimate.  That said, the sources available did tend to corroborate each other, and analyst collaboration was very strong.
Final Thoughts

This is not the only way to write an effective estimate.  There are other formulations that likely offer equal or even greater clarity.  There is clearly a need for additional research in virtually all of the elements outlined here.  There is also room for more creative solutions that convey the degree of uncertainty with more precision, encourage analyst buy-in, and communicate all of that more effectively to the decisionmakers supported.

The overly dogmatic "formula" discussed here is, however, a place to start.   Particularly useful with entry-level analysts who may be unused to the rigor necessary in intelligence analysis, this approach helps them create "good enough" analysis in a relatively short time while providing a sound basis for more advanced formulations.


Thursday, January 24, 2019

I could use your input...

I could use your (everyone's!) input. 

My protagonist, imagined by the very fine artist Kris Brannock


I just finished a novel I have been working on (and off and on and off) for the last two years. It has almost nothing to do with the kind of stuff I normally work on.

You see, it is an adventure story about a young, street cat who joins the famous cat colony at the Hermitage Museum in St Petersburg, Russia.

First, no, I am not kidding.

Second, I don't know why I did it either.

The Hermitage Museum really does exist (of course) and there really is a cat colony that has been in residence pretty much non-stop since the Empress Elizabeth(!) brought cats to the palace in 1747 to control the rats and mice. It just seemed like a cool setting for a story that I wanted to tell.

The book is targeted at what publishers call "middle grade readers". That is somewhere between a precocious 8 and 13+, I guess. Anyone who likes adventure and likes cats would probably be intrigued, though. I have come up with two ideas for a title and the link below takes you to a one question survey that asks you to vote on which you like the best.

I'd appreciate your input! 

Take the Survey!

Monday, October 22, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (All 6 Parts)

An intel professional successfully gets everything he needs from a DM in a requirements briefing.  Guess which one is the unicorn...
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  In this post, I pull all of those pieces together and outline the six things I think an intelligence professional needs to consider while discussing requirements with a decisionmaker (DM).

1.  Does the DM really want intelligence?

It goes without saying that an organization's mission is going to drive its intel requirements.  Whether the goal is to launch a new product line or take the next hill, decisionmakers need intel to help them think through the problem.

Unfortunately, DMs often conflate operational concerns ("What are we going to do?" kinds of questions) with intel concerns ("What is the other guy going to do?" kinds of questions).  This is particularly true in a business environment where intelligence as a distinct function of business is a relatively new concept.

Good intelligence requirements are typically about something which is important to an organization's success or failure but which is also outside that organization's control.  Good intelligence requirements are, in short, about the "other guy" - the enemy, the competitor, the criminal - or, at least, about the external environment.

Intelligence professionals need to be able to extract intelligence requirements from the broader conversation and play them back to the DM to confirm that both parties understand what needs to be done before they go to work.


"And what kind of intelligence would the gentleman prefer today?"
2.  What kind of intelligence is the DM looking for?

There are two broad (and informal) categories of intelligence - descriptive and estimative.  Descriptive intelligence is about explaining something that is relevant to the decision at hand.  Estimative intelligence is about what that "something" is likely to do next.  It is the difference between "Who is the president of Burkina Faso now?" and "Who is the next president of Burkina Faso likely to be?"

Estimative intelligence is obviously more valuable than descriptive intelligence.  Estimative intelligence allows the DM and his or her operational staff to plan for the future, to be proactive instead of reactive.  Surprisingly, though, DMs often forget to ask for estimates regarding issues they think will be relevant to their decisions.  It is worth the intelligence professional's time, therefore, to look for places where an estimate might be useful and suggest it as an additional requirement.

While I am never one to look for more work, the truth is that descriptive intelligence is becoming easier and easier to find.  The real value in having a dedicated intel staff is in that staff's ability to make estimates.  If all you do is what computers do well (i.e., describe), then you run the risk of being downsized or eliminated the next time there is a budget crunch.


"I challenge your assumptions, sir!"
3. What are the DM's assumptions?

There are three kinds of assumptions intelligence professionals need to watch for in their DMs when discussing requirements:
  • About the requirement
  • About the answer to the requirement
  • About the intel team
Consider this requirement:  "Will the Chinese provide the equipment for the expansion of mobile cellphone services into rural Ghana?"  The DM is clearly assuming that there is going to be an expansion of cellphone services.  That doesn't make it a bad requirement but analysts should start by checking this assumption.  

Note also that the DM did not frame the question as "Who is going to provide the equipment...".  Rather, he or she highlighted the potential role of the Chinese.  This kind of framing suggests that the DM thinks he or she already knows the answer to the requirement but just wants a "double check".  Other interpretations are possible, of course, but it is worth noting if only so the intelligence professionals working the issue don't approach the problem with blinders on.

Finally, it is also important to think about the assumptions the DM has about the team working on the requirement.  What does the DM see when he or she looks out at our team?  Are we all young and eager?  Old and grizzled?  Does our reputation - good or bad - precede us?  Finally, is the DM asking the "real" requirement or just what he or she thinks the team can handle?  Not getting at the real questions the DM needs answered is a recipe for failure or, at least, the perception of failure, which is probably worse.

4.  What does the DM mean when he/she/they say "x"?

"Hic sunt dracones!"
"I'm worried about Europe.  What moves are our competitors likely to make next?"  This is a perfectly reasonable request from a decisionmaker.  In fact, if you are in a competitive intelligence position for a larger corporation, you have likely heard something close to it.  

While reasonable, it is the kind of requirements statement that is filled with dragons for the unwary.  Not the least of these dragons is definitional.   When the DM said "competitors" did he or she mean competitors that reside in Europe or competitors that sell in Europe or both?  And what did he or she mean by "Europe"?  Continental Europe, the EU, western Europe, something else?

Listening carefully for common words that are being used in very specific ways, or that function as technical terms within a particular organization, is a critical aspect of a successful requirements meeting.  If the intelligence professional has a long history with a particular decisionmaker, then these terms of art may be common knowledge.  Even in this case, however, it is worth confirming with the DM that everyone shares the same understanding of these kinds of words.

That is why I consider it best practice to memorialize the requirement in writing after the meeting and to include (usually by way of footnote) any terms defined in the meeting.  In addition, if certain terms weren't defined in the meeting but need defining afterwards, it makes sense for the intel professional to make a best guess at what the DM meant, draw specific attention to that tentative definition, and seek the DM's confirmation of it.

This may sound like a convoluted process, but, as I tell my students, not getting the requirement right is like building a house on the wrong piece of property.  It doesn't matter how beautiful or elegant it is, if you build it on the wrong piece of property you will still have to tear it down and start all over again.  The same holds true for a misunderstood intelligence requirement.  Get the requirement wrong and it doesn't matter how good your answer is - you answered the wrong question!

5. What constraints is the DM willing to put on the requirement?
"Jeeves, I am fairly certain that is not
what Prof. Wheaton had in mind
when he said we need to constrain the requirement."

I once had a DM who was looking to expand his local business and asked for a nationwide study.  His business was based on serving local customers and he did not have the resources to go nationwide and yet...  

Decisionmakers are notoriously reluctant to put constraints on requirements.  They worry that, if they do, just on the other side of whatever bright line they think they have drawn, there will be a perfect customer for their business, a critical fact that lets them make a foolproof plan to defeat the enemy, or the key piece of info that solves all their problems.  I call this the "pot of gold" syndrome and it afflicts every decisionmaker.  


This worry, of course, blinds these same decisionmakers to the inevitable problem this approach causes:  Given the constant limitations on time and resources, trying to look everywhere makes it difficult for the intelligence unit to look anywhere in depth.  Knowing the areas that are of genuine interest and can genuinely support the decisionmaker helps get the most out of the intelligence effort.  Likewise, knowing where you don't need to look is equally helpful.


There are at least six different kinds of constraints that intelligence professionals need to address when engaged in a discussion about requirements (a simple checklist sketch follows this list):
  • Geography.  What are the limits to how far we need to look?  Where can we draw lines on our maps?  Geography is used loosely here, by the way.  Understanding and constraining the "market landscape" or the "cyber landscape", for example, also fall within this guidance.
  • Time.  How far forward do you want us to look?  Every problem has a "predictive horizon" beyond which it is hard to see.  Moreover, you will likely see a good bit more detail with a good bit more confidence if you are looking one month out instead of 10 years out.
  • Organizational units.  At what level does the DM want the analysis?  Am I looking at industries, companies or departments within companies?  Countries, regions, or continents?  
  • Processes, functions.  Are there certain processes or functions of the target that the DM cares more about than others?  Are there processes or functions that we could ignore?  For example, imagine a company that doesn't care how its competitor manages its HR but really wants to know about its supply chain.
  • People.  Which people in the target organization are most important to the DM (if any)?  Are we looking at the government of a country or the president of a country?  A competitor or the CEO of that competitor?  Obviously, "both!" might be the right answer, but asking the question makes it clear to both the DM and the intel unit.
  • Money.  Are there amounts of money about which we do not care?  Do you want me to try to look at every drug transaction or just the large ones?  Is every act of bribery, no matter how trivial, really worth spending the time and energy on in a study of a country's level of corruption?  Again, the answer in both cases may be "yes!" but without asking, the intel unit runs the risk of failing to provide the level of analysis the DM wants and will almost inevitably waste time analyzing issues that the DM cares little about.
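As flagged above, here is a simple checklist sketch for tracking these six constraints through a requirements meeting.  The class and field names are my own invention, purely for illustration:

```python
# A checklist sketch for the six constraint types, assuming a simple
# dataclass. Field names mirror the list above; None flags a constraint
# that still needs to be raised with the DM.

from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RequirementConstraints:
    geography: Optional[str] = None
    time_horizon: Optional[str] = None
    organizational_units: Optional[str] = None
    processes_functions: Optional[str] = None
    people: Optional[str] = None
    money_threshold: Optional[str] = None

    def unresolved(self) -> list[str]:
        """Names of constraints not yet discussed in the requirements meeting."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

req = RequirementConstraints(geography="EU member states", time_horizon="12 months")
print(req.unresolved())  # the constraints still to raise with the DM
```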
6. What are the DM's priorities?

In any sort of robust requirements discussion, it is normal for many more requirements to emerge than the intelligence unit can handle.  Rather than complaining about all the work, the better approach is to get the DM to state his or her priorities.

I have worked with hundreds of DMs and all of them understand resource constraints.  Even with quality intel analysis, I have often seen teams disappoint a DM when they have to say, "We didn't have time/money/people to get to all of your requirements."  I have never, however, seen a DM disappointed when that team can say, "We didn't have time/money/people to get to all of your requirements, but we were able to address your top 5 (or 10 or whatever) requirements."

The key to being able to address the top priorities, however, is knowing what they are.  As with all constraints, DMs are typically hesitant to prioritize their questions.  They may feel that they do not know enough to do so.  They may also be worried that the intelligence unit will put on blinders such that they will only look at the priorities and forget to keep an eye out for unexpected threats and opportunities.  

One of the keys here is to not make assumptions about priorities.  Even if the DM sends the team a numbered list, it makes sense to go back and ask, "Are these in priority order?"  Almost every time I have asked that question - forcing the DM to actively think about their priorities - I get changes to the order.  Likewise, just because a DM talks a lot about a certain issue, do not assume that it is the top priority.  It may just be the most recent thing that has come up or a new idea that the DM just had.  Asking, "We have talked about X quite a bit.  Is this where you would like us to focus?" is still important.

Priorities are an enormously powerful tool for an intelligence unit. They allow the unit to focus and to make tough decisions about what is relevant and what is not.  Don't leave your requirements meeting without them!