Monday, October 22, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (All 6 Parts)

An intel professional successfully gets everything he needs from a
DM in a requirements briefing.  Guess which one is the unicorn...
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  In this post, I pull all of those pieces together and outline the six things I think an intelligence professional needs to consider while discussing requirements with a decisionmaker (DM).

1.  Does the DM really want intelligence?

It goes without saying that an organization's mission is going to drive its intel requirements.  Whether the goal is to launch a new product line or take the next hill, decisionmakers need intel to help them think through the problem.

Unfortunately, DMs often conflate operational concerns ("What are we going to do?" kinds of questions) with intel concerns ("What is the other guy going to do?" kinds of questions).  This is particularly true in a business environment where intelligence as a distinct function of business is a relatively new concept.

Good intelligence requirements are typically about something which is important to an organization's success or failure but which is also outside that organization's control.  Good intelligence requirements are, in short, about the "other guy" - the enemy, the competitor, the criminal - or, at least, about the external environment.

Intelligence professionals need to be able to extract intelligence requirements from the broader conversation and play them back to the DM to confirm that both parties understand what needs to be done before they go to work.


"And what kind of intelligence would the gentleman prefer today?"
2.  What kind of intelligence is the DM looking for?

There are two broad (and informal) categories of intelligence - descriptive and estimative.  Descriptive intelligence is about explaining something that is relevant to the decision at hand.  Estimative intelligence is about what that "something" is likely to do next.  It is the difference between "Who is the president of Burkina Faso now?" and "Who is the next president of Burkina Faso likely to be?"

Estimative intelligence is obviously more valuable than descriptive intelligence.  Estimative intelligence allows the DM and his or her operational staff to plan for the future, to be proactive instead of reactive.  Surprisingly, though, DMs often forget to ask for estimates regarding issues they think will be relevant to their decisions.  It is worth the intelligence professional's time, therefore, to look for places where an estimate might be useful and suggest it as an additional requirement.

While I am never one to look for more work, the truth is that descriptive intelligence is becoming easier and easier to find.  The real value in having dedicated intel staff is in that staff's ability to make estimates.  If all you do is what computers do well (i.e., describe) then you run the risk of being downsized or eliminated the next time there is a budget crunch.


"I challenge your assumptions, sir!"
3. What are the DM's assumptions?

There are three kinds of assumptions intelligence professionals need to watch for in their DMs when discussing requirements:
  • About the requirement
  • About the answer to the requirement
  • About the intel team
Consider this requirement:  "Will the Chinese provide the equipment for the expansion of mobile cellphone services into rural Ghana?"  The DM is clearly assuming that there is going to be an expansion of cellphone services.  That doesn't make it a bad requirement but analysts should start by checking this assumption.  

Note also that the DM did not frame the question as "Who is going to provide the equipment...".  Rather, he or she highlighted the potential role of the Chinese.  This kind of framing suggests that the DM thinks he or she already knows the answer to the requirement but just wants a "double check".  Other interpretations are possible, of course, but it is worth noting if only so the intelligence professionals working the issue don't approach the problem with blinders on.

Finally, it is also important to think about the assumptions the DM has about the team working on the requirement.  What does the DM see when he or she looks out at our team?  Are we all young and eager?  Old and grizzled?  Does our reputation - good or bad - precede us?  Finally, is the DM asking the "real" requirement or just what he or she thinks the team can handle?  Not getting at the real questions the DM needs answered is a recipe for failure or, at least, the perception of failure, which is probably worse.

4.  What does the DM mean when he/she/they say "x"?

"Hic sunt dracones!"
"I'm worried about Europe.  What moves are our competitors likely to make next?"  This is a perfectly reasonable request from a decisionmaker.  In fact, if you are in a competitive intelligence position for a larger corporation, you have likely heard something close to it.  

While reasonable, it is the kind of requirements statement that is filled with dragons for the unwary.  Not the least of these dragons is definitional.   When the DM said "competitors" did he or she mean competitors that reside in Europe or competitors that sell in Europe or both?  And what did he or she mean by "Europe"?  Continental Europe, the EU, western Europe, something else?

Listening carefully for common words that are actually being used in very specific ways - or that are, in a particular organization, technical terms - is a critical aspect of a successful requirements meeting.  If the intelligence professional has a long history with a particular decisionmaker, these terms of art may be common knowledge.  Even then, however, it is worth confirming with the DM that everyone shares the same understanding of these kinds of words.

That is why I consider it best practice to memorialize the requirement in writing after the meeting and to include (usually by way of footnote) any terms defined in the meeting.  If the intel professional feels the need to define additional terms afterwards, he or she should make a best guess at what the DM meant, draw specific attention to that tentative definition, and seek the DM's confirmation of it.

This may sound like a convoluted process, but, as I tell my students, not getting the requirement right is like building a house on the wrong piece of property.  It doesn't matter how beautiful or elegant it is, if you build it on the wrong piece of property you will still have to tear it down and start all over again.  The same holds true for a misunderstood intelligence requirement.  Get the requirement wrong and it doesn't matter how good your answer is - you answered the wrong question!
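To make the memorialization practice above concrete, here is a minimal sketch of how a written requirement with footnoted definitions might be generated.  The function name, layout, and "[TENTATIVE]" flag are my own illustrative choices, not a standard format:

```python
def memorialize(requirement, definitions, tentative=()):
    """Render a requirement as text, footnoting each defined term.

    `definitions` maps terms to the meanings agreed in the meeting;
    terms listed in `tentative` were defined by the analyst afterwards
    and are flagged for DM confirmation.
    """
    lines = [requirement, ""]
    for i, (term, meaning) in enumerate(sorted(definitions.items()), 1):
        flag = " [TENTATIVE - please confirm]" if term in tentative else ""
        lines.append(f"[{i}] {term}: {meaning}{flag}")
    return "\n".join(lines)

text = memorialize(
    "What moves are our competitors likely to make next in Europe?",
    {"Europe": "EU member states only",
     "competitors": "firms that sell in Europe, wherever headquartered"},
    tentative=("competitors",),
)
print(text)
```

The point is not the tooling but the discipline: every term of art gets written down, and every analyst-supplied definition is visibly marked as awaiting the DM's sign-off.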

5. What constraints is the DM willing to put on the requirement?
"Jeeves, I am fairly certain that is not
what Prof. Wheaton had in mind
when he said we need to constrain the requirement."

I once had a DM who was looking to expand his local business and asked for a nationwide study.  His business was based on serving local customers and he did not have the resources to go nationwide and yet...  

Decisionmakers are notoriously reluctant to put constraints on requirements.  They worry that, if they do, just on the other side of whatever bright line they think they have drawn, there will be a perfect customer for their business, a critical fact that lets them make a foolproof plan to defeat the enemy, or the key piece of info that solves all their problems.  I call this the "pot of gold" syndrome and it afflicts every decisionmaker.  


This worry, of course, blinds these same decisionmakers to the inevitable problem this approach causes:  Given the constant limitations on time and resources, trying to look everywhere makes it difficult for the intelligence unit to look anywhere in depth.  Knowing the areas that are of genuine interest and can genuinely support the decisionmaker helps get the most out of the intelligence effort.  Likewise, knowing where you don't need to look is equally helpful.


There are at least six different kinds of constraints that intelligence professionals need to address when engaged in a discussion about requirements:
  • Geography.  What are the limits to how far we need to look?  Where can we draw lines on our maps?  Geography is used loosely here, by the way.  Understanding and constraining the "market landscape" or the "cyber landscape", for example, also fall within this guidance.
  • Time.  How far forward do you want us to look?  Every problem has a "predictive horizon" beyond which it is hard to see.  Moreover, you will likely see a good bit more detail with a good bit more confidence if you are looking one month out instead of 10 years out.
  • Organizational units.  At what level does the DM want the analysis?  Am I looking at industries, companies or departments within companies?  Countries, regions, or continents?  
  • Processes, functions.  Are there certain processes or functions of the target that the DM cares more about than others?  Are there processes or functions that we could ignore?  For example, imagine a company that doesn't care how its competitor manages its HR but really wants to know about its supply chain.
  • People.  Which people in the target organization are most important to the DM (if any)?  Are we looking at the government of a country or the president of a country?  A competitor or the CEO of that competitor?  Obviously, "both!" might be the right answer but asking the question makes it clear to both the DM and the intel unit.
  • Money.  Are there amounts of money about which we do not care?  Do you want me to try to look at every drug transaction or just the large ones?  Is every act of bribery, no matter how trivial, really worth spending the time and energy on in a study of a country's level of corruption?  Again, the answer in both cases may be "yes!" but without asking, the intel unit runs the risk of failing to provide the level of analysis the DM wants and will almost inevitably waste time analyzing issues that the DM cares little about.
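One way to keep these six constraint types from being forgotten in the meeting is to treat them as a literal checklist.  The sketch below is illustrative only - the field names and example values are mine, not a standard schema:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RequirementConstraints:
    """Constraints a DM has (or has not yet) placed on a requirement.

    A value of None means the dimension has not yet been raised with the DM.
    """
    geography: Optional[str] = None     # e.g. "EU member states only"
    time_horizon: Optional[str] = None  # e.g. "next 12 months"
    org_level: Optional[str] = None     # e.g. "companies, not industries"
    processes: Optional[str] = None     # e.g. "supply chain only; ignore HR"
    people: Optional[str] = None        # e.g. "CEO and board only"
    money: Optional[str] = None         # e.g. "transactions over $1M"

    def unasked(self):
        """Return the constraint dimensions still to raise with the DM."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

c = RequirementConstraints(geography="EU member states", time_horizon="12 months")
print(c.unasked())  # dimensions not yet discussed with the DM
```

Leaving a dimension unconstrained may be the right call, but it should be a deliberate decision, not an oversight.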
6. What are the DM's priorities?

In any sort of robust requirements discussion, it is normal for many more requirements to emerge than the intelligence unit can handle.  Rather than complaining about all the work, the better approach is to get the DM to state his or her priorities.  

I have worked with hundreds of DMs and all of them understand resource constraints.  Even with quality intel analysis, I have often seen teams disappoint a DM when they have to say, "We didn't have time/money/people to get to all of your requirements."  I have never, however, seen a DM disappointed when that team can say, "We didn't have time/money/people to get to all of your requirements, but we were able to address your top 5 (or 10 or whatever) requirements."

The key to being able to address the top priorities, however, is knowing what they are.  As with all constraints, DMs are typically hesitant to prioritize their questions.  They may feel that they do not know enough to do so.  They may also be worried that the intelligence unit will put on blinders such that they will only look at the priorities and forget to keep an eye out for unexpected threats and opportunities.  

One of the keys here is to not make assumptions about priorities.  Even if the DM sends the team a numbered list, it makes sense to go back and ask, "Are these in priority order?"  Almost every time I have asked that question - forcing the DM to actively think about their priorities - I get changes to the order.  Likewise, just because a DM talks a lot about a certain issue, do not assume that it is the top priority.  It may just be the most recent thing that has come up or a new idea that the DM just had.  Asking, "We have talked about X quite a bit.  Is this where you would like us to focus?" is still important.

Priorities are an enormously powerful tool for an intelligence unit. They allow the unit to focus and to make tough decisions about what is relevant and what is not.  Don't leave your requirements meeting without them!
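To illustrate the payoff of an explicit priority order: once the DM has confirmed one, deciding what the unit can actually work becomes mechanical.  A toy sketch (the requirement texts and capacity figure are invented for illustration):

```python
def triage(requirements, capacity):
    """Given (priority, requirement) pairs confirmed by the DM, return the
    subset the unit can actually work, highest priority first.

    priority: 1 = most important.  capacity: how many the unit can handle.
    """
    ranked = sorted(requirements, key=lambda pair: pair[0])
    return [req for _, req in ranked[:capacity]]

reqs = [
    (3, "Competitor pricing moves in the EU"),
    (1, "Likely next-gen product launch date"),
    (2, "New entrants in the German market"),
    (4, "Supplier financial health"),
]
print(triage(reqs, 2))
# With capacity for only two, the unit works priorities 1 and 2 - and can
# tell the DM exactly that.
```

The hard part, of course, is not the sorting; it is getting the DM to commit to the numbers in the first place.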



Tuesday, August 14, 2018

Call for Papers: Intelligence Community Forum (ICF) - "Intelligence Support for Decision-Makers" (18-20 June 2019, Mercyhurst University)

Brécourt Academic and Mercyhurst University's Ridge College of Intelligence Studies and Applied Sciences, in association with Global War Studies, are pleased to announce the first annual Intelligence Community Forum (ICF). 

An international conference, ICF 2019 will bring together intelligence community professionals from a wide array of disciplines, including academia, government, business, and students. Paper proposals dealing with one or more of the following topics are welcome, and while "Intelligence Support for Decision-Makers" is the general focus, papers and panels covering other related topics or taking thematic approaches are equally encouraged:
National Intelligence / Business Intelligence / Cyberwarfare / Cyber Security
Military Intelligence / Naval/Maritime Intelligence / Indicators and Warnings
Intelligence and Alliance Politics / Inter-Agency Cooperation / Science & Technology / Multi-National Intelligence Sharing / Intelligence and Security Studies / History of Intelligence / Intelligence and Diplomacy / Industrial Mobilization / Intelligence Methods and Data Analysis / Intelligence and Asymmetric Warfare / Problems of Intelligence Analysis in Early Post-War Planning / Intelligence and Peacekeeping/Peacemaking / NGOs
Paper proposals must be submitted by 15 January 2019 and must include a brief (200 words or less) one-paragraph abstract and a one-page curriculum vitae. Panel proposals are welcome and should include a brief description of the panel's theme.

Additional conference details and registration information will be available soon at:
https://mercyhurst.edu/icf2019

Submissions and inquiries should be addressed to:
Sharon von Maier
E: brecourtacademicadm@gmail.com
T: 202 875 1436 (US number)

or, at Mercyhurst:
Dr. Duncan McGill
E: dmcgill@mercyhurst.edu
T: 814-824-2458

The conference proceedings will be published by Brécourt Academic.


Thursday, July 19, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (Part 2 - Nuance)

In my last post on this topic, I outlined what I considered to be a pretty good formula for a pretty good estimate:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I also talked about the difference between good WEPs (Words of Estimative Probability), bad WEPs, and best WEPs - if you are interested in all that, go back and read it.  What I intend to talk about today is the idea of nuance in an estimate.

Outline of the series so far
Let me give you an example of what I mean:  
  • The GDP of Yougaria is likely to grow.
  • The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
Both of these are estimates and both of these use good WEPs but one is obviously better than the other.  Why?  Nuance.

Mercyhurst Alum Mike Lyden made a stab at defining what we mean by "nuance" in his 2007 thesis, The Efficacy of Accelerated Analysis in Strategic Level Intelligence Estimates.  There he defined it as how many of the basic journalistic questions (Who, What, When, Why, Where, and How) the estimate addressed.  

For example, Mike would likely give the first estimate above a nuance score of 1.  It really only answers the "What" question.  I think he would give the second estimate a 3, as it appears to answer not only the "What" question but also the "When" and "How (or how much)" questions as well.  It's not a perfect system, but it makes the point.
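A back-of-the-envelope sketch of this kind of scoring (my illustration, not Lyden's actual method): the analyst tags which journalistic questions an estimate answers, and the nuance score is simply the count.

```python
# Minimal nuance-score sketch: count how many of the six journalistic
# questions an estimate addresses.  The analyst supplies the tags;
# automating that judgment is a much harder problem.
JOURNALISTIC_QUESTIONS = {"who", "what", "when", "where", "why", "how"}

def nuance_score(answered: set) -> int:
    """Return the number of distinct journalistic questions addressed."""
    return len(answered & JOURNALISTIC_QUESTIONS)

# "The GDP of Yougaria is likely to grow." -> answers only "what"
print(nuance_score({"what"}))                   # 1
# "...grow by 3-4% over the next 12 months." -> what, when, how (much)
print(nuance_score({"what", "when", "how"}))    # 3
```

The set intersection also quietly discards any tag that isn't one of the six questions, so a sloppy tagging job can't inflate the score.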

In general, I think it is obvious that more nuance is better than less.  A more nuanced estimate is more likely to be useful and it is less likely to be misinterpreted.  There are some issues that crop up and need to be addressed, however - nuances to the nuance rule, if you will.
  • What if I don't have the evidence to support a more nuanced estimate?  Look at the second estimate above.  What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth?  I get it.  You wouldn't feel comfortable putting numbers and dates to this growth.  What would you feel comfortable with?  Would you be more comfortable with an adverb ("grow moderately")?  Would you be more comfortable with a date range ("over the next 6 to 18 months")?  Is there a way to add more nuance in any form with which you can still be comfortable as an analyst?  The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind.  If, in the end, all you are comfortable with is "The economy is likely to grow" then say that.  I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
  • What if the requirement does not demand a nuanced estimate?  What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow?  He/She doesn't need to know any more to make his/her decision.  In fact, spending time and effort to add nuance would actually be counterproductive.  In this case, there is no need to add nuance.  Answer the question and move on.  That said, my experience suggests that this condition is rather more rare than not.  Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced.  Whether this is the case or not is something that should be worked out in the requirements process.  I am currently writing a three part series on this and you can find Part 1 here and Part 2 here.  Part 3 will have to wait until a little later in the summer.
  • What if all this nuance makes my estimate sound clunky?  So, yeah.  An estimate with six clauses in it is going to be technically accurate and very nuanced but sound as clunky and awkward as a sentence can sound.  Well-written estimates fall at the intersection of good estimative practice and good grammar.  You can't sacrifice either, which is why they can be very hard to craft.  The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences.  In my next post on this, where I will talk about "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.

Monday, July 16, 2018

Farengar Secret-Fire Has A Quest For You! Or What Video Games Can Teach Us About Virtual Intel Requirements

A couple of weeks ago, I wrote a post about the 3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker.  That post was designed for intel professionals who have the luxury of being able to sit down with the decisionmakers they support and have a conversation with them about what it is they want from their intelligence unit.  I also stated that doing this in a virtual environment or on an automated requirements management system like COLISEUM was both more difficult and something I would discuss in the future.

Well, today is your lucky day!  The future is here!


When I think about who does requirements best in a virtual environment, I think about video games.  Particularly, I think about massively multi-player online role-playing games (MMORPGs for short).  Very, very specifically, I think about the questing systems that are standard fare in virtually all of these types of games.  

Quests are the requirements statements of games.  I have included an example of a quest below (it is from a game called Skyrim which is awesome and highly recommended).  These quests differ from intel requirements in that almost all of them are operationally focused (What do we need to do to accomplish our mission?) instead of intelligence focused (What do we need to know about the other guy to accomplish our mission?).  That said, there are still a number of things we can learn from a well-formulated quest that will make intel requirements in a virtual environment easier to craft and to understand.

Retrieved from Immersive Questing
Video game designers know they have to get quests right the first time.  They don't have an opportunity to talk to the players outside the game, so all the necessary information needs to be in the quest itself.  On the other hand, they have to make the quest seem realistic.  Failing to maintain this balance runs the risk of creating an unplayable game.  As a result, video game designers have developed a number of conventions that allow quests to sound real but be complete.  Intel has the "real" part down, so making sure the requirement is complete is what matters.  In this respect, the final version of a good virtual intel requirement bears a remarkable resemblance to the final version of a good quest.  Here are the specifics:

  • They both provide background.  Why am I doing this?  What is the context for this quest?  In video games, putting the quest in context allows the story to unfold.  In intelligence work, this context allows the intelligence professional to better understand the decisionmaker's intent.  This, in turn, allows the intelligence professional to have a better understanding of the kinds of information and estimates that will prove most useful.
  • They both define terms.  In the quest above, I am to look for a Dragonstone.  What is a Dragonstone?  The quest defines that for me.  In intelligence work, agreeing on definitions of terms (particularly common terms) is incredibly helpful.  For example, you get a request to do a stability study on Ghana.  What term needs to be defined before you go ahead?  Stability.  We do this exercise every year in our intro classes.  There are multiple definitions of stability out there.  Which one is most appropriate for this decisionmaker is a critical question to ask and answer before proceeding.
  • They both use terms consistently.  If I encounter another quest asking for me to find a Dragonstone, I can count on it being the same thing I am looking for in this quest.  Likewise, in an intelligence requirement, if I define a term in a certain way in one place, I will use that term - not what I think is a synonym, no matter how reasonable it sounds to me - consistently throughout the requirement.  
  • They both often come in standard formats.  All video game players are familiar with a variety of standard quest formats such as the Fetch Quest (like the one above) where the task is to go, get something, and bring it back.  Intelligence requirements also come in more-or-less standard forms such as requests for descriptive or estimative intelligence.  Categorizing requests for intelligence and then studying them for similarities should allow an intelligence unit to develop a list of useful questions to ask based simply on the type of request it is.   
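As a sketch only - this is a hypothetical structure of my own invention, not COLISEUM or any real requirements system - the four conventions above could be captured in a requirement record that checks itself for completeness before it ever reaches an analyst:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualRequirement:
    """Hypothetical requirement record modeled on quest conventions."""
    question: str
    background: str = ""          # decisionmaker's intent and context
    definitions: dict = field(default_factory=dict)  # term -> agreed meaning
    request_type: str = ""        # e.g. "descriptive", "estimative"

    def missing_elements(self) -> list:
        """List which quest-style elements still need to be filled in."""
        missing = []
        if not self.background:
            missing.append("background")
        if not self.definitions:
            missing.append("definitions")
        if not self.request_type:
            missing.append("request_type")
        return missing

req = VirtualRequirement(question="How stable is Ghana?")
print(req.missing_elements())  # ['background', 'definitions', 'request_type']
```

Run against the Ghana example, the check immediately flags that "stability" has not been defined - exactly the kind of breakdown a face-to-face conversation would normally catch.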

Requirements statements, whether managed in person or virtually, are almost always going to start out messy.  Without the advantage of a back-and-forth, personal conversation, the virtual requirements process has a greater potential, however, for breakdown.  Thinking of the requirement as a quest allows intelligence professionals to re-frame the process and focus on the essential elements of the requirement and, perhaps,  anticipate and address predictable points of potential failure in advance.

Look for the final part of this series later this summer when I talk about all the things you need to think about in the middle of requirements discussion with a decisionmaker!

Tuesday, July 10, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate

At the top end of the analytic art sits the estimate.  While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge."  Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.

How to Write a Mindnumbingly Dogmatic (but Surprisingly Effective) Estimate (Outline)
Estimates are like pizza, though.  There are many different ways to make them and many of those ways are good.  However, with our young analysts, just starting out in the Mercyhurst program, we try to teach them one good, solid, never fail way to write an estimate.  You can sort of think of it as the pepperoni pizza of estimates.

Here's the formula:

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
I'm going to spend the next couple of posts breaking this down.  Let's start with what makes a good Word of Estimative Probability - a WEP.   Note:  Linguistic experts call these Verbal Probability Expressions and if you want to dive into the literature - and there's a lot - you should use this phrase to search for it.  

WEPs should first be distinguished from words of certainty.  Words of certainty, such as "will" and "won't", typically don't belong in intelligence estimates.  These words presume that the analyst has seen the future and can speak with absolute conviction about it.  Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.


Notice I also said "good" WEPs, though.  A good WEP is one that effectively communicates a range of probabilities, and a bad WEP is one that doesn't.  Examples?  Sure!  Bad WEPs are easy to spot:  "Possibly", "could", and "might" are all bad WEPs.  They communicate ranges of probability so broad that they are useless in decisionmaking.  They usually only serve to add uncertainty rather than reduce it in the minds of decisionmakers.  You can test this yourself.  Construct an estimate using "possible", such as "It is possible that Turkey will invade Syria this year."  Then ask people to rank the likelihood of this statement on a scale of 1-100.  Ask enough people and you will get everything from 1 to 100.  This is a bad WEP.
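The 1-to-100 test above is easy to sketch in code.  The ratings below are invented for illustration, not real survey data; the point is the mechanism - a bad WEP produces a huge spread of interpretations, a good one a narrow band:

```python
# Hypothetical elicited rankings (1-100) for the same estimate phrased
# with two different WEPs.  These numbers are made up to illustrate the
# test, not drawn from any actual survey.
possible_ratings = [3, 22, 41, 58, 77, 96]   # "It is possible that..."
remote_ratings   = [4, 6, 8, 9, 11, 13]      # "There is a remote chance..."

def spread(ratings):
    """Width of the range of responses: a crude measure of how much
    uncertainty the WEP leaves in listeners' minds."""
    return max(ratings) - min(ratings)

print(spread(possible_ratings))  # 93 -> bad WEP: tells the DM almost nothing
print(spread(remote_ratings))    # 9  -> good WEP: a bounded range
```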


Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities.  Take the WEP "remote" for example.  If I said "There is a remote chance that Turkey will invade Syria this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.



The Kesselman List
Can we kick this whole WEP thing up a notch?  Yes, we can.  It turns out that there are not only "good" WEPs but there are "best" WEPs.  That is, there are some good WEPs that communicate ranges of probabilities better than others.  Here at Mercyhurst, we use the Kesselman List (see above).  Alumna Rachel Kesselman wrote her thesis on this topic a million years ago (approx.).  She read all of the literature then available and came up with a list of words, based on that literature, that were best defined (i.e., had the tightest ranges of probabilities).  The US National Security Community has its own list but we like Rachel's better.  I have written about this elsewhere and you can even read Rachel's thesis and judge for yourself.  We think the Kesselman List has better evidence to support it.  That's why we use it.  We're just that way.
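The mechanism behind any such list is easy to sketch.  Note that the probability ranges below are placeholders I made up for illustration - they are NOT the actual Kesselman List values (read Rachel's thesis for those).  What matters is the structure: each approved WEP is explicitly tied to a numerical range:

```python
# ILLUSTRATIVE ONLY: placeholder ranges, not the real Kesselman List.
# The structure is the point: word -> (low, high) probability bounds.
WEP_RANGES = {
    "remote":        (0.01, 0.10),
    "unlikely":      (0.10, 0.30),
    "even chance":   (0.45, 0.55),
    "likely":        (0.60, 0.80),
    "highly likely": (0.80, 0.95),
}

def wep_for(p: float) -> str:
    """Return the first approved WEP whose range contains probability p."""
    for word, (lo, hi) in WEP_RANGES.items():
        if lo <= p <= hi:
            return word
    return "no approved WEP covers this probability"

print(wep_for(0.07))   # remote
print(wep_for(0.70))   # likely
```

Notice, too, that this is a coarse grading scheme: two analysts arguing over 81% versus 83% both land on "highly likely" and can move on.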

Before I finish, let me say a word about numbers.  It is entirely reasonable and, in fact, may well be preferable, to use numbers to communicate a range of probabilities rather than words.  In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities.  Why then, do I consider it the current best practice to use words?  There are four reasons:

  • Tradition.  This is the way the US National Security Community does it.  While we don't ignore theory, the Mercyhurst program is an applied program.  It seems to make sense, then, to start here but to teach the alternatives as well.  That is what we do.  
  • Anchoring bias.  Numbers have a powerful place in our minds.  As soon as you start linking notoriously squishy intelligence estimates to numbers you run the risk of triggering this bias.  Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean.  Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
  • Cost of increasing accuracy vs the benefit of increasing accuracy.  How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening?  Imagine that the issue under discussion is really important to you.  How long?  What if it were 79% vs 83%?  57% vs 83%?  35% vs 83%?  It probably depends on what "really important" means to you and how much time you have.  The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit.  This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors,  fundamentally irreducible.  
  • Buy-in.  Using words, even well defined words, is what is known as a "coarse grading" system.  We are surrounded by these systems.  Our traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test.  I have just begun to dig into the literature on coarse grading but one of the more interesting things I have found is that it seems to encourage buy-in.  We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on.  This seems particularly important in the context of intelligence as a decision-support activity where the entire team (not just the analysts) has to take some form of action based on the estimate.  
I'll talk about the rest of the "formula" later in the summer!

Monday, July 2, 2018

3 Things You Must Know Before You Discuss Intelligence Requirements With A Decisionmaker

One of the most important tasks of virtually all intelligence professionals is to sit down with their organization's decisionmakers and get meaningful intelligence requirements from them.  Requirements that are too vague or poorly designed make the intelligence professional's life more difficult.  More importantly, bad requirements often lead to analysis that fails to meet the decisionmaker's real needs and can, in turn, lead to the organization's failure.

All that makes perfect sense, right?  Getting a good answer to a question implies that the question is clear, that I understand the question and that I have the ability to answer it.  If I ask you, "What time is the movie?" then you are well within your rights to ask me, "Which movie?"  Good requirements emerge from a conversation; they aren't dictated through a megaphone.


Outline of this post (Trying something new here.  Let me know in the comments if you like it!)

Having this kind of requirements discussion is much more difficult in the context of intelligence, however, and not only because the questions are usually much more complicated.  There are a number of reasons for these challenges:
  • Chain of command.  Typically, intel officers work for the decisionmaker.  Even with the best of DMs, there is often a real reticence to poke at the requirement, to make suggestions about how to make it better, or to question whether it is worth addressing at all.  While it is true that pushing the DM for clarity on his/her requirements statements is "just part of the job", it does not make the situation any less challenging. 
  • Lack of understanding about intel.  Most decisionmakers rise up through operational channels.  This means that decisionmakers are usually much more comfortable with operational questions (i.e., What are we going to do with the resources under our control?) than with intelligence questions (i.e., What is happening that is critical to our success or failure but outside of our control?).  Even in the national security realm, where the intelligence function is typically much better understood than in law enforcement or corporations, there is often a lack of understanding or even a misunderstanding of how intelligence supports the organization's decisionmaking process.  
  • Ops/Intel Conflation.  While there are good reasons to keep many operational discussions and intelligence discussions separate, that is not the way the decisionmaker is likely to think.  Responsible for integrating intelligence analysis with operational capabilities and constraints, decisionmakers are likely to conflate the two as they talk about requirements.  It is up to intelligence professionals to untangle them in such a way that they have a clear statement of their requirements. 
  • Lack of decisionmaker clarity.  Decisionmakers don't know what they don't know and good decisionmakers worry about that - a lot.  Even when decisionmakers fully understand intel, it is possible for them to have only a vague notion of what they want or need.  Particularly with strategic-level concerns, good DMs will be constantly asking themselves, "What questions should I be asking right now?" and worrying about wasting time and energy chasing an irrelevant question down a rabbit hole.
With this as background there are three essential questions that intelligence professionals should ask and answer before they begin a discussion about requirements:

  1. What does the organization do?  At first glance this seems ridiculous.  How could you work for an organization and not know what it does?  You'd be surprised.  Even small organizations often appear to do one thing but actually spend much of their time or make most of their money doing something entirely different.  When I was younger, for example, I worked for a company called Hargrove's Office Supplies.  You would be excused for thinking that we made our money selling office supplies.  In fact, Hargrove's made most of its money in those days selling and servicing business machines - a very different kind of business.  This problem becomes much more acute in large organizations with many moving parts.  It is worth the intelligence professional's time to get to know the organization they are supporting in some detail - everything from strategic plans to tactical practices.  While the intelligence professional will never be as knowledgeable as the operators running the organization, the more intel professionals know about the goals and purposes of an organization, the more productive the requirements process will be.
  2. What is the current strategy and situation of the organization?  If the first question is what does the organization do, then the second question should be "How does it do it?"  All organizations have a strategy (even if it is only an implicit one) and it is worth it to take time to consider what that strategy might be.  It is also worth thinking about the current situation in which the organization finds itself.  Is the organization winning or losing?  Successful and growing or failing and losing ground against its competitors? While the situation of the organization should not matter in terms of the analysis - it is what it is - understanding how an organization is doing helps understand where a requirement is coming from and gives insight into how to focus the answer.
  3. Who is the decisionmaker?  This is another simple question with a complicated answer.  It is tempting to believe that the person or organization asking the question is the one who wants the answer.  That is not always the case.  Oftentimes, the real decisionmaker is one or more levels removed from the person asking the question of the intelligence unit.  In this case, it makes sense for the intelligence professionals to ask themselves what the real decisionmaker wants.  In the accelerating pace of the intel world, it is entirely possible that the requirement has gone through an elaborate version of the kid's game Telephone and now bears no relationship to what the real decisionmaker wants.  Even if it does, it is still worth thinking about the kind of answer that will meet the needs of not only the gatekeeper but also of the decisionmaker behind the gate.  Finally, even if there is no gatekeeper, it is worth thinking about others who might not have asked the question but will be able to see the answer.  Almost nothing gets done in a vacuum.  Even the most siloed of programs often have multiple members with different intelligence needs.  It is important, therefore, to consider who these second and third level audiences might be before crafting the requirement in order to provide clarity and prevent confusion and mission creep.
All this advice is great for when intel professionals have the luxury of actually meeting with the decisionmakers they support.  How do you deal with a situation that is entirely virtual or managed through an automated requirements management system like COLISEUM?  Don't worry, we will get to all of that later in the summer!

Thursday, June 21, 2018

What Do You Want In A Cyber Self Defense Course?

Your company, agency, whatever has hired an intern from the Mercyhurst intel program who has just completed their freshman year.  What do you want them to know about cyber?


That is one of the questions I will be wrestling with this summer.  I am teaching a new course in the fall called "Cyber Self Defense".  Nobody told me I had to teach this course.  Nope!  I volunteered (!) to teach this course.

You see, we have consistently noted that many of our first year students come to us with a pretty poor understanding of cyber related risks and how to minimize them.  The intent of this course is not to turn them all into white hat hackers.  All I really hope to do in the time I have is to make them into knowledgeable users.

It's like the old joke about the two guys and the bear.  The first guy says, "We will never outrun that bear!"  And the second guy goes, "I don't have to outrun the bear.  I just have to outrun you!"  I want to create users who can, at least, outrun the other guy.

We wanted to teach this class at the freshman level because that is where we think it will be most useful.  It gives the students three more years to build on, or at least use, these skills, and an educated user base will only help make our own network more secure.  If this first class goes well, I think I would recommend that it become a requirement for all intel students.

As the obvious wonderfulness of this offering became increasingly apparent, the question naturally arose, "Who will teach this magical, extraordinary course?"  Those of you of a certain age will remember the old Life cereal commercial lovingly preserved by YouTube (above).  Suffice it to say, I get to play the role of "Mikey" in the 2018 remake...

So I throw it out to you, Gentle Readers: what skills would you expect, and what abilities would you want to see, in that 18-year-old intern you just hired for the summer?  I am looking for tools, tips, tricks, websites, sources, absolutely-must-cover topics, don't-waste-your-time topics, and everything in between.  Free software and resources will be most appreciated, but making students pay for something that gives a big bang for the buck is also OK.

Here are a few details about the class to help you think through the problem.  It is a MWF class; each session lasts 50 minutes and the course runs for 15 weeks.  I have access to a computer lab but I think I want the class to mostly be about their own devices - specifically cell phones and laptops (which virtually all students have).  We don't have a standard when it comes to these devices so we will likely have a mix of Apple and Windows, Android and iOS (with Windows and Android machines likely being in the majority).

Here are my initial thoughts:
  • First couple of weeks:  Focus on cleaning up and maintaining their own devices.  My assumption is that at least some of these students will come in with malware or viruses on their system already. Almost all will come in with some sort of factory installed bloatware and I doubt if any of their browser caches have ever been emptied.  The goal here would be to clean all of this up and to teach them how to maintain their devices.
  • Next couple of weeks.  Focus on likely attack profiles and how to deal with situations where some sort of hack is more likely (e.g. coffee shops and airports).  Things like phishing and social engineering would get covered here.
  • Mid course.  Focus on privacy.  Talk about how info on the web gets passed around and used.  Talk about how to protect yourself from oversharing and what to do if you do get hacked.
  • Next couple of weeks.  Focus on advanced topics (e.g. Proxy servers, VPNs, Linux, etc).  Should they build their own computer?  
  • Final couple of weeks.  Talk about how to diagnose/help others with problems.  One of the most powerful tests of learning is seeing if the student can transfer their knowledge to new situations.  I want this kind of thing to be part of the final exam somehow.
I want this to be a project-based course that gives students lots of hands-on time with their own devices but also gives them enough conceptual knowledge to be able to integrate new stuff as it comes along. 

I have a bunch of other half-formed thoughts but I welcome your input and feedback first.  You can either drop it in the comments below (or in any of the social media where this will be posted) or you can just send me a note at kwheaton at mercyhurst dot edu.

Many thanks, hive mind!  Many thanks!