
Thursday, July 25, 2019

Why The Next "Age of Intelligence" Scares The Bejesus Out Of Me

A little over a month ago, I wrote a post titled How To Teach 2500 Years Of Intelligence History In About An Hour.   The goal of that post was to explain how I taught the history of intelligence to new students. Included in that article was the picture below:


I am not going to cover all the details of the "Ages of Intelligence" approach again (you can see those at this link), but the basic idea is that there are four pretty clear ages.  In addition, I made the case that, driven by ever-changing technology as well as corresponding societal changes, the length of these ages is getting logarithmically shorter. 

Almost as an afterthought, I noted that the trend line formed by these ever-shortening ages was approaching the X-intercept.  In other words, the time between "ages" was approaching zero.  In fact, I noted (glibly and mostly for effect) that we could well be in a new "Age of Intelligence" right now and not know it.

When I publish a piece like the one mentioned above, I usually feel good about it for about ten minutes.  After that, I start to think about all the stuff I could have said or where to go next with the topic.  In this case, the next step was obvious--a little speculative thinking about what comes, well, now.  What I saw was not pretty (and, to be frank, a little frightening).

Looking out 10 years, I see five hypotheses (the base rate for each, therefore, is 20%).  I will indicate what I think are the arguments for and against each hypothesis and then how I would adjust the probability from the base rate.  
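For the probabilistically inclined, one way to keep these adjustments honest is to treat the five hypotheses as a single, mutually exclusive and exhaustive distribution.  Here is a minimal Python sketch of that bookkeeping (it uses the final numbers from the sections below; the dictionary is just my own scratchpad, not a formal method):

```python
# Five mutually exclusive, exhaustive hypotheses start at a 20% base rate.
# Any adjustment up must be offset by an adjustment down somewhere else,
# so the distribution always sums to 100%.
base_rate = 100 / 5  # 20%

adjusted = {
    "Age of Anarchy": 17,
    "Age of Irrelevance": 7,
    "Age of Oligarchy": 27,
    "Age of Ubiquity": 12,
    "Blindside Hypothesis": 37,
}

assert sum(adjusted.values()) == 100, "Hypotheses must remain exhaustive"
for name, p in adjusted.items():
    print(f"{name}: {p}% ({p - base_rate:+.0f} from the base rate)")
```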

The Age of Anarchy  
No one knows what is going on, no one knows what to do about it.  Technology just keeps changing and improving at an ever increasing pace, and no one person or even one organization (no matter how large) can keep up with it.  Strategic intelligence is worthless and even tactical intelligence has only limited utility.

Arguments for:  This is certainly what life feels like right now for many people.  Dylan Moran's rant probably captures this hypothesis far better than I could:




Arguments against:   This is a form of the same argument that has been made against every technological advance since the Ancient Greeks (Socrates, for example, was against writing because it "will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing..."  Replace "writing" with "books" or "computers" or "cell phones" and you have another variation on this Luddite theme).  In short, every age has had to adjust to the risks and rewards new technologies bring.  The next age of intelligence is unlikely to be new in this respect.

Probability:  17%

Age of Irrelevance
Artificial intelligence (AI) takes over the world.  The algorithms get so good at understanding and predicting that we increasingly turn over both our intelligence production and our decisionmaking to the computers.  In this hypothesis, there is still a need to know the enemy; there is just no longer a need for us to do all those tedious calculations in our tents.  The collection of intelligence information and the conduct of intelligence analysis become an entirely automated process.

Arguments for:  Even a cursory look at the Progress in Artificial Intelligence article in Wikipedia suggests two things.  First, an increasing number of complex activities where humans used to be the best in the world are falling victim to AI's steady march.  Second, humans almost always underestimate just how quickly machines will catch up to them.  Efforts by the growing number of surveillance states will only serve to increase the pace, as these states push their populations in the direction of the biases inherent in the programming or the data.  

Arguments against:  AI may be the future, but not now and certainly not in the next ten years.  Four polls of researchers done in 2012-13 indicated that there was only a 50% chance of a technological singularity--where a general AI is as smart as a human--by 2040-2050.  The technology gurus at Gartner also estimated in 2018 that general artificial intelligence is just now beginning to climb the "hype cycle" of emerging technologies and is likely more than 10 years away.  The odds that this hypothesis becomes reality go up after ten years, however.

Probability:  7%

Age of Oligarchy
Zuckerberg, Gates, Nadella, Li, Bezos, Musk, Ma--their names are already household words.  Regular Joes and Janes (like you and me) get run over, while these savvy technogeeks rule the world.  If you ain't part of this new Illuminati, you ain't $h!t.  Much like the Age of Concentration, intelligence efforts will increasingly focus on these oligarchs and their businesses while traditional state and power issues take a back seat (See Snow Crash).

Arguments for:  92% of all searches go through Google, 47% of all online sales go through Amazon, 88% of all desktop and laptop computers run Windows.  These and other companies maintain near-monopoly positions within their industries.  By definition, the oligarchy already exists.

Arguments against:  Desktops and laptops may run on Windows, but the internet and virtually all supercomputers--that is, the future--run on Linux-based systems.  Browsers like Brave and extensions like Privacy Badger will also make it more difficult for these companies to profit from their monopoly positions.  In addition, an increasing public awareness of the privacy issues associated with placing so much power in these companies with so little oversight will expand calls for scrutiny and regulation of these businesses and their leaders.

Probability:  27%

Age of Ubiquity
We start to focus on our digital literacy skills.  We figure out how to spot liars and fakes and how to reward honest news and reviews.   We teach this to our children.  We reinforce and support good journalistic ethics and punish those who abandon these standards.  We all get smart.  We all become--have to become--intelligence analysts.

Arguments for:   Millennials and Gen Z are skeptical about the motives of big business and are abandoning traditional social media platforms in record numbers.  They are already digital natives, unafraid of technology and well aware of its risks and rewards.  These generations will either beat the system or disrupt it with new technologies.

Arguments against:   Human nature.  Hundreds of books and articles have been written in the last decade on how powerful the biases and heuristics hardwired into our brains actually are.  We are programmed to seek the easy way out, to value convenience over truth, and to deceive ourselves.  Those who do happen to figure out how to beat the system or disrupt it are likely to hold onto that info for their own economic gain, not disperse it to the masses.

Probability:  12%

Blindside Hypothesis
Something else, radically different from the approaches above, is going to happen. 

Arguments for:   First, this whole darn article is premised on the idea that the "Ages of Intelligence" approach is legit and not just a clever pedagogical trick.  Furthermore, while there are lots of good, thoughtful sources regarding the future, many of them, as you can see above, contradict one another.  Beyond that:

  • This is a complex problem, and I generated this analysis on my own with little consultation with other experts.  
  • Complex problems have "predictive horizons"--places beyond which we cannot see--where we are essentially saying, "There is a 50% chance of x happening, plus or minus 50%."
  • I have been thinking about this on and off for a few weeks but have hardly put in the massive quantities of time I should to be able to make these kinds of broad assessments with any confidence.  
  • The lightweight pro v. con form of my discussion adds only a soupçon of structure to my thinking.    
  • Finally, humans have a terrible track record of predicting disruption and I am decidedly human.  
Bottom line:  The odds are good that I am missing something.

Arguments against:  What?  What am I missing?  What reasonable hypothesis about the future, broadly defined, doesn't fall into one of the categories above? (Hint:  Leave your answer in the comments!)

Probability:  37% 

Why This Scares Me
Other than the rather small probability that we all wake up one morning and become the critical information collectors and analysts this most recent age seems to demand of us, there aren't any good outcomes.   I don't really want chaos, computers or a handful of profit-motivated individuals to control my digital and, as a result, non-digital life.  I also fully realize that, in some sense, this is not a new revelation.  Other writers, far more eloquent and informed than I, have been making some variation of this argument for years.  

This time, however, it is more personal.  Intelligence leads operations.  Understanding the world outside your organization's control drives how you use the resources under your control.  My new employer is the US Army and the US Army looks very different in the next ten years depending on which of these hypotheses becomes fact. 

Monday, June 24, 2019

EPIC 2014: The Best/Worst Forecast Ever Made?

The eight-minute film, EPIC 2014, made a huge impact on me when it was released in 2004.  If you have seen it before, it's worth watching again.  If you haven't, let me set it up for you before you click the play button below.   


Put together by Robin Sloan and Matt Thompson way back in 2004, EPIC 2014 talked about the media landscape in 2014 as if it had already happened.  In other words, they invented a "Museum of Media History", and then pretended, in 2004, to look backward from 2014 as a way of exploring how they thought the media landscape would change from 2004 to 2014.  Watch it now; it will all make sense when you do:

 
In some ways, this is the worst set of predictions ever made.  Almost none of the point predictions are correct.  Google never merged with Amazon, Microsoft did not buy Friendster, The New York Times did not become a print-only publication for the elderly, and Sony's e-paper is not cheaper than real paper (It costs 700 bucks and gets an average of just 3 stars (on Sony's site!)).

Sloan and Thompson did foresee Google's suite of online software services but did not really anticipate competition from the likes of Facebook, Twitter, LinkedIn, YouTube or any of a host of other social media services that have come to dominate the last 15 years.

None of that seemed particularly important to me, however.  It felt like just a clever way to get my attention (and it worked!).  The important part of the piece was summed up near the end instead.  EPIC, Sloan and Thompson's name for the monopolized media landscape they saw by 2014, is: 
"...at its best and edited for the savviest readers, a summary of the world—deeper, broader and more nuanced than anything ever available before ... but at its worst, and for too many, EPIC is merely a collection of trivia, much of it untrue, all of it narrow, shallow, and sensational.  But EPIC is what we wanted, it is what we chose, and its commercial success preempted any discussions of media and democracy or journalistic ethics."
Swap out the word "EPIC" for the word "internet" and that still seems to me to be one of the best long-range forecasts I've ever seen.   You could throw that paragraph up on almost any slide describing the state of the media landscape today, and most of the audience would likely agree.  The fact that Sloan and Thompson were able to see it coming way back in 2004 deserves mad props.

It also causes me to wonder about the generalizability of the lessons learned from forecasting studies based on resolvable questions.  Resolvable questions (like "Will Google and Amazon merge by December 31, 2014?") are fairly easy to study (easier, anyway).  Questions which don't resolve to binary, yes/no answers (like "What will the media landscape look like in 2014?") are much harder to study but also seem to be more important.  

We have learned a lot about forecasting and forecasting ability over the last 15 years by studying how people answer resolvable questions.  That's good.  We hadn't done that before, and we should have.  

Sloan and Thompson seemed to be doing something else, however.  They weren't just adding up the results of a bunch of resolvable questions to see deeper into the future.  There seems to me to be a different process involved.  I'm not sure how to define it.  I am not even sure how to study it.  I do think that, until we can, we should be hesitant to over-apply the results of any study to real world analysis and analytic processes.

Tuesday, June 18, 2019

What Is #COOLINT?

Apollo 11 in Real-Time is the very definition of cool.
HUMINT, SIGINT, OSINT--the specialized language of intelligence is all ate up with acronyms for the various collection disciplines.  Intel wags have (for at least the last 40 years I have been doing this stuff) come up with a variety of clever (?) plays on this formulation.  For example:  RUMINT = Intelligence founded on rumors alone.  DUMBINT = Intelligence too stupid to believe.

COOLINT is usually reserved for something that is, well, cool but might not be particularly relevant to the question at hand.  You want to show COOLINT to other people.  You KNOW they will be interested in it.  It's the clickbait of the intel world.

A great example of COOLINT is the Apollo 11 In Real-time website (the mobile version is OK, but you will want to look at it on your PC or Mac.  Trust me).  In fact, I used the hashtag "#COOLINT" when I tweeted out this site this morning.  The guys who put this amazing site together have mashed up all of the audio and video, all of the commentary, and all of the pictures into a single website that allows you to follow along with the mission from T - 1 minute to splashdown.  It doesn't really have anything to do with intelligence, but, to a spacegeek like me, the Apollo 11 in Real-time website is what you'd find next to the word "cool" in the dictionary.

I intend to argue here, however, that there is a more formal definition of COOLINT, one that is actually useful in analytic reporting.  To do this, I want to first briefly explore the concepts of "relevant" and "interesting."

One of the hallmarks of good intelligence analysis is that it be relevant to the decisionmaker(s) being supported.  ICD 203 makes this mandatory for all US national security intel analysts but, even without the regulation, relevance has long been the standard in intel tradecraft.

"Interesting" is a term which gets significantly less attention in intel circles.  There is no requirement that good intel be interesting.  It is ridiculous to think that good intel should meet the same standards as a good action movie or even a good documentary.  That said, if I have two pieces of information that convey the same basic, relevant facts and one is "interesting" and other is not (for example, 500 words of statistical text vs. one chart), I would be a bit of a fool not to use the interesting one.  Intel analysts don't just have a responsibility to perform the analysis, they also have a responsibility to communicate it to the decisionmaker they are supporting.  "Interesting" is clearly less important than "relevant" but, in order to communicate the analysis effectively, something that has to be considered.

With all this in mind, it is possible to construct a matrix to help an analyst think about the kinds of information they have available and where it all should go in their analytic reports or briefings:
"Interesting" vs. "Relevant" in analytic reporting
Interesting and relevant information should always be considered for use in a report or brief.  Length or time limits might preclude it, but if it meets both criteria, and particularly if it is a linchpin or a driver of the analysis, this kind of info highly likely belongs in the report.

Relevant information which is not particularly interesting might have to go in the report--it may be too relevant not to include.  However, there are many ways to get this kind of info in the report or brief.  Depending on the info's overall importance to the analysis, it might be possible to include it in a footnote, annex, or backup slide instead of cluttering up the main body of the analysis.

Information that is interesting but not relevant is COOLINT.  It is that neat little historical anecdote that has nothing to do with the problem, or that very cool image that doesn't really explain anything at all.  The temptation to get this stuff into the report or brief is great.  I have seen analysts twist themselves into knots to try to get a particular piece of COOLINT into a briefing or report.  Don't do it.  Put it in a footnote or an annex if you have to, and hope the decisionmaker asks you a question where your answer can start with, "As it so happens..."

Info which is not interesting and not relevant needs to be left out of the report.  I hope this goes without saying.
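If it helps to see the matrix as code, here is a minimal sketch that reduces the four quadrants above to a lookup (my own paraphrase of the guidance, not formal tradecraft; the caveats below still apply):

```python
# The "interesting vs. relevant" matrix as a simple decision rule.
# The recommendations paraphrase the four cases discussed above.
def placement(interesting: bool, relevant: bool) -> str:
    if relevant and interesting:
        return "Main body of the report or brief"
    if relevant:
        return "Main body if essential; otherwise footnote, annex, or backup slide"
    if interesting:
        return "COOLINT: footnote or annex at most--resist the temptation"
    return "Leave it out"

print(placement(interesting=True, relevant=False))  # the COOLINT case
```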

Three caveats to this way of thinking about info.  First, I have presented this as if the decision is binary--info is either relevant OR irrelevant, interesting OR uninteresting.  That isn't really how it works.  It is probably better to think of these terms as if they were on a scale that weighs both criteria.  It is possible, in other words, to be "kind of interesting" or "really relevant."

The second caveat is that both the terms interesting and relevant should be defined in terms of the decisionmaker and the intelligence requirement.  Relevancy, in other words, is relevancy to the question; "interesting", on the other hand, is about communication.  What is interesting to one decisionmaker might not be to another.

Finally, if you use this at all, use it as a rule of thumb, not as a law.  There are always exceptions to these kinds of models.  

Monday, June 10, 2019

How To Teach 2500 Years Of Intelligence History In About An Hour

Original version of the Art of War by Sun Tzu
As with most survey courses, Introduction to Intelligence Studies has a ton of information that it needs to cover--all of it an inch deep and a mile wide.  One of the most difficult parts of the syllabus to teach, however, is intelligence history.

Whether you start with the Bible or, as I do, with Chapter 13 of The Art Of War, you still have 2500 years of history to cover and typically about an hour long class to do it.  Don't get me wrong.  I think the history of intelligence ought to be at least a full course in any intelligence studies curriculum.  The truth is, though, you just don't have time to do it justice in a typical Intel 101 course.

I was confronted with this exact problem last year.  I had not taught first-year students for years, and when the time came in the syllabus to introduce these students to intel history, I was at a bit of a loss.  Some professors gloss over ancient history and start with the National Security Act of 1947.  Some compress it even more and focus entirely on post Cold War intelligence history.  Others take a more expansive view and select interesting stories from different periods of time to illustrate the general role of intelligence across history.  

All of these approaches are legitimate given the topic and the time constraints.  I wanted, however, to try to make the history of intel a bit more manageable for students new to the discipline.  I hit on an approach that makes sense to me and seemed to work well with the students.  I call it the Four Ages Of Intelligence.

The first age I call the Age of Concentration.  In ancient times, power and knowledge were concentrated in the hands of a relatively small number of people.  The king or queen, their generals, and the small number of officers and courtiers who could read or write were typically both the originators and targets of intelligence efforts.  These efforts, in turn, were often guided by the most senior people in a government.  Sun Tzu noted, "Hence it is that which none in the whole army are more intimate relations to be maintained than with spies."  George Washington, as well, was famous not only as a general but also as a spymaster.  

The Age of Concentration lasted, in my mind, from earliest times to about the early 1800's.  The nature of warfare began to change rapidly after the American and French Revolutions. 
Washington and the capture of the Hessians at Trenton.  
Large citizen armies and significant technological advances (railroads, telegraphs, photography, balloons!) made the process of running spy rings and collating and analyzing the information they collected too large for any one person or even a small group of people to manage.  


Enter the Age of Professionalization.  The 1800's saw the rise of the staff system and the modern civil service to help generals and leaders manage all the things these more modern militaries and governments had to do.  Of course, there had always been courtiers and others to do the king's business but now there was a need for a large number of professionals to deal with the ever-growing complexities of society.  The need for more professionals, in turn, demanded standardized processes that could be taught.  

For me, the Age of Professionalization lasted until the end of World War II when the Age of Institutionalization began.  Governments, particularly the US Government, began to see the need for permanent and relatively large intelligence organizations as a fundamental part of government.   
Logos of the CIA and KGB
Staffs and budgets grew.  Many organizations came (more or less) out of the shadows.  CIA, KGB, MI5 (and 6), ISI, and MSS all became well known abbreviations for intelligence agencies.  The need for intelligence-like collection and analysis of information became obvious in other areas.  Law enforcement agencies, businesses, and even international organizations started to develop "intelligence units" within their organizational structures.  


All of this lasted until about 1994 when, with the advent of the World Wide Web, the Age of Democratization began.   Seven years ago (!), I wrote an article called "Top Five Things Only Spies Used To Do But Everyone Does Now."  I talked about a whole bunch of things, like using sophisticated ciphers to encrypt data and examining detailed satellite photos, that used to be the purview of spies and spies alone.  Since then, it has only gotten worse.  Massive internet-based deception operations and the rise of deepfake technology are turning us all into spymasters, weighing and sorting information wheat from information chaff.  Not only the threats but also the opportunities have grown exponentially.   For savvy users, there is more good information and a greater ability to connect, to learn, and to understand the things that are critical to their success or failure but are outside their control than ever before--and to do all of this on a personal rather than institutional level.

There are a couple of additional teaching points worth making here.  First is the role of information technology in all of this.  As the technology for communicating and coordinating activities has improved, the intelligence task has become more and more complicated.  This, in turn, has required the use of more and more people to manage the process, and that has changed how the process is done.  Other disciplines have been forced to evolve in the face of technological change.  It is no surprise, then, that intelligence is also subject to similar evolutionary pressures.

It is also noteworthy, however, that the various ages of intelligence have tended to become shorter as technological capabilities have grown near-exponentially.  In fact, when you map the length of the four ages on a logarithmic scale (see below) and draw a trendline, you can see a pretty good fit.  It also appears that the current age, the Age of Democratization, might be a bit past its sell-by date.  This, of course, raises the question:  What age comes next?  I'm voting for the Age of Anarchy...and I am only half kidding.
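If you want to reproduce the trendline, here is a minimal sketch.  The age lengths are my own rough readings of the start dates discussed above (Sun Tzu circa 500 BC, roughly 1800, 1945, and 1994), so treat the numbers as illustrative:

```python
import numpy as np

# Approximate lengths, in years, of the four ages in order:
# Concentration, Professionalization, Institutionalization, Democratization.
lengths = np.array([2300, 145, 49, 25])
x = np.arange(len(lengths))

# A straight line through log10(length) is an exponential decay in length.
slope, intercept = np.polyfit(x, np.log10(lengths), 1)
r = np.corrcoef(x, np.log10(lengths))[0, 1]
next_age = 10 ** (intercept + slope * len(lengths))

print(f"correlation: {r:.2f}")  # ~ -0.95 with these numbers: a pretty good fit
print(f"implied length of the next age: ~{next_age:.0f} years")
```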


Is this a perfect way of thinking about the history of intelligence?  No, of course not.  There are many, many exceptions to these broad patterns that I see.  Still, in a survey class, with limited time to cover the topic, I think focusing on these broad patterns that seemed to dominate makes some sense.  

Friday, May 10, 2019

I Am Leaving Mercyhurst And...

Look, Ma!  An (almost) clean desk!
... joining the US Army War College!

It has been an honor and a privilege to work with the faculty here in the Intelligence Studies Department at Mercyhurst over the last 16 years.  Having the opportunity to help build a world class program is an experience I will never forget.

As important as my colleagues, however, are the extraordinary students I have had the pleasure to teach and work with.  Whether we were sweating in the halls of the old Wayne Street building or livin' large in our fancy, new digs in the Center for Academic Engagement, getting to work with really smart, super dedicated students was probably the best thing about the job.  Watching them continue to grow and succeed as alumni is even more rewarding.  I am convinced that, one day, the DNI will almost certainly be a Mercyhurst alum (Several Directors of Strategic Intelligence for some Fortune 500 companies already are).

As much as I am sorry to leave Mercyhurst, I am very excited about my next position as Professor Of Strategic Futures at the War College.  There are few missions as important as developing strategic leaders and ideas for the US Army and I am proud to be part of the effort.

I expect to be out of my office here by the end of the month, so, if you have any last minute business to attend to, please reach out soon.  After the end of the month, the best way to reach me until I get to Carlisle in July is via gmail (kris dot wheaton at gmail dot com).  Once I have new contact info, I will post it.

I fully expect to continue to publish new thoughts, articles, and anything interesting I run across here on Sources and Methods.  In fact, I expect to be able to write more often.  

Stay tuned!  It's about to get (more) interesting...

Tuesday, March 19, 2019

What's The Relationship Of An Organization's Goals And Resources To The Type Of Intelligence It Needs?

"Don't blame me, blame this!"
I was trying to find some space on the whiteboard in my office and it occurred to me that I really needed to do something with some of these thoughts.

One of the most interesting (to me, at least) had to do with the relationship between an organization's goals and its resources coupled with the notion of tactical, operational and strategic intelligence.

There is probably not an entry level course in intelligence anywhere in the world that does not cover the idea of tactical, operational and strategic intelligence.  Diane Chido and I have argued elsewhere that these three categories should be defined by the resources that an organization risks when making a decision associated with the intel.  In other words, decisions that risk few of an organization's resources are tactical while those that risk many of the organization's resources are strategic.  Thus, within this context, the nature of the intelligence support should reflect the nature of the decision, and the defining characteristic of the decision is the amount of the organization's resources potentially at risk.   

That all seemed well and good, but it seemed to me to be missing something.  Finally (Diane and I wrote our article in 2007, so you can draw your own conclusions...), it hit me!  The model needed to also take into consideration the potential impact on the goals and purposes of the organization.

Here's the handy chart that (hopefully) explains what I mean:


What I realized is that the model that Diane and I had proposed had an assumption embedded in it.  In short, we were assuming that the decisionmaker would understand the relationship between their eventual decision, the resources of the organization, and the impact the decision would have on the organization's goals.  

While there are good reasons to make this assumption (decisionmakers are supposed to make these kinds of calculations, not intel), it is clearly not always the case.  Furthermore, adding this extra bit of nuance to the model makes it more complete.

Let's take a look at some examples.  If the impact on resources of deciding to pursue a particular course of action is low but the pay-off is high, that's a no-brainer (Example:  You don't need the DIRNSA to tell you to have a hard-to-crack password).  Of course you are going to try it!  Even if you fail, it will have cost you little.  Likewise, if the impact on resources is high and the impact on goals is low, then doing whatever it is you are about to do is likely stupid (Example:  Pretty much the whole damn Franklin-Nashville Campaign).
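Encoded as a quick sketch, the expanded model looks something like this (the low/medium/high buckets and the mapping of "medium" to operational are my own simplifications of the chart):

```python
# Resources risked still sets the level of intelligence support, per the
# 2007 model; comparing resources against impact on goals flags the two
# easy calls described above.
LEVELS = {"low": "tactical", "medium": "operational", "high": "strategic"}

def quick_take(resources_at_risk: str, impact_on_goals: str) -> str:
    if resources_at_risk == "low" and impact_on_goals == "high":
        return "No-brainer: try it; even failure costs little"
    if resources_at_risk == "high" and impact_on_goals == "low":
        return "Likely stupid (see: Franklin-Nashville Campaign)"
    return f"Weigh carefully; warrants {LEVELS[resources_at_risk]} intelligence support"

print(quick_take("low", "high"))
```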

While many of these elements may only be obvious after the fact, to the extent that these kinds of things are observable before the decision is made, reflecting on them may well help both intelligence professionals and decisionmakers understand what is needed of them when confronted by a particular problem.  

Wednesday, October 10, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (Part 5 and 6)

"Jeeves, I am fairly certain that is not what Prof. Wheaton had in mind
when he said we need to constrain the requirement."
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.

You can also see the first four articles on things to think about when having a requirements meeting with a decisionmaker (DM) at the links below:

1.  Does the DM really want intelligence?
2.  What kind of intelligence is the DM looking for?
3.  What are the DM's assumptions?
4.  What does the DM mean when he/she/they say "x"?
Today, I am writing parts five and six of this six-part epic discussing what intel professionals need to think about when they are actually in the meeting, talking to a decisionmaker about his or her requirements.

5. What constraints is the DM willing to put on the requirement?

I once had a DM who was looking to expand his local business and asked for a nationwide study.  His business was based on serving local customers and he did not have the resources to go nationwide and yet...  

Decisionmakers are notoriously reluctant to put constraints on requirements.  They worry that, if they do, just on the other side of whatever bright line they think they have drawn, there will be a perfect customer for their business, a critical fact that lets them make a foolproof plan to defeat the enemy, or the key piece of info that solves all their problems.  I call this the "pot of gold" syndrome and it afflicts every decisionmaker.  


This worry, of course, blinds these same decisionmakers to the inevitable problem this approach causes:  Given the constant limitations on time and resources, trying to look everywhere makes it difficult for the intelligence unit to look anywhere in depth.  Knowing the areas that are of genuine interest and can genuinely support the decisionmaker helps get the most out of the intelligence effort.  Likewise, knowing where you don't need to look is equally helpful.


There are at least six different kinds of constraints that intelligence professionals need to address when engaged in a discussion about requirements (a rough checklist sketch follows the list):
  • Geography.  What are the limits to how far we need to look?  Where can we draw lines on our maps?  Geography is used loosely here, by the way.  Understanding and constraining the "market landscape" or the "cyber landscape", for example, also fall within this guidance.
  • Time.  How far forward do you want us to look?  Every problem has a "predictive horizon" beyond which it is hard to see.  Moreover, you will likely see a good bit more detail with a good bit more confidence if you are looking one month out instead of 10 years out.
  • Organizational units.  At what level does the DM want the analysis?  Am I looking at industries, companies or departments within companies?  Countries, regions, or continents?  
  • Processes, functions.  Are there certain processes or functions of the target that the DM cares more about than others?  Are there processes or functions that we could ignore?  For example, imagine a company that doesn't care how its competitor manages its HR but really wants to know about its supply chain.
  • People.  Which people in the target organization are most important to the DM (if any)?  Are we looking at the government of a country or the president of a country?  A competitor or the CEO of that competitor?  Obviously, "both!" might be the right answer, but asking the question makes it clear to both the DM and the intel unit.
  • Money.  Are there amounts of money about which we do not care?  Do you want me to try to look at every drug transaction or just the large ones?  Is every act of bribery, no matter how trivial, really worth spending the time and energy on in a study of a country's level of corruption?  Again, the answer in both cases may be "yes!" but without asking, the intel unit runs the risk of failing to provide the level of analysis the DM wants and will almost inevitably waste time analyzing issues that the DM cares little about.
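Here is a minimal sketch of the six constraint types as a pre-meeting checklist (the field names are mine, not a standard taxonomy; anything left unset is a question still to ask the DM):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class RequirementConstraints:
    geography: Optional[str] = None     # incl. market or cyber "landscapes"
    time_horizon: Optional[str] = None  # how far forward to look
    org_units: Optional[str] = None     # industries vs. companies vs. departments
    processes: Optional[str] = None     # functions the DM cares (or doesn't care) about
    people: Optional[str] = None        # the government vs. the president
    money: Optional[str] = None         # thresholds below which we don't care

def open_questions(c: RequirementConstraints) -> list:
    """Return the constraint types the DM has not yet put limits on."""
    return [f.name for f in fields(c) if getattr(c, f.name) is None]

c = RequirementConstraints(geography="EU member states", time_horizon="12 months")
print("Still to constrain:", open_questions(c))
```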
6. What are the DM's priorities?

In any sort of robust requirements discussion, it is normal for many more requirements to emerge than the intelligence unit can handle.   Rather than complain about all the work, a better way to handle this is to get the DM to state his/her priorities.  

I have worked with hundreds of DMs and all of them understand resource constraints.  Even with quality intel analysis, I have often seen teams disappoint a DM when they have to say, "We didn't have time/money/people to get to all of your requirements."  I have never, however, seen a DM disappointed when that team can say, "We didn't have time/money/people to get to all of your requirements, but we were able to address your top 5 (or 10 or whatever) requirements."

The key to being able to address the top priorities, however, is knowing what they are.  As with all constraints, DMs are typically hesitant to prioritize their questions.  They may feel that they do not know enough to do so.  They may also be worried that the intelligence unit will put on blinders such that they will only look at the priorities and forget to keep an eye out for unexpected threats and opportunities.  

One of the keys here is to not make assumptions about priorities.  Even if the DM sends the team a numbered list, it makes sense to go back and ask, "Are these in priority order?"  Almost every time I have asked that question - forcing the DM to actively think about their priorities - I get changes to the order.  Likewise, just because a DM talks a lot about a certain issue, do not assume that it is the top priority.  It may just be the most recent thing that has come up or a new idea that the DM just had.  Asking, "We have talked about X quite a bit.  Is this where you would like us to focus?" is still important.

Priorities are an enormously powerful tool for an intelligence unit. They allow the unit to focus and to make tough decisions about what is relevant and what is not.  Don't leave your requirements meeting without them!

Next:  Four Things You Must Do After A Requirements Meeting

Wednesday, August 15, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (Part 4)

"Hic sunt dracones!"
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  Last week, I covered the first three things to think about when having a requirements meeting with a DM:
1.  Does the DM really want intelligence?
2.  What kind of intelligence is the DM looking for?
3.  What are the DM's assumptions?
Today, I am writing part four of a six part series discussing what intel professionals need to think about when they are actually in the meeting, talking to a decisionmaker about his or her requirements.

4.  What does the DM mean when he/she/they say "x"?

"I'm worried about Europe.  What moves are our competitors likely to make next?"  This is a perfectly reasonable request from a decisionmaker.  In fact, if you are in a competitive intelligence position for a larger corporation, you have likely heard something close to it.  

While reasonable, it is the kind of requirements statement that is filled with dragons for the unwary.  Not the least of these dragons is definitional.   When the DM said "competitors" did he or she mean competitors that reside in Europe or competitors that sell in Europe or both?  And what did he or she mean by "Europe"?  Continental Europe, the EU, western Europe, something else?

Listening carefully for these common words that are actually being used in very specific ways or are, in a particular organization, technical terms is a critical aspect of a successful requirements meeting.  If the intelligence professional has a long history with a particular decisionmaker then these terms of art may be common knowledge.  Even in this case, however, it is worth confirming with the DM that everyone shares this understanding of these kinds of words.

That is why I consider it best practice to memorialize the requirement in writing after the meeting and to include (usually by way of footnote) any terms defined in the meeting.  In addition, if certain terms weren't defined in the meeting but the intel professional feels the need to define them afterwards, I think it makes sense for the intel professional to make their best guess at what the DM meant but then draw specific attention to the intel professional's tentative definition of the term in question and to seek confirmation of that definition with the DM.  

This may sound like a convoluted process, but, as I tell my students, not getting the requirement right is like building a house on the wrong piece of property.  It doesn't matter how beautiful or elegant it is, if you build it on the wrong piece of property you will still have to tear it down and start all over again.  The same holds true for a misunderstood intelligence requirement.  Get the requirement wrong and it doesn't matter how good your answer is - you answered the wrong question!

Next:  #5 What constraints are the DMs willing to put on the requirement?

Wednesday, August 8, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (Part 3)

"I challenge your assumptions, sir!"
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  Today, I am writing part three of a six part series discussing what intel professionals need to think about when they are actually in the meeting, talking to a decisionmaker about his or her requirements.

3. What are the DM's assumptions?

There are three kinds of assumptions intelligence professionals need to watch for in their DMs when discussing requirements:
  • About the requirement
  • About the answer to the requirement
  • About the intel team
Consider this requirement:  "Will the Chinese provide the equipment for the expansion of mobile cellphone services into rural Ghana?"  The DM is clearly assuming that there is going to be an expansion of cellphone services.  That doesn't make it a bad requirement but analysts should start by checking this assumption.  

Note also that the DM did not frame the question as "Who is going to provide the equipment...".  Rather, he or she highlighted the potential role of the Chinese.  This kind of framing suggests that the DM thinks he or she already knows the answer to the requirement but just wants a "double check".  Other interpretations are possible, of course, but it is worth noting if only so the intelligence professionals working the issue don't approach the problem with blinders on.

Finally, it is also important to think about the assumptions the DM has about the team working on the requirement.  What does the DM see when he or she looks out at our team?  Are we all young and eager?  Old and grizzled?  Does our reputation - good or bad - precede us?  Is the DM asking the "real" requirement or just what he or she thinks the team can handle?  Not getting at the real questions the DM needs answered is a recipe for failure or, at least, the perception of failure, which is probably worse.

Next Week:  #4 What does the DM mean when he/she/they say "x"?

Tuesday, August 7, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker (Part 2)

"And what kind of intelligence would the gentleman prefer today?"
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  Today, I am writing part two of a six part series discussing what intel professionals need to think about when they are actually in the meeting, talking to a decisionmaker about his or her requirements.

2.  What kind of intelligence is the DM looking for?

There are two broad (and informal) categories of intelligence - descriptive and estimative.  Descriptive intelligence is about explaining something that is relevant to the decision at hand.  Estimative intelligence is about what that "something" is likely to do next.  It is the difference between "Who is the president of Burkina Faso now?" and "Who is the next president of Burkina Faso likely to be?"

Estimative intelligence is obviously more valuable than descriptive intelligence.  Estimative intelligence allows the DM and his or her operational staff to plan for the future, to be proactive instead of reactive.  Surprisingly, though, DMs often forget to ask for estimates regarding issues they think will be relevant to their decisions.  It is worth the intelligence professional's time, therefore, to look for places where an estimate might be useful and to suggest it as an additional requirement.

While I am never one to look for more work, the truth is that descriptive intelligence is becoming easier and easier to find.  The real value in having dedicated intel staff is in that staff's ability to make estimates.  If all you do is what computers do well (i.e., describe), then you run the risk of being downsized or eliminated the next time there is a budget crunch.

Tomorrow:  #3 What are the DM's assumptions?

Monday, August 6, 2018

6 Things To Think About While Discussing Requirements With A Decisionmaker

An intel professional successfully gets everything he needs from a
DM in a requirements briefing.  Guess which one is the unicorn...
How can I use the limited amount of time my decisionmakers have to discuss their intelligence requirements to get the maximum return on that investment?  Earlier this summer, I began a series on this precise theme.

I have already written about how to prepare for an intelligence requirements meeting and about how to deal with a virtual intelligence requirements environment.  Today, I am writing part one of a six part series discussing what intel professionals need to think about when they are actually in the meeting, talking to a decisionmaker about his or her requirements.

1.  Does the DM really want intelligence?

It goes without saying that an organization's mission is going to drive its intel requirements.  Whether the goal is to launch a new product line or take the next hill, decisionmakers need intel to help them think through the problem.

Unfortunately, DMs often conflate operational concerns ("What are we going to do?" kinds of questions) with intel concerns ("What is the other guy going to do?" kinds of questions).  This is particularly true in a business environment where intelligence as a distinct function of business is a relatively new concept.

Good intelligence requirements are typically about something which is important to an organization's success or failure but which is also outside that organization's control.  Good intelligence requirements are, in short, about the "other guy" - the enemy, the competitor, the criminal - or, at least, about the external environment.

Intelligence professionals need to be able to extract intelligence requirements from this broader conversation and play them back to the DM to confirm that both parties understand what needs to be done before they go to work.

Tomorrow:  #2 What kind of intelligence is the DM looking for?

Thursday, July 19, 2018

How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (Part 2 - Nuance)

In my last post on this topic, I outlined what I considered to be a pretty good formula for a pretty good estimate (sketched as a fill-in-the-blanks template after the list):

  • Good WEP +
  • Nuance +
  • Due to's +
  • Despite's +
  • Statement of AC = 
  • Good estimate!
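To make the formula concrete, here is a minimal sketch of it as a fill-in-the-blanks template.  The "due to" and "despite" values are invented for illustration; those components, and analytic confidence, get their own posts in this series:

```python
# Assemble an estimate from the components of the formula above.
def estimate(subject, wep, nuance, due_to, despite, confidence):
    return (f"{subject} {wep} {nuance} due to {due_to}, despite {despite}.  "
            f"Analytic confidence in this estimate is {confidence}.")

print(estimate(
    subject="The GDP of Yougaria",
    wep="is likely to",
    nuance="grow by 3-4% over the next 12 months",
    due_to="rising exports",            # invented for illustration
    despite="political uncertainty",    # invented for illustration
    confidence="moderate",
))
```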
I also talked about the difference between good WEPs, bad WEPs and best WEPs and if you are interested in all that, go back and read it.  What I intend to talk about today is the idea of nuance in an estimate.

Outline of the series so far (Click for full page version)
Let me give you an example of what I mean:  
  • The GDP of Yougaria is likely to grow.
  • The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
Both of these are estimates and both of these use good WEPs but one is obviously better than the other.  Why?  Nuance.

Mercyhurst Alum Mike Lyden made a stab at defining what we mean by "nuance" in his 2007 thesis, The Efficacy of Accelerated Analysis in Strategic Level Intelligence Estimates.  There he defined it as how many of the basic journalistic questions (Who, What, When, Why, Where, and How) the estimate addressed.  

For example, Mike would likely give the first estimate above a nuance score of 1.  It really only answers the "What" question.  I think he would give the second estimate a 3, as it appears to answer not only the "What" question but also the "When" and "How (or how much)" questions as well.  It's not a perfect system, but it makes the point.
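In code, Mike's scoring rule is nothing more than counting.  A minimal sketch (deciding which questions an estimate actually answers remains a human judgment; the function just does the bookkeeping):

```python
JOURNALISTIC_QUESTIONS = {"who", "what", "when", "where", "why", "how"}

def nuance_score(answered):
    """Count how many of the basic journalistic questions an estimate answers."""
    assert set(answered) <= JOURNALISTIC_QUESTIONS
    return len(set(answered))

print(nuance_score({"what"}))                 # "...is likely to grow."  -> 1
print(nuance_score({"what", "when", "how"}))  # "...by 3-4% over the next 12 months."  -> 3
```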

In general, I think it is obvious that more nuance is better than less.  A more nuanced estimate is more likely to be useful and it is less likely to be misinterpreted.  There are some issues that crop up and need to be addressed, however - nuances to the nuance rule, if you will.
  • What if I don't have the evidence to support a more nuanced estimate?  Look at the second estimate above.  What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth?  I get it.  You wouldn't feel comfortable putting numbers and dates to this growth.  What would you feel comfortable with?  Would you be more comfortable with an adverb ("grow moderately")?  Would you be more comfortable with a date range ("over the next 6 to 18 months")?  Is there a way to add more nuance in any form with which you can still be comfortable as an analyst?  The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind.  If, in the end, all you are comfortable with is "The economy is likely to grow" then say that.  I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
  • What if the requirement does not demand a nuanced estimate?  What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow?  He/She doesn't need to know any more to make his/her decision.  In fact, spending time and effort to add nuance would actually be counterproductive.  In this case, there is no need to add nuance.  Answer the question and move on.  That said, my experience suggests that this condition is rather more rare than not.  Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced.  Whether this is the case or not is something that should be worked out in the requirements process.  I am currently writing a three-part series on this and you can find Part 1 here and Part 2 here.  Part 3 will have to wait until a little later in the summer.
  • What if all this nuance makes my estimate sound clunky?  So, yeah.  An estimate with six clauses in it is going to be technically accurate and very nuanced but sound as clunky and awkward as a sentence can sound.  Well-written estimates fall at the intersection of good estimative practice and good grammar.  You can't sacrifice either, which is why they can be very hard to craft.  The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences.  In my next post on this, where I will talk about "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.