- B=the Business Case for the tech. This is how someone can make money off the tech. Most R&D is funded by companies these days (this was not always the case). These companies are much more likely to fund techs that can contribute to a revenue stream. This doesn't mean that a tech without an obvious business case can't get developed and funded; it just makes it harder.
- P=Political/Cultural/Social issues with a tech. A tech might be really cool and have an excellent business case, but because it crosses some political or social line, it either goes nowhere or accelerates much more quickly than it might normally. Three examples:
- We were looking at 3G adoption in a country in the early 2000s. There were lots of good reasons to suspect that it was going to happen, until we learned that the President's brother owned the 2G network already in existence in the country. He was able to use his family connections to keep competition out of the country.
- An example of a social factor delaying adoption is Google Glass in 2013. Privacy concerns, driven by the possibility of videos taken without consent, led to users being called "Glassholes." Coupled with other performance issues, this led to the discontinuation of the original product (though it lives on in Google's attempts to enter the augmented reality market).
- Likewise, these social or cultural issues can positively impact tech trends as well. For example, we have all had to become experts at virtual communication almost overnight due to the COVID crisis--whether we wanted to or not.
- R=Regulatory/Legal issues with the tech. The best example I can think of here is electromagnetic spectrum management. Certain parts of the electromagnetic spectrum have been allocated to certain uses. If your tech can only work in a part of the spectrum owned by someone else, you're out of luck. Some of this "regulation" is not government sponsored either. The Institute of Electrical and Electronics Engineers (IEEE), for example, establishes common standards for most devices in the world: your wifi router can connect to any wifi-enabled device because they all use the IEEE's 802.11 standard for wifi. Other regulations come from the Federal Communications Commission and the International Telecommunication Union.
- T=The tech itself. This is where most people spend most of their time when they study tech trends. It IS important to understand the strengths and weaknesses of a particular technology, but as discussed above, it might not be as important as other environmental factors in the eventual adoption (or non-adoption...) of a tech. That said, there are a few good sources of info that can allow you to quickly triangulate on the strengths and weaknesses of a particular tech:
- Wikipedia. Articles are typically written from a neutral point of view and often contain numerous links to other, more authoritative sources. It is not a bad place to start your research on a tech.
- Another good place is Gartner, particularly the Gartner Hype Cycle. I'll let you read the article at the link but "Gartner Hype Cycle 'insert name of tech here'" is almost always a useful search string (Here's what you get for AI for example...).
- Likewise, you should keep your eye out for articles about "grand challenges" in a particular tech (Here is one about grand challenges in robotics as an example). Grand Challenges outline the 5-15 big things the community of interest surrounding the tech has to figure out to take the next steps forward.
- Finally, keep an eye out for "roadmaps." These can be either informal or formal (like this one from NASA on Robotics and autonomous systems). The roadmaps and the lists of grand challenges should have some overlap, but they are often presented in slightly different ways. (A rough sketch of BPRT as a checklist follows this list.)
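For those who like to keep their notes structured, here is a minimal, hypothetical sketch of BPRT as a checklist. The class name, fields, and the Google Glass example notes are my own shorthand for the factors above, not a formal method.

```python
# A hypothetical sketch of BPRT as a structured checklist. The class, field
# names, and example notes are my own shorthand, not a formal method.
from dataclasses import dataclass

@dataclass
class BPRTAssessment:
    tech: str
    business: str     # B: who can make money off the tech, and how?
    political: str    # P: political/cultural/social accelerants or brakes
    regulatory: str   # R: regulatory, legal, and standards constraints
    tech_itself: str  # T: strengths, weaknesses, grand challenges, roadmaps

glass = BPRTAssessment(
    tech="Google Glass (2013)",
    business="weak consumer business case at launch",
    political="privacy backlash over recording without consent ('Glassholes')",
    regulatory="no major spectrum or standards blockers",
    tech_itself="performance issues; augmented reality grand challenges still open",
)
print(glass.political)
```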
Wednesday, December 9, 2020
The BPRT Heuristic: Or How To Think About Tech Trends
Thursday, January 2, 2020
How To Think About The Future: A Graphic Prologue
(Note: I have been writing bits and pieces of "How To Think About the Future" for some time now and publishing those bits and pieces here for early comments and feedback. As I have been talking to people about it, it has become clear that there is a fundamental question that needs to be answered first: Why learn to think about the future?
Most people don't really understand that thinking about the future is a skill that can be learned--and can be improved upon with practice. More importantly, if you are making strategic decisions, decisions about things that are well outside your experience, or decisions under extreme uncertainty, being skilled at thinking about the future can significantly improve the quality of those decisions. Finally, being able to think effectively about the future allows you to better communicate your thoughts to others. You don't come across as someone who "is just guessing."
I wanted to make this case visually (mostly just to try something new). Randall Munroe (XKCD) and Jessica Hagy (Indexed) both do it much better of course, but a tip of the hat to them for inspiring the style below. It is a very long post, but it is a quick read; just keep scrolling!
As always, thanks for reading! I am very interested in your thoughts on this...)
Monday, November 18, 2019
Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data
"Stop it, Barsdale! You're introducing confounds into my experiment!" |
- "It's a low confidence estimate, but the Patriots are very likely to win this week."
- "The Patriots are very likely to win this week. This is a low confidence estimate, however."
At first glance, the results appear to be less than robust. The difference measured here is unlikely to be statistically significant. Even if it is, the effect size does not appear to be that large. The one thing that seems clear is that there is no clear preference.
Or is there?
Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!
I think I finally found something which soothes my aching ego a bit. The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely. Indeed, this sort of A/B testing is done precisely because the asker does not know which one the client/customer/etc. will prefer.
This assumption might not hold in this case. Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years). When they are included, however, they are almost always included at the end of the estimate. Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."
How might the comfort of this familiarity change the results? On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates). On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the familiarity heuristic, or our preference for the familiar over the unfamiliar. In experiments, the kind of thing that can throw your results off is known as a confound.
More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound: BLUF.
Bottom Line Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing. "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter). Indeed, the single most common type of comment from those who preferred the version with the statement of analytic confidence at the end was, as this one survey taker said, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."
Note: Ellipses seem important these days, and the ones in the sentence above mark where I took out the word "first." I randomized the two statements in the survey so that they did not always come up in the same order. Thus, this particular respondent saw the second statement above (the one with the statement of analytic confidence at the end) first.

If the base rate of the two answers is not 50-50 but rather 40-60 (or worse, in favor of the more familiar, more BLUFy answer), then these results could easily become very significant. It would be like winning a football game you were expected to lose by 35 points!
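To make the base-rate argument concrete, here is a minimal sketch with made-up numbers (my actual survey counts are not reproduced here, so the 48-of-100 figure is purely hypothetical). It shows how surprising the same observed preference would be under a 50-50, 60-40, and 70-30 null favoring the BLUF-style version.

```python
# A minimal sketch of the base-rate argument, with made-up numbers.
# Hypothetical: 48 of 100 survey takers preferred the version with the
# statement of analytic confidence up front. How surprising is that under
# different assumptions about the "expected" preference rate?
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) when X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 100, 48  # hypothetical sample size and observed count

for null_rate in (0.50, 0.40, 0.30):  # 50-50, 60-40, 70-30 in favor of BLUF
    tail = binomial_tail(k, n, null_rate)
    print(f"expected preference rate {null_rate:.0%}: P(>= {k}/{n}) = {tail:.4f}")
```

The lower the assumed base rate for the unfamiliar, non-BLUF form, the more remarkable an even split becomes.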
Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."
Joking aside, it is an important topic. As you likely know, it is not enough to just make an estimate. It is also important to include a statement of analytic confidence. To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis. I don't think that anyone would disagree that form can have a significant impact on how the content is received. The real questions are how form impacts content and to what degree. Getting at those questions in the all-important area of formal estimative writing is truly something well worth additional study.
Monday, August 26, 2019
How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)
I am writing a series of posts about how to think about the future. In case you missed the first two parts, you can find them here:
Part 1--Questions About Questions
Part 2--What Do You Control
These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.
*******************
Former Director of the CIA, Mike Hayden, likes to tell this story:
"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." (Italics mine)I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment. No. The #1 reason was the difficulty of the questions.
Understanding why some questions are more difficult than others is incredibly important. Difficult questions typically demand more resources--and have more consequences. What makes it particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why. I have written about this elsewhere (here and here and here, for example), and may have become a bit like the man in the "What makes soup, soup?" video below...
No one, however, to my knowledge, has solved the problem of reliably categorizing questions by difficulty.
I have a hypothesis, however.
I think that the AI guys might have taken a big step towards cracking the code. When I first heard about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there. That was way back in 2011, though. As I went looking for updates for this series of posts, I got really excited. There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig in their book, Artificial Intelligence: A Modern Approach, may have gotten even closer to what is, essentially, a working definition of question difficulty.
Let me be clear here. The AI community did not set out to figure out why some questions are more difficult than others. They were looking to categorize AI tasks by difficulty. My sense, however, is that, in so doing, they have inadvertently shone a light on the more general question of question difficulty. Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine; a rough scoring sketch follows the list):
- Fully observable vs. partially observable -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.
- Single agent vs. multi-agent -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.
- Competitive vs. cooperative -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question. Questions about enemies are inherently harder to answer than questions about allies.
- Deterministic vs. stochastic -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)? How much randomness is in the environment?
- Episodic vs. sequential -- Questions about things that happen over time are more difficult than questions about things that happen once.
- Static vs. dynamic -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.
- Discrete vs. continuous -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world," spaces.
- Known vs. unknown -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work.
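Here is a rough, hypothetical sketch of how these eight properties could be turned into a quick difficulty checklist. The equal weights and the example answers are mine (and arbitrary); the point is the structure, not the score.

```python
# A rough, hypothetical sketch of the eight properties above as a difficulty
# checklist. Russell and Norvig categorize task environments, not questions;
# mapping their dichotomies onto questions (and weighting them equally) is mine.
from dataclasses import dataclass, fields

@dataclass
class QuestionProfile:
    # True = the harder end of each dichotomy
    partially_observable: bool
    multi_agent: bool
    competitive: bool
    stochastic: bool
    sequential: bool
    dynamic: bool
    continuous: bool
    unknown_rules: bool

    def difficulty_score(self) -> int:
        """Count how many of the eight properties sit at the hard end."""
        return sum(getattr(self, f.name) for f in fields(self))

# Example: "Will country X's military attack by sea in the next year?"
q = QuestionProfile(
    partially_observable=True,  # intentions are hidden
    multi_agent=True,           # many actors involved
    competitive=True,           # an adversary may actively deceive
    stochastic=True,            # lots of randomness in the environment
    sequential=True,            # plays out over time
    dynamic=True,               # the situation keeps moving
    continuous=False,           # bounded, at least notionally
    unknown_rules=False,        # we roughly know how militaries work
)
print(q.difficulty_score(), "of 8 properties at the hard end")
```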
Friday, August 16, 2019
How To Think About The Future (Part 2 - What Do You Control?)
I am writing a series of posts about how to think about the future. In case you missed Part 1, you can find it here:
How To Think About The Future (Part 1 -- Questions About Questions)
These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.
****************
The great Stoic philosopher Epictetus wrote,
"Work, therefore to be able to say to every harsh appearance, 'You are but an appearance, and not absolutely the thing you appear to be.' And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you." (Italics mine)There are good reasons to focus on questions about things you control. Things you control you can understand or, at least, the data required to understand them is much easier to get. Things you control you can also change (or change more easily). Finally, you only get credit for the things you do with the things you control. Few people get credit for just watching.
Whole disciplines have been built around improving what you do with what you control. MBA and Operations Research programs are both good examples of fields of study that focus mostly on improving decisions about how you use the resources under your control. Indeed, focusing on the things you control is at the center of effectual reasoning, an exciting new take on entrepreneurship and innovation (for example, the entire crowdfunding/startup Quickstarter Project was built on effectuation principles, and they are the reason it was as successful as it was).
On the other hand, another great thinker from the ancient world once wrote,
"If you know the enemy and know yourself, you need not fear the result of a hundred battles." Sun Tzu, The Art Of WarSun Tzu went on to outline the exact impact of not thinking about things you don't control:
"If you know yourself but not the enemy, for every victory gained you will also suffer a defeat."Things outside of your control are much more squishy than things under your control. The data is often incomplete, and what is there is often unclear. It is pretty normal for the info to be, as Clausewitz would say, "of doubtful character," and it is rarely structured in nice neat rows with data points helpfully organized with labelled columns. Finally, in an adversarial environment at least, you have to assume that at least some of the info you do have is deceptive--that it has been put there intentionally by your enemy or competitor to put you off the track.
People frequently run from questions about things that are outside of their control. The nature of the info available can often make these kinds of questions seem unresolvable, as if no amount of thinking could lead to any greater clarity.
This is a mistake.
Inevitably, in order to move forward with the things you do control, you have to come to some conclusions about the things you do not control. A country's military looks very different if it expects the enemy to attack by sea vs. by land. A company's marketing plan looks very different if it thinks its competitor will be first to market with a new type of product or if it will not. Your negotiating strategy with a potential buyer of your house depends very much on whether you think the market in your area is hot or not.
The US military has a saying: "Intelligence leads operations." This is a shorthand way of driving home the point that your understanding of your environment, of what is happening around you, of the things outside of your control, determines what you do with the things under your control. Whether you do this analysis in a structured, formal way or just go with your gut instinct, you always come to conclusions about your environment, about the things outside your control, before you act.
Since you are going to do it anyway, wouldn't it be nice if there were some skills and tools you could learn to do it better? It turns out that there are. The last 20-30 years have seen an explosion in research about how to better understand the future for those things outside of our control.
More importantly, learning these skills and tools can probably help you understand things under your control better as well. Things under your control often come with the same kinds of squishy data normally associated with things outside your control. The opposite is much less likely to be true.
Much of the rest of this series will focus on these tools and thinking skills, but first, we need to dig more deeply into the nature of the questions we ask about things outside our control and precisely why those questions are so difficult to answer.
(Next: Why Are Questions About Things Outside Your Control So Difficult?)
Tuesday, July 30, 2019
How To Think About The Future (Part 1 -- Questions About Questions)
We don't think about the future; we worry about it.
Whether it's killer robots or social media or zero-day exploits, we love to rub our preferred, future-infused worry stone between our thumb and finger until it is either a thing of shining beauty or the death of us all (and sometimes both).
This is not a useful approach.
Worry is the antithesis of thinking. Worry is all about jumping to the first and usually the worst possible conclusion. It induces stress. It narrows your focus. It shuts down the very faculties you need to think through a problem. Worry starts with answers; thinking begins with questions.
What Are Your Questions?
“A prudent question is one-half of wisdom.” – Francis Bacon

Given the importance of questions (and of asking the "right" ones), you would think that there would be more literature on the subject. In fact, the question of questions is, in my experience, one of the great understudied areas. A few years ago, Brian Manning and I took a stab at it and only managed to uncover how little we really know about how to think about, create, and evaluate questions.
"The art of proposing a question must be held of higher value than solving it.” – Georg Cantor
“If you do not know how to ask the right question, you discover nothing.” – W. Edwards Deming
For purposes of thinking about the future, however, I start with two broad categories to consider: Speculative questions and meaningful questions.
Speculation does not come without risks, however. For example, how many terrorist groups would like to strike inside the US? Let's say 10. How are they planning to do it? Bombs, guns, drones, viruses, nukes? Let's say we can come up with 10 ways they can attack. Where will they strike? One of the ten largest cities in the US? Do the math--you already have 1000 possible combinations of who, what, and where.
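To see how quickly the speculation space blows up, here is a tiny sketch. The group, method, and city names are placeholders; only the 10/10/10 counts come from the paragraph above.

```python
# A tiny illustration of the combinatorial explosion described above, with
# made-up placeholder categories (the 10/10/10 counts come from the text).
from itertools import product

groups = [f"group_{i}" for i in range(10)]    # who: 10 hypothetical groups
methods = [f"method_{i}" for i in range(10)]  # what: 10 hypothetical attack methods
cities = [f"city_{i}" for i in range(10)]     # where: the 10 largest US cities

scenarios = list(product(groups, methods, cities))
print(len(scenarios))  # -> 1000 who/what/where combinations
```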
How do we start to narrow this down? Without some additional thinking strategies, we likely give in to cognitive biases like vividness and recency to narrow our focus. Other aspects of the way our minds work--like working memory limitations--also get in the way. Pretty soon, our minds, which like to be fast and certain even when they should be neither, have turned our 1 in 1000 possibility into a nice, shiny, new worry stone for us to fret over (and, of course, share on Facebook).
Meaningful questions are questions that are important to you--important to your plans, to your (or your organization's) success or failure. Note that there are two criteria here. First, meaningful questions are important. Second, they are yours. The answers to meaningful questions almost, by definition, have consequences. The answers to these questions tend to compel decisions or, at least, further study.
It is entirely possible, however, to spend a lot of time on questions which are both of dubious relevance to you and are not particularly important. The Brits have a lovely word for this, bikeshedding. It captures our willingness to argue for hours about what color to paint the bikeshed while ignoring much harder and more consequential questions. Bikeshedding, in short, allows us to distract ourselves from our speculations and our worries and feel like we are still getting something done.
Next: What do you control?
Tuesday, June 18, 2019
What Is #COOLINT?
Apollo 11 in Real-Time is the very definition of cool.
COOLINT is usually reserved for something that is, well, cool but might not be particularly relevant to the question at hand. You want to show COOLINT to other people. You KNOW they will be interested in it. It's the clickbait of the intel world.
A great example of COOLINT is the Apollo 11 In Real-time website (the mobile version is OK, but you will want to look at it on your PC or Mac. Trust me). In fact, I used the hashtag "#COOLINT" when I tweeted out this site this morning. The guys who put this amazing site together have mashed up all of the audio and video, all of the commentary, and all of the pictures into a single website that allows you to follow along with the mission from T - 1 minute to splashdown. It doesn't really have anything to do with intelligence, but, to a spacegeek like me, the Apollo 11 in Real-time website belongs next to the word "cool" in the dictionary.
I intend to argue here, however, that there is a more formal definition of COOLINT, one that is actually useful in analytic reporting. To do this, I want to first briefly explore the concepts of "relevant" and "interesting."
One of the hallmarks of good intelligence analysis is that it be relevant to the decisionmaker(s) being supported. ICD 203 makes this mandatory for all US national security intel analysts but, even without the regulation, relevance has long been the standard in intel tradecraft.
"Interesting" is a term which gets significantly less attention in intel circles. There is no requirement that good intel be interesting. It is ridiculous to think that good intel should meet the same standards as a good action movie or even a good documentary. That said, if I have two pieces of information that convey the same basic, relevant facts and one is "interesting" and other is not (for example, 500 words of statistical text vs. one chart), I would be a bit of a fool not to use the interesting one. Intel analysts don't just have a responsibility to perform the analysis, they also have a responsibility to communicate it to the decisionmaker they are supporting. "Interesting" is clearly less important than "relevant" but, in order to communicate the analysis effectively, something that has to be considered.
With all this in mind, it is possible to construct a matrix to help an analyst think about the kinds of information they have available and where it all should go in their analytic reports or briefings:
"Interesting" vs. "Relevant" in analytic reporting
Relevant information which is not particularly interesting might have to go in the report--it may be too relevant not to include. However, there are many ways to get this kind of info in the report or brief. Depending on the info's overall importance to the analysis, it might be possible to include it in a footnote, annex, or backup slide instead of cluttering up the main body of the analysis.
Information that is interesting but not relevant is COOLINT. It is that neat little historical anecdote that has nothing to do with the problem, or that very cool image that doesn't really explain anything at all. The temptation to get this stuff into the report or brief is great. I have seen analysts twist themselves into knots to try to get a particular piece of COOLINT into a briefing or report. Don't do it. Put it in a footnote or an annex if you have to, and hope the decisionmaker asks you a question where your answer can start with, "As it so happens..."
Info which is not interesting and not relevant needs to be left out of the report. I hope this goes without saying.
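As a rough rule of thumb (and nothing more), the matrix above could be boiled down to something like the sketch below. The placement advice strings are my paraphrase of the quadrant descriptions, and the binary flags oversimplify, as the caveats that follow make clear.

```python
# A minimal sketch of the "interesting vs. relevant" matrix as a triage rule
# of thumb. The advice strings paraphrase the quadrant descriptions above;
# treating both criteria as binary is an oversimplification (see the caveats).
def place_info(relevant: bool, interesting: bool) -> str:
    """Rough guidance on where a piece of information belongs in a report."""
    if relevant and interesting:
        return "main body of the report or brief"
    if relevant and not interesting:
        return "include, but consider a footnote, annex, or backup slide"
    if interesting and not relevant:
        return "COOLINT: leave it out (a footnote or annex at most)"
    return "leave it out entirely"

print(place_info(relevant=False, interesting=True))
```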
Three caveats to this way of thinking about info. First, I have presented this as if the decision is binary--info is either relevant OR irrelevant, interesting OR uninteresting. That isn't really how it works. It is probably better to think of these terms as if they were on a scale that weighs both criteria. It is possible, in other words, to be "kind of interesting" or "really relevant."
The second caveat is that both the terms interesting and relevant should be defined in terms of the decisionmaker and the intelligence requirement. Relevancy, in other words, is relevancy to the question; "interesting," on the other hand, is about communication. What is interesting to one decisionmaker might not be to another.
Finally, if you use this at all, use it as a rule of thumb, not as a law. There are always exceptions to these kinds of models.
Thursday, July 19, 2018
How To Write A Mindnumbingly Dogmatic (But Surprisingly Effective) Estimate (Part 2 - Nuance)
In my last post on this topic, I outlined what I considered to be a pretty good formula for a pretty good estimate:
- Good WEP +
- Nuance +
- Due to's +
- Despite's +
- Statement of AC =
- Good estimate!
Outline of the series so far
- The GDP of Yougaria is likely to grow.
- The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
- What if I don't have the evidence to support a more nuanced estimate? Look at the second estimate above. What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth? I get it. You wouldn't feel comfortable putting numbers and dates to this growth. What would you feel comfortable with? Would you be more comfortable with an adverb ("grow moderately")? Would you be more comfortable with a date range ("over the next 6 to 18 months")? Is there a way to add more nuance in any form with which you can still be comfortable as an analyst? The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind. If, in the end, all you are comfortable with is "The economy is likely to grow" then say that. I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
- What if the requirement does not demand a nuanced estimate? What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow? He/She doesn't need to know any more to make his/her decision. In fact, spending time and effort to add nuance would actually be counterproductive. In this case, there is no need to add nuance. Answer the question and move on. That said, my experience suggests that this condition is rather more rare than not. Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced. Whether this is the case or not is something that should be worked out in the requirements process. I am currently writing a three part series on this and you can find Part 1 here and Part 2 here. Part 3 will have to wait until a little later in the summer.
- What if all this nuance makes my estimate sound clunky? So, yeah. An estimate with six clauses in it is going to be technically accurate and very nuanced, but it will sound as clunky and awkward as a sentence can sound. Well-written estimates fall at the intersection of good estimative practice and good grammar. You can't sacrifice either, which is why they can be very hard to craft. The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences. In my next post on this, where I will talk about "due to's" and "despite's," I will give you a little analytic sleight of hand that can help you with this problem.
Monday, March 13, 2017
Learn IMINT? Stop Looting? Yep, It's Been A Good Day!
I think I can. Left of the main road there appear to be three looting pits. I also think I see some more pits to the right of the road at the base of the first row of small hills. They might be vegetation but the shadowing and the distribution suggest looting - at least to me.
How did I learn to spot looting pits? I joined the GlobalXplorer Project!
Here's how National Geographic's GlobalXplorer Project describes itself:
"GlobalXplorer is an online platform that uses the power of the crowd to analyze the incredible wealth of satellite images currently available to archaeologists. Launched by 2016 TED Prize winner and National Geographic Fellow, Dr. Sarah Parcak, as her “wish for the world,” GlobalXplorer aims to bring the wonder of archaeological discovery to all, and to help us better understand our connection to the past. So far, Dr. Parcak’s techniques have helped locate 17 potential pyramids, in addition to 3,100 potential forgotten settlements and 1,000 potential lost tombs in Egypt — and she's also made significant discoveries in the Viking world and Roman Empire."In order to accomplish this mission, the GlobalXplorer Project puts you through a brief tutorial that teaches you how to spot looting of archaeological sites. It then unleashes you and other members of the project onto a dataset of thousands of satellite photographs of Peru like the one above.
Your answer to the question "Is there looting going on in this picture?" is then compared with hundreds of other answers from different people looking at the same picture. Pretty quickly the crowd forms a consensus that allows project managers to focus scarce local enforcement and preservation resources.
GlobalXplorer, like the Satellite Sentinel Project and other non-profit efforts, takes advantage of aerial imagery and imagery analysis techniques formerly familiar to only highly trained intelligence professionals. In so doing, GlobalXplorer also creates an excellent tool for exposing intelligence studies students to some of the tradecraft of the modern imagery analyst.
I recently used the project in precisely this way in a class I am teaching called Collection Operations for Intelligence Analysts. The course is designed to expose analysts to the difficulties inherent in many modern collection operations. My hope is that by knowing more about collectors and what they do, the students will become better analysts - and maybe catch a few grave robbers in the process!
Friday, December 4, 2015
The Umbrella Man: A Must-See Cautionary Tale About The Inherent Unlikelihood of Conspiracy
This is not to say that there are no conspiracies, but only to say that analysts should be cautious about leaping to that kind of conclusion at the outset. (If you can't see the video, click on this link to view on the NY Times page)
Tuesday, September 8, 2015
Fermi Questions: Creating Intelligence Without Collection
Collection is, for many, a fundamental part of and, in extreme cases, the essential purpose of, intelligence. What would we be without all our drones and spies and sensors?
What if I told you that you can do intelligence without any collection at all?
You probably wouldn't believe me ... but ... you'd likely admit that the advantages would be substantial. It would be blazingly fast - no waiting around for satellites to come into position or agents to report back. It would be mindnumbingly safe - virtually no footprint, no assets to risk, no burn notices to issue. It could reduce as much as 90% of the uncertainty in any given intelligence problem at essentially zero cost.
What is this prodigious procedure, this miracle methodology, this aspirational apex of analytic acumen?
Fermi questions.
Enrico Fermi was a mid-twentieth century physicist who created the first nuclear reactor. He also taught physics at the University of Chicago. He liked to ask his students questions like, "How many piano tuners are there in Chicago?"
In the pre-internet days, this kind of question required a tedious trip through the phone book to determine the number. Even today, using brute force to answer this question is not a trivial exercise. Students almost always balked at the work involved.
Fermi's approach, however, was different. He wasn't asking, "What is the most direct route to the answer to this problem?" Instead he asked a slightly different and, for intelligence purposes, vastly more useful, question: "How close can I get to the answer with what I already know?"
So. What did Fermi already know? Well, the population of Chicago is about 3 million, and from this he could immediately deduce that there could be no more than 3 million piano tuners and that the minimum was none. That may not sound particularly useful, but just recognizing these facts already limits the problem in useful ways and points the way towards how to make the estimate better.
We know, for example, that the number of piano tuners has to be driven by the number of pianos in Chicago. How many of those 3 million people have pianos? Here we could tap into our own experience. How many people do you know? How many of them have pianos in their houses?
Some will say 1 in 10. Some might say 1 in 100. Even this wide range is very useful. Not only does it narrow the problem significantly, but it also highlights one way in which we could get a better estimate if we absolutely have to (i.e., get a more exact number of people with pianos in their houses). But we want to do this without collection, so let's carry on!
With the average household being a shade under 4 people, we can estimate that there are about 750,000 households in Chicago. We can further refine that to between 75,000 and 7,500 pianos (depending on whether you thought 1 in 10 households had a piano or 1 in 100).
Oh, I know what you are thinking! What about all the non-household pianos - at schools and such - that you are conveniently leaving out? I would say that my high-end estimate of the number of pianos includes them and my low-end estimate does not, so they are in there somewhere. It is a "good enough" answer for right now for me. For you that might not be the case, however, so you can make your own estimates about what these numbers might be and put them into the mix.
Working about 250 days a year (weekends, vacation, and holidays excluded) on about 2 pianos a day, each tuner can handle roughly 500 pianos a year. Assuming each piano gets tuned about once a year, that means Chicago needs somewhere between 15 and 150 piano tuners.
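Here is the whole Fermi chain as a minimal sketch. Every number is an assumption carried over from the text (plus the once-a-year tuning assumption made explicit above); the output is the bounded range, not a precise answer.

```python
# A minimal sketch of the Fermi chain above. Every number is an assumption
# from the text; the point is the bounded range, not precision.
population = 3_000_000          # people in Chicago (rough)
people_per_household = 4        # a shade under 4, rounded up
piano_rate_low, piano_rate_high = 1 / 100, 1 / 10   # share of households with a piano
tunings_per_tuner = 250 * 2     # 250 working days, ~2 pianos a day
tunings_per_piano_per_year = 1  # assume each piano is tuned about once a year

households = population / people_per_household
pianos_low = households * piano_rate_low
pianos_high = households * piano_rate_high

tuners_low = pianos_low * tunings_per_piano_per_year / tunings_per_tuner
tuners_high = pianos_high * tunings_per_piano_per_year / tunings_per_tuner

print(f"{tuners_low:.0f} to {tuners_high:.0f} piano tuners")  # -> 15 to 150
```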
How many piano tuners are there really in Chicago? Wolfram Alpha is one of the best search engines to use to answer these kinds of questions. It permits users to ask natural language questions and then dips deeply into public databases to extract precise answers. When asked, "How many piano tuners are there in Chicago?" this is what you get:
Note that Wolfram gives us the number of all musical instrument repairers and tuners - 290 as of 2009. Certainly not all of them are piano tuners. In fact, once you consider just how many instruments besides pianos need to be professionally tuned, and you subtract the repairers of all the kinds of instruments who do not tune pianos, you are probably lucky if a third of these musical instrument repairers and tuners can actually tune a piano.
More importantly, a third of 290 (roughly 100) falls comfortably within the 15-150 range derived from our Fermi process.
Without leaving our chairs.
Intelligence without collection.
What if relying on Fermi questions results in really wrong answers? First, I could say the same thing about any intelligence methodology. Very few of them have been tested to see if they actually improve forecasting accuracy, and all of them take time and resources to implement. All of them can be wrong. Here, at least, both the logic chain and the path to improving the estimate are obvious.
Second, I would ask, what level of precision do you actually need? Norm Augustine, former CEO of Lockheed Martin, used to say, "The last 10 percent of performance generates one-third of the cost and two-thirds of the problems." Augustine was talking about airplanes, but he could have just as well been speaking of intelligence analysis. Getting ever more narrow estimates costs time and money. Good enough is often - in fact, surprisingly often - good enough.
Third, it is unlikely to give you really wrong answers - say one or two orders of magnitude off. This is one of the best benefits of going through the Fermi process. It allows you to have a good sense of the range in which the right answer will likely fall. For example, if, before you had done a Fermi analysis, someone came up to you and said that there are 100,000 piano tuners in Chicago, you might not question it. A Fermi analysis, however, suggests that either something is really wrong with your logic or, more likely, that the person does not know what they are talking about. Either way, the red flag is up and that might be just enough to prevent a disastrous mistake.
You can easily try this method yourself. Pick a country that you know little about and try to estimate the size of its military based on just a few easily found facts such as population and GDP. Once you have gone through the process, check your answer with an authoritative source such as Janes - oh! - and please do not hesitate to post your results in the comments!
By the way, I routinely use this method to get students to answer all sorts of interesting and seemingly intractable problems, like the number of foreign government spies working within the US Intelligence Community. The answer we get is usually right around 100, which always seems to surprise them.
Finally, if you are interested in integrating Fermi Problems into your tradecraft, there are lots of good resources available. One of the best has been put together by the Science Olympiad, which actually holds a Fermi Problem competition each year.