At the top end of the analytic art sits the estimate. While it is often useful to describe, explain, classify or even discuss a topic, what, as Sun Tzu would say, "enables the wise sovereign and the good general to strike and conquer, and achieve things beyond the reach of ordinary men, is foreknowledge." Knowing what is likely (or unlikely) to happen is much more useful when creating a plan than only knowing what is happening.
Estimates are like pizza, though. There are many different ways to make them and many of those ways are good. However, with our young analysts, just starting out in the Mercyhurst program, we try to teach them one good, solid, never fail way to write an estimate. You can sort of think of it as the pepperoni pizza of estimates.
Here's the formula:
- Good WEP +
- Nuance +
- Due to's +
- Despite's +
- Statement of AC (analytic confidence) =
- Good estimate!
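For readers who like to see structure as code, the formula above can be sketched as a simple template. This is purely illustrative: the function and parameter names are mine, and a real estimate is crafted prose, not string concatenation, but the required parts are the same.

```python
def build_estimate(wep, nuance, due_tos, despites, confidence):
    """Assemble the five components of the formula into one estimate.

    Illustrative sketch only -- the point is the checklist of parts,
    not the string-gluing.
    """
    despite_clause = f"despite {', '.join(despites)}, " if despites else ""
    due_to_clause = f"due to {', '.join(due_tos)}, " if due_tos else ""
    sentence = (despite_clause + due_to_clause +
                f"the GDP of Yougaria is {wep} to grow {nuance}.")
    sentence = sentence[0].upper() + sentence[1:]  # capitalize first word
    return sentence + f" Analytic confidence in this estimate is {confidence}."

print(build_estimate(
    wep="likely",
    nuance="by 3-4% over the next 12 months",
    due_tos=["a substantial increase in trade"],
    despites=["recent increases in unemployment"],
    confidence="moderate",
))
```

Each keyword argument corresponds to one line of the formula; leaving `due_tos` or `despites` empty simply drops that clause, which mirrors how an analyst might trim the sentence when a clause is not supported by the evidence.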
Good (Best!) WEPs
Let's start with what makes a good Word of Estimative Probability - a WEP. Note: Linguistic experts call these Verbal Probability Expressions and if you want to dive into the literature - and there's a lot - you should use this phrase to search for it.
WEPs should first be distinguished from words of certainty. Words of certainty, such as "will" and "won't," typically don't belong in intelligence estimates. These words presume that the analyst has seen the future and can speak with absolute conviction about it. Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.
Notice I also said "good" WEPs, though. A good WEP is one that effectively communicates a range of probabilities and a bad WEP is one that doesn't. Examples? Sure! Bad WEPs are easy to spot: "Possibly", "could", and "might" are all bad WEPs. They communicate ranges of probability so broad that they are useless in decisionmaking. They usually only serve to add uncertainty rather than reduce it in the minds of decisionmakers. You can test this yourself. Construct an estimate using "possible" such as "It is possible that Turkey will invade Iraq this year." Then ask people to rank the likelihood of this statement on a scale of 1-100. Ask enough people and you will get everything from 1 TO 100. This is a bad WEP.
Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities. Take the WEP "remote" for example. If I said "There is a remote chance that Turkey will invade Iraq this year" we might argue if that means there is a 5% chance or a 10% chance but no one would argue that this means that there is a 90% chance of such an invasion.
The Kesselman List
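A structured list like the Kesselman List works, in effect, as a lookup table tying each word to an explicit numeric range. The sketch below shows the idea; the ranges here are made up for illustration and are NOT the actual Kesselman List values, so consult the real list before using it.

```python
# Hypothetical WEP-to-probability mapping. The ranges below are
# illustrative placeholders, NOT the actual Kesselman List values.
WEP_RANGES = {
    "remote":         (0.01, 0.10),
    "unlikely":       (0.10, 0.40),
    "even chance":    (0.40, 0.60),
    "likely":         (0.60, 0.90),
    "almost certain": (0.90, 0.99),
}

def interpret(wep):
    """Return the bounded probability range a given WEP communicates."""
    lo, hi = WEP_RANGES[wep]
    return f"'{wep}' communicates roughly a {lo:.0%}-{hi:.0%} chance"

print(interpret("remote"))  # 'remote' communicates roughly a 1%-10% chance
```

The design point is the bounding itself: a good WEP maps to a range narrow enough to support a decision, while a bad WEP like "possible" would need a range so wide (roughly 1%-100%) that the lookup tells the reader nothing.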
Before I finish, let me say a word about numbers. It is entirely reasonable, and may in fact be preferable, to use numbers to communicate a range of probabilities rather than words. In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities. Why, then, do I consider it the current best practice to use words? There are four reasons:
- Tradition. This is the way the US National Security Community does it. While we don't ignore theory, the Mercyhurst program is an applied program. It seems to make sense, then, to start here but to teach the alternatives as well. That is what we do.
- Anchoring bias. Numbers have a powerful place in our minds. As soon as you start linking notoriously squishy intelligence estimates to numbers you run the risk of triggering this bias. Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean. Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
- Cost of increasing accuracy vs the benefit of increasing accuracy. How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening? Imagine that the issue under discussion is really important to you. How long? What if it were 79% vs 83%? 57% vs 83%? 35% vs 83%? It probably depends on what "really important" means to you and how much time you have. The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most and it is entirely possible that the cost of doing so vastly exceeds the potential benefit. This is particularly true in intelligence questions where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors, fundamentally irreducible.
- Buy-in. Using words, even well defined words, is what is known as a "coarse grading" system. We are surrounded by these systems. The traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test. I have just begun to dig into the literature on coarse grading but one of the more interesting things I have found is that it seems to encourage buy-in. We may not be able to agree on whether it is 81% or 83% as in the previous example, but we can both agree it is "highly likely" and move on. This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.
Nuance
A good WEP is only the start. Compare these two estimates:
- The GDP of Yougaria is likely to grow.
- The GDP of Yougaria is likely to grow by 3-4% over the next 12 months.
The second is far more useful to a decisionmaker: it communicates not just the direction of change but its size and time frame. That is nuance, and the questions below address the most common objections to adding it.
- What if I don't have the evidence to support a more nuanced estimate? Look at the second estimate above. What if you had information to support a growing economy but not enough information (or too much uncertainty in the information you did have) to make an estimate regarding the size and time frame for that growth? I get it. You wouldn't feel comfortable putting numbers and dates to this growth. What would you feel comfortable with? Would you be more comfortable with an adverb ("grow moderately")? Would you be more comfortable with a date range ("over the next 6 to 18 months")? Is there a way to add more nuance in any form with which you can still be comfortable as an analyst? The cardinal rule here is to not add anything that you can't support with facts and analysis - that you are not willing to personally stand behind. If, in the end, all you are comfortable with is "The economy is likely to grow" then say that. I think, however, if you ponder it for a while, you may be able to come up with another formulation that addresses the decisionmaker's need for nuance and your need to be comfortable with your analysis.
- What if the requirement does not demand a nuanced estimate? What if all the decisionmaker needed to know was whether the economy of Yougaria was likely to grow? He/She doesn't need to know any more to make his/her decision. In fact, spending time and effort to add nuance would actually be counterproductive. In this case, there is no need to add nuance. Answer the question and move on. That said, my experience suggests that this condition is rather more rare than not. Even when DMs say they just need a "simple" answer, they often actually need something, well, more nuanced. Whether this is the case or not is something that should be worked out in the requirements process.
- What if all this nuance makes my estimate sound clunky? So, yeah. An estimate with six clauses in it is going to be technically accurate and very nuanced but sound as clunky and awkward as a sentence can sound. Well-written estimates fall at the intersection of good estimative practice and good grammar. You can't sacrifice either, which is why they can be very hard to craft. The solution is, of course, to either refine your single estimative sentence or to break up the estimative sentence into several sentences. In the next section, where I talk about "due to's" and "despite's", I will give you a little analytic sleight of hand that can help you with this problem.
Due to's And Despite's
Consider again the estimate from above: "The GDP of Yougaria is likely to grow 3-4% over the next 12 months." Why? Why do you, the analyst, think this is the case?
Typically, there are a few key facts and some accompanying logic that are acting as drivers of these kinds of estimates. It may have something to do with trade, for example, or with new economic opportunities opening up in the country. It may be more about where the country is in the business cycle than anything else. For whatever reason, these are the critical facts and logic that underpin your entire estimate. If you are wrong about these drivers, because of incomplete collection, poor analysis or deliberate deception, your estimate is likely wrong as well.
I call these factors "due to's" because you can easily see them as a "due to" clause added to the estimate:
"Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months."

If "due to's" are driving your faith in your estimate, "despite" clauses are the ones undermining it. In any non-trivial exercise in estimation there are likely many facts which undermine your estimate. In the example above, yes, there was an uptick in trade and the oil reserves are great but what about the slight increase in unemployment last month? Or the reduction in consumer confidence?
Much more than mere procatalepsis (gosh, I love that word...), the true intent behind the "despite" clause is to be intellectually honest with the decisionmaker you are supporting as an intelligence professional. In short, you are saying two things to that DM. First, "I recognize that not all of the facts available support my estimate" and, second, "despite this, I still believe my estimate is accurate."
How might that play itself out in our example?
Despite recent increases in unemployment, the GDP of Yougaria is likely to grow 3-4% over the next 12 months. Increases in trade have been strong and the recently discovered oil reserves in the northern part of the country will likely drive significant growth over the next year.

Or
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months. While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary.

These are just examples, of course, and the actual formulation depends on the facts at hand. The goal remains the same in all cases - here's what I think, here's why, here's why not and here's why the "why nots" don't matter.
Analytic Confidence
If the estimate is what the analyst thinks is likely or unlikely to happen, then analytic confidence can most easily be thought of as the odds that the analyst is wrong. Imagine two analysts in two different parts of the world have been asked to assess Yougaria's economy for the next year. One is a beginner with no real experience or contacts in Yougaria. His sources are weak and he is under considerable time pressure to produce. The other analyst, operating wholly independently of the first, is a trained economist with many years' experience with Yougaria. His sources are excellent and he has a proven track record of estimating Yougaria's economic performance.
Now, imagine that both of them just so happen to come to the exact same estimative conclusion - Yougaria's GDP is likely to grow 3-4% over the next 12 months. Both report their estimative conclusions to their respective decisionmakers.
It is not too difficult to see that the decisionmaker of the first analyst might be justifiably hesitant to commit significant resources based on this estimate of Yougaria's economic performance. Absent additional analysis, it is quite obvious that there are a number of good reasons why the analyst in this case might be wrong.
The decisionmaker supported by the second analyst is in exactly the opposite position. Here there are very good reasons to trust the analyst's estimate and to commit to courses of action that are premised on its accuracy. In the first case we could say that the analytic confidence is low while in the second case we could say it is high.
What are the factors that suggest whether an analyst is more likely to be right or wrong? Some of the earliest research on this was done by a Mercyhurst alum, Josh Peterson. In his thesis, he went out and looked for research-based reasons why a particular analyst is more likely to be right or wrong. He managed to identify seven:
- How good are your sources?
- How well do your independent sources corroborate each other?
- Are you a subject matter expert? (This is less important than you might think, however.)
- Did you collaborate with other analysts and exactly how did you do that? (Some methods are counterproductive.)
- Did you structure your thinking in a way proven to improve your forecasting accuracy? (A number of commonly taught techniques don't work particularly well BTW.)
- How complex did you perceive the task to be?
- How much time pressure were you under?
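One way to make these seven factors concrete is a rough scoring rubric. To be clear, this is my own illustrative sketch, not Peterson's method: the 0-2 scale, the weights, and the bucket thresholds are all arbitrary assumptions.

```python
# Illustrative only: score each of the seven factors from 0 (worst)
# to 2 (best), then bucket the total into a coarse confidence level.
# The factor names paraphrase the list above; the thresholds are mine.
FACTORS = [
    "source reliability",
    "source corroboration",
    "subject-matter expertise",
    "analyst collaboration",
    "structured methods used",
    "task complexity (inverted)",
    "time pressure (inverted)",
]

def analytic_confidence(scores):
    """Map seven per-factor scores (0-2 each) to low/moderate/high."""
    assert len(scores) == len(FACTORS)
    total = sum(scores)  # ranges from 0 to 14
    if total <= 5:
        return "low"
    if total <= 9:
        return "moderate"
    return "high"

print(analytic_confidence([1, 2, 1, 2, 1, 1, 1]))  # moderate (total = 9)
```

Note that this deliberately uses coarse grading: as with WEPs, arguing over whether confidence is a 9 or a 10 out of 14 usually costs more than it is worth, so the output collapses to three buckets.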
Putting it all together, a complete estimate with a statement of analytic confidence might look like this:
Due to a substantial increase in trade and the discovery of significant oil reserves in the northern part of the country, the GDP of Yougaria will likely increase 3-4% over the next 12 months. While unemployment recently ticked upward, this is likely due to seasonal factors and is only temporary.
Analytic confidence in this estimate is moderate. The analyst had adequate time and the task was not particularly complex. However, the reliability of the sources available on this topic was average, with no high quality sources available for the estimate. The sources available did tend to corroborate each other, however, and analyst collaboration was very strong.

Final Thoughts
This is not the only way to write an effective estimate. There are other formulations that likely offer equal or even greater clarity. There is clearly a need for additional research in virtually all of the elements outlined here. There is also room for more creative solutions that convey the degree of uncertainty with more precision, encourage analyst buy-in, and communicate all of that more effectively to the decisionmakers supported.
The overly dogmatic "formula" discussed here is, however, a place to start. Particularly useful with entry-level analysts who may be unused to the rigor necessary in intelligence analysis, this approach helps them create "good enough" analysis in a relatively short time while providing a sound basis for more advanced formulations.