How to Write a Mindnumbingly Dogmatic (but Surprisingly Effective) Estimate (Outline)
Here's the formula:
- Good WEP +
- Nuance +
- Due to's +
- Despite's +
- Statement of AC (analytic confidence) =
- Good estimate!
WEPs should first be distinguished from words of certainty. Words of certainty, such as "will" and "won't," typically don't belong in intelligence estimates. These words presume that the analyst has seen the future and can speak with absolute conviction about it. Until the aliens get back with the crystal balls they promised us after Roswell, it's best if analysts avoid words of certainty in their estimates.
Notice I also said "good" WEPs, though. A good WEP is one that effectively communicates a range of probabilities; a bad WEP is one that doesn't. Examples? Sure! Bad WEPs are easy to spot: "possibly," "could," and "might" are all bad WEPs. They communicate ranges of probability so broad that they are useless in decisionmaking. They usually serve only to add uncertainty in the minds of decisionmakers rather than reduce it. You can test this yourself. Construct an estimate using "possible," such as "It is possible that Turkey will invade Syria this year." Then ask people to rank the likelihood of this statement on a scale of 1-100. Ask enough people and you will get everything from 1 to 100. This is a bad WEP.
Good WEPs are generally interpreted by listeners to refer to a bounded range of probabilities. Take the WEP "remote," for example. If I said "There is a remote chance that Turkey will invade Syria this year," we might argue over whether that means there is a 5% chance or a 10% chance, but no one would argue that it means there is a 90% chance of such an invasion.
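The 1-to-100 test above is easy to run on paper, but a quick script makes the contrast vivid. The sketch below is a minimal illustration in Python; the response lists are invented purely for demonstration, not real survey data, and the summary statistics are the simplest ones that show the spread.

```python
# Minimal sketch of the 1-100 ranking test described above.
# The response lists are hypothetical numbers invented purely for illustration.
from statistics import median

def summarize(wep, responses):
    """Show how widely listeners interpret a given WEP (responses on a 1-100 scale)."""
    lo, hi = min(responses), max(responses)
    print(f"{wep!r}: range {lo}-{hi} (spread {hi - lo}), median {median(responses)}")

# Hypothetical rankings for "It is possible that Turkey will invade Syria this year."
possible = [2, 95, 40, 70, 15, 88, 55, 30, 99, 5]

# Hypothetical rankings for "There is a remote chance that Turkey will invade Syria this year."
remote = [3, 8, 12, 5, 10, 7, 15, 4, 9, 6]

summarize("possible", possible)  # bad WEP: answers span nearly the whole scale
summarize("remote", remote)      # good WEP: answers cluster in a narrow band
```

A bad WEP announces itself immediately: the spread covers most of the scale. A good WEP clusters into a band narrow enough to argue about only at the margins.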
The Kesselman List
Before I finish, let me say a word about numbers. It is entirely reasonable, and may in fact be preferable, to use numbers to communicate a range of probabilities rather than words. In some respects this is just another way to make pizza, particularly when compared to using a list where words are explicitly tied to a numerical range of probabilities. Why, then, do I consider it the current best practice to use words? There are four reasons:
- Tradition. This is the way the US National Security Community does it. While we don't ignore theory, the Mercyhurst program is an applied program. It seems to make sense, then, to start here but to teach the alternatives as well. That is what we do.
- Anchoring bias. Numbers have a powerful place in our minds. As soon as you start linking notoriously squishy intelligence estimates to numbers, you run the risk of triggering this bias. Of course, using notoriously squishy words (like "possible") runs the risk of no one really knowing what you mean. Again, a rational middle ground seems to lie in a structured list of words clearly associated with numerical ranges.
- Cost of increasing accuracy vs. the benefit of increasing accuracy. How long would you be willing to listen to two smart analysts argue over whether something had an 81% or an 83% chance of happening? Imagine that the issue under discussion is really important to you. How long? What if it were 79% vs. 83%? 57% vs. 83%? 35% vs. 83%? It probably depends on what "really important" means to you and how much time you have. The truth is, though, that wringing that last little bit of uncertainty out of an issue is what typically costs the most, and it is entirely possible that the cost of doing so vastly exceeds the potential benefit. This is particularly true in intelligence questions, where the margin of error is likely large and, to the extent that the answers depend on the intentions of the actors, fundamentally irreducible.
- Buy-in. Using words, even well-defined words, is what is known as a "coarse grading" system. We are surrounded by these systems. Our traditional A, B, C, D, F grading system used by most US schools is a coarse grading system, as is our use of pass/fail on things like the driver's license test. I have just begun to dig into the literature on coarse grading, but one of the more interesting things I have found is that it seems to encourage buy-in. We may not be able to agree on whether it is 81% or 83%, as in the previous example, but we can both agree it is "highly likely" and move on (see the sketch after this list). This seems particularly important in the context of intelligence as a decision-support activity, where the entire team (not just the analysts) has to take some form of action based on the estimate.
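To make the coarse-grading idea concrete, here is a minimal sketch of a word-to-range lookup. The bands below are illustrative placeholders I chose for the example, not the actual values from the Kesselman List, which ties each WEP to a specific numerical range in Kesselman's research.

```python
# Minimal sketch of a coarse-grading lookup: precise probabilities collapse
# into a small set of well-defined WEPs. The bands are illustrative
# placeholders, not the actual Kesselman List values.
WEP_BANDS = [
    (0.00, 0.15, "remote"),
    (0.15, 0.30, "highly unlikely"),
    (0.30, 0.45, "unlikely"),
    (0.45, 0.55, "about even"),
    (0.55, 0.70, "likely"),
    (0.70, 0.85, "highly likely"),
    (0.85, 1.00, "almost certain"),
]

def to_wep(p: float) -> str:
    """Map a probability in [0, 1] to its coarse estimative band (first match wins)."""
    for lo, hi, word in WEP_BANDS:
        if lo <= p <= hi:
            return word
    raise ValueError(f"probability out of range: {p}")

# 81% and 83% disagree as numbers but land in the same band:
# the buy-in argument above in miniature.
print(to_wep(0.81), to_wep(0.83))  # highly likely highly likely
```

Two analysts who could argue 81% versus 83% all afternoon will both watch those numbers collapse into the same band, which is exactly the buy-in effect described above.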