Part 1 -- Introduction
I want to tell you a story about two weathermen: one good, competent, and diligent; the other bad, stupid, and lazy. Why weathermen? Well, in the first place, they are not intelligence analysts, so I will not have to concern myself with all the meaningless distinctions that might arise if I used a real example. In the second place, they are enough like intelligence analysts that the lessons derived from this thought experiment – sorry, I mean “story” – will remain meaningful in the intelligence domain.
Imagine first the good weatherman, and imagine that he only knows one rule: if it is sunny outside today, then it is likely to be sunny tomorrow. (I have no idea why he only knows one rule. Maybe he just got hired. Maybe he hasn’t finished weatherman school yet. Whatever the reason, this is the only rule he knows.) While the weatherman only knows this one rule, it is a good rule: weathermen who use it have consistently been shown to be right far more often than they are wrong.
His boss comes along and asks him what the weather is going to be like tomorrow. The good weatherman remembers his rule, looks outside and sees sun. He tells the boss, “It is likely to be sunny tomorrow.”
The next day the weather is sunny and the boss is pleased.
Clearly the weatherman was right. The boss then asks the good weatherman what the weather will be like the next day. “I want to take my family on a picnic,” says the boss, “so the weather tomorrow is particularly important to me.” Once again the good weatherman looks outside and sees sun and says, “It is likely to be sunny tomorrow.”
The next day, however, the rain is coming down in sheets. A wet and bedraggled weatherman is sent straight to the boss’ office as soon as he arrives at work. After the boss has told the good weatherman that he was wrong and given him an earful to boot, the good weatherman apologizes but then asks, “What should I have done differently?”
“Learn more rules!” says the boss.
“I will,” says the weatherman, “but what should I have done differently yesterday? I only knew one rule and I applied it correctly. How can you say I was wrong?”
“Because you said it would be sunny and it rained! You were wrong!” says the boss.
“But I had a good rule and I applied it correctly! I was right!” says the weatherman.
Let’s leave them arguing for a minute and think about the bad weatherman.
This guy is awful. He is the kind of guy who sets low standards for himself and consistently fails to achieve them, who has hit rock bottom and started to dig, who is not so much a has-been as a won’t-ever-be (for more of these, see British Performance Evaluations). He only knows one rule but has learned it incorrectly! He thinks that if it is cloudy outside today, it is likely to be sunny tomorrow. Moreover, tests have consistently shown that weathermen who use this rule are far more likely to be wrong than right.
The bad weatherman’s boss asks the same question: “What will the weather be like tomorrow?” The bad weatherman looks outside and sees that it is cloudy and he states (with the certainty that only the truly ignorant can muster), “It is likely to be sunny tomorrow.”
The next day, against the odds, the day is sunny. Was the bad weatherman right? Even if you think he was, over time this weatherman is, of course, likely to be wrong far more often than he is right. Would you evaluate him based solely on his last judgment, or would you look at the history of his estimative judgments?
There are several aspects of these weatherman stories that seem applicable to intelligence. First, as the story of the good weatherman demonstrates, the traditional notion that intelligence is either “right” or “wrong” is meaningless without a broader understanding of the context in which that intelligence was produced.
Second, as the story of the bad weatherman reveals, considering estimative judgments in isolation, without also evaluating the history of estimative judgments, is a mistake. Any model for evaluating intelligence needs to take (at least) these two factors into consideration.
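To make the second point concrete, here is a minimal sketch of one way to evaluate a history of estimative judgments: the Brier score, a standard accuracy measure borrowed from weather forecasting. The story does not prescribe any particular measure, and every number below is invented – in particular, “likely” is read here as a 70% probability, which the story never specifies.

```python
# A minimal sketch: scoring a HISTORY of probabilistic judgments with
# the Brier score rather than grading any single call right or wrong.
# All numbers are invented; "likely" is assumed to mean 70%.

def brier_score(forecast_probs, outcomes):
    """Mean squared error between stated probabilities and outcomes
    (1 = sunny, 0 = rain). 0.0 is perfect; always saying 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

# Both weathermen say "70% likely to be sunny" every day for ten days.
good_forecasts = [0.7] * 10
bad_forecasts = [0.7] * 10

# The good rule pays off about seven days in ten; the bad rule about three.
good_outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]  # sunny 7 of 10 days
bad_outcomes = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]   # sunny 3 of 10 days

print(f"good weatherman: {brier_score(good_forecasts, good_outcomes):.2f}")  # 0.21
print(f"bad weatherman:  {brier_score(bad_forecasts, bad_outcomes):.2f}")    # 0.37

# Note that the bad weatherman was "right" on the last day -- he said
# sunny and it was sunny -- yet the run of judgments tells the real story.
```

The point of the exercise is not this particular formula: any scheme that scores the whole track record, rather than the latest hit or miss, would separate these two men.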
Tomorrow: A Model For Evaluating Intelligence
1 comment:
Delightful series.
I have often asked myself (first as a professional forecaster, and later as an intelligence professional): what does a “60% chance of rain” actually mean?
Will it rain 60% of the day?
Will it rain over 60% of the area at some point in the day?
Or is there a 60% chance of any rain in the area, with a 40% chance that there will be no rain within the area?
Does it have some other meaning not described here?
We wish to know the future, and so we treat the forecast or the intelligence as factual rather than estimative. The real problem isn’t the quality of the intelligence so much as the quality of the decision. The decision-maker wants the forecaster to make the decision FOR him. If the decision is wrong, he wants someone to blame instead of himself.
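For what it is worth, the commenter’s third reading is closest to the standard one: the U.S. National Weather Service defines the probability of precipitation as the forecaster’s confidence that measurable precipitation will occur somewhere in the area, multiplied by the fraction of the area that would receive it if it does. Even so, the ambiguity the commenter points to is real, since very different situations produce the same number. A quick sketch (the specific pairings below are invented):

```python
# Sketch of the U.S. National Weather Service reading of "60% chance of
# rain": PoP = C * A, where C is confidence that measurable rain falls
# somewhere in the forecast area and A is the fraction of the area that
# would get it if it does. The example pairings below are invented.

def pop(confidence, area_fraction):
    """Probability of precipitation at any given point in the area."""
    return confidence * area_fraction

# Very different weather situations sit behind the same 60%:
print(f"{pop(1.0, 0.60):.2f}")  # 0.60 -- certain it rains, over only 60% of the area
print(f"{pop(0.6, 1.00):.2f}")  # 0.60 -- only 60% confident, whole area would get wet
print(f"{pop(0.8, 0.75):.2f}")  # 0.60 -- something in between
```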