Part 1 -- Introduction
Part 2 -- To Kent And Beyond
Part 3 -- The Exercise And Its Learning Objectives
Given the withering criticism offered by Kent and Schrage, and the wide range of other studies regarding the appropriate interpretation of Words of Estimative Probability (WEPs), it is fairly easy to get intelligence studies students to see the problems with using "bad" WEPs in their estimative statements. Bad WEPs, which include words such as "could", "may", "might" and "possible", convey such a broad range of probabilities that, at best, they do little to reduce a decisionmaker's uncertainty concerning an issue and, at worst, they create the sense, in the decisionmaker's mind, that the analyst is simply trying to cover his or her backside in the event of a failed estimative conclusion.
Student analysts, then, are generally happy to see that the National Intelligence Council (NIC) has "solved" this problem with its scale of appropriate WEPs (the scale is available on page five of the latest Iran NIE and was discussed earlier in this series). The scale not only provides adequate gradations of probability (translated into words, of course) but also avoids the use of both numbers and bad WEPs, each of which, for different reasons, the NIC appears to want to keep out of these public documents.
While there are many possible ways to explore with students the data generated by the exercise described in Part 3, my primary teaching point is to disabuse entry-level analysts of the idea that the problems of communicating estimative conclusions to decisionmakers have been, in any way, "solved". Rather, I want my students to come away with the idea that using WEPs in a more or less formal way, while currently the best practice, is a system that can still be improved upon, and that it is an important question of intelligence theory deserving additional research and study.
I generally start the review of the results of the exercise by exploring how "rational" (in a classical economic sense) the students were in assigning point values and ranges to the various WEPs. I point out that the words are clearly listed in increasing order of likelihood and that, absent other information, it makes sense to assign levels of probability to the words at equal intervals. There are eight words and 100 possible percentage points, so a wholly "rational" person would place each word about 12 percentage points apart. When you ask students to look at the differences between the point values of each word, however, they will typically see nothing that comes even close to this rational approach. The vast majority of students will have assigned probabilities intuitively, with little regard for the mathematical difference between one word and the next.
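For concreteness, here is a minimal sketch, in Python, of what that wholly "rational" assignment might look like. The eight-word list is my assumption (an NIC-style scale with "probably" and "likely" kept separate, as in the exercise) and the numbers are illustrative, not prescriptive.

    # A sketch of the "wholly rational" assignment: eight WEPs spread evenly
    # across the 0-100% scale, each with an equal-width range and a point
    # value at the midpoint of that range.
    weps = ["remote", "very unlikely", "unlikely", "even chance",
            "probably", "likely", "very likely", "almost certainly"]

    width = 100 / len(weps)            # 12.5 percentage points per word
    for i, wep in enumerate(weps):
        low = i * width                # bottom of this word's range
        high = (i + 1) * width         # top of this word's range
        point = (low + high) / 2       # point value: midpoint of the range
        print(f"{wep:>16}: {point:5.2f}%  (range {low:5.1f}-{high:5.1f}%)")

Run this way, the point values land roughly 12 points apart (6.25%, 18.75%, and so on up to 93.75%), which is the pattern almost no student actually produces.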
The results are even worse when you ask students to look at the range of values for each word. Again, a rational person would have assigned equal ranges to each of the words, but students typically do not. A good exercise at this point is to pick a word, find out who in the class gave it the lowest score and who gave it the highest, and then ask those students to justify their decisions. The spread is typically quite broad, and the justifications for selecting one number over another are typically quite vague.
Inevitably, there will be a handful of students in each class who have, in fact, done the math and calculated both the point values and the ranges accordingly. The exercise offers two places to highlight the problems with this approach. First, the exercise separates the words "probably" and "likely". That is not the case with the NIC's chart, which treats the two words as synonymous. While it is quite surprising for the NIC to treat the words this way, since much of the literature does not indicate that people actually see them as synonymous, the net effect in this exercise is to create a learning opportunity. It is rare for a student to have taken into account, in his or her mathematical calculations, the idea that two words may be partly or largely synonymous.
Likewise, there is an even better chance for learning in examining the results for the "even chance" WEP. "Even chance" would appear to mean exactly what it says -- an even chance, 50-50. Some students will inevitably interpret it this literally, assigning the WEP a point probability of 50% and marking both its high and low scores at 50% as well. Other students will read the phrase more loosely and, while typically giving it a point value of 50%, will also include a range of values around it, such that "even chance" could mean anything from 40% to 60%! Of course, there is no right answer here; both sides can make valid arguments, and sparking this discussion is the ultimate point of this part of the exercise.
The relative firmness of "even chance", coupled with the synonymity problem described earlier, also lends itself to a further examination of the mathematical approach. Few of the mathematicians in the class will have noticed that there are three WEPs below "even chance" and four above it, an uneven split centered on the 50% (more or less) probability ascribed to that phrase. A wholly logical approach would therefore space the WEPs below "even chance" differently from those above it, in both their point values and their ranges.
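Again purely as an illustration, here is a hypothetical sketch of what an "even chance"-anchored assignment might look like; it assumes the same eight-word list as above and simply spreads the remaining words evenly on either side of a fixed 50%.

    # Hypothetical sketch: pin "even chance" at exactly 50% and spread the
    # remaining WEPs evenly on either side. With three words below and four
    # above, the spacing below 50% necessarily differs from the spacing above.
    below = ["remote", "very unlikely", "unlikely"]
    above = ["probably", "likely", "very likely", "almost certainly"]

    step_below = 50 / (len(below) + 1)   # 12.5-point steps below even chance
    step_above = 50 / (len(above) + 1)   # 10.0-point steps above even chance

    points = {w: (i + 1) * step_below for i, w in enumerate(below)}
    points["even chance"] = 50.0
    points.update({w: 50 + (i + 1) * step_above for i, w in enumerate(above)})

    for wep, value in points.items():
        print(f"{wep:>16}: {value:.1f}%")

The point, of course, is not these particular numbers but the fact that even a fully consistent scheme cannot keep the spacing symmetric once "even chance" is fixed at 50%.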
Students are typically confused by the end of this exercise. While they do (or should) fully understand the problems with waffle words such as "could", "may", "might" and "possible", and while they were willing to applaud the NIC's efforts at standardization, they now see these "approved" words as far more squishy than they had previously thought. Good. This is exactly the time to reinforce the message laid out at the beginning of this post and to bring students back full circle. As analysts, they have an obligation to communicate the results of their intelligence analysis to decisionmakers as effectively as possible. What this exercise, and the learning that went on before it, demonstrates is that there is not yet a perfect way to do this; there is only a best practice that tries to balance the competing concerns. To my mind, it is the degree to which students come to understand not only the best practice but also these concerns that marks the difference between a well-trained analyst and a well-educated one.
Tomorrow -- A Surprise Ending