Titled "Are Talking Heads Blowing Hot Air: An Analysis Of The Accuracy Of Forecasts In The Political Media," the study, complete with detailed annexes and statistical analysis, assessed the accuracy of political forecasts made in 2008 by 26 media pundits.
The students used something they called a Prognosticator Value Score (PVS) to rank each of the 26. The PVS factors in how many predictions were made, how many were right, how many were wrong, and on how many the prognosticators hedged.

The worst? Cal Thomas, with a PVS of -8.7 (You read that right. Negative eight point seven...).
The students were able to confirm much of what Tetlock has already told us: many things simply do not matter. Age, race, gender, and employment had no effect on forecasting accuracy.
The students did find that liberal, non-lawyer pundits tended to be better forecasters, but the overall message of their study is that the pundits they examined were, in aggregate, no better than a coin flip.
This is more interesting than it sounds, as one of Tetlock's few negative correlations was between a forecaster's accuracy and his or her exposure to the press: the more exposure, Tetlock found, the more likely the forecaster was to be incorrect. Here, there may be evidence of some sort of internal "correction" made by public pundits, i.e., people who make a living, at least in part, by making forecasts in the press.
I have a few methodological quibbles with the study. The number of predictions behind each score, for example, did not factor into the PVS. Kathleen Parker made only 6 testable predictions, got 4 right, and earned a PVS of 6.7. Nancy Pelosi, on the other hand, made 27 testable predictions, got 20 right, but had a PVS of only 6.2.
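To make the sample-size quibble concrete, here is a sketch of one standard way to fold the number of predictions into such a ranking. This is my own illustration, not the students' method (their exact PVS formula is not reproduced here): the lower bound of the Wilson score interval rewards accuracy while penalizing small samples, so a record built on more predictions earns more credit for the same hit rate.

```python
import math

def wilson_lower(successes: int, trials: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a binomial proportion.

    Unlike raw accuracy (successes / trials), this bound shrinks toward
    zero when the sample is small, penalizing thin track records.
    """
    if trials == 0:
        return 0.0
    p = successes / trials
    denom = 1 + z * z / trials
    center = p + z * z / (2 * trials)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * trials)) / trials)
    return (center - margin) / denom

# Kathleen Parker: 4 of 6 correct  -> bound ~ 0.300
# Nancy Pelosi:   20 of 27 correct -> bound ~ 0.553
print(round(wilson_lower(4, 6), 3))
print(round(wilson_lower(20, 27), 3))
```

Under a sample-size-aware measure like this, Pelosi's longer and slightly more accurate record clearly outranks Parker's, which is roughly the intuition behind the quibble.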
Despite these minor quibbles, the study is a bold attempt to hold these commentators accountable for their forecasts, and the students deserve praise for their obvious hard work and intriguing results.