The SIRs actually measure a number of variables, and it is difficult to identify which of them might be most closely associated with the underlying pedagogy of a course. Instead, I chose to look at just one of the SIR-generated ratings, the Overall Evaluation of the course. This is clearly designed to be an overall indicator of effectiveness. A large change here (in either the positive or the negative direction) would seem to be a clear indication of success or failure from the student's perspective.
Furthermore, my assumption at the beginning of the course was that there would be a large change in one direction or the other. I assumed that students would either love this approach or hate it and that this would be reflected in the SIR results. The chart below, which contains the weighted average of the Overall Evaluation score (1-5 with 5 being best) for all classes taught in a particular year, indicates that I was wrong:
Clearly, while students did not love it, they did not hate it either. The drop in score from recent years could reflect a genuine reduction in satisfaction with the class, or it could simply reflect the fact that the course changed from a fairly well-oiled series of lectures and exercises to something with the inevitable squeaks and bumps of a new approach. Feedback from the student surveys given after the course was over, while extremely helpful in providing suggestions for improving the class, gave no real insight into the causes of this modest but obvious drop in student satisfaction.
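As a side note on methodology, the "weighted average" in the chart above simply weights each class section's mean Overall Evaluation by its enrollment before combining sections taught in the same year. A minimal sketch of that calculation is below; the numbers and column names are placeholders for illustration only, not the actual SIR data.

    import pandas as pd

    # Hypothetical per-class records (placeholder values, NOT the actual SIR data):
    # one row per class section, with the year taught, its enrollment, and the
    # mean Overall Evaluation score (1-5) reported on the SIR for that section.
    classes = pd.DataFrame({
        "year":         [2005, 2005, 2006, 2006, 2007],
        "enrollment":   [18,   22,   20,   25,   19],
        "overall_eval": [4.3,  4.5,  4.4,  4.6,  4.1],
    })

    # Weight each section's mean score by its enrollment, then combine within each year.
    classes["weighted"] = classes["enrollment"] * classes["overall_eval"]
    by_year = classes.groupby("year")[["weighted", "enrollment"]].sum()
    by_year["weighted_avg_overall_eval"] = by_year["weighted"] / by_year["enrollment"]

    print(by_year["weighted_avg_overall_eval"].round(2))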
Comparing this chart with the previous one concerning the quality of the final product yields an even more interesting picture:
This chart seems to be saying that the more a student thinks they are getting out of a class (as represented in their Overall Evaluation of the course), the better their final strategic intelligence project is likely to be. This holds true, it seems, as long as strategic intelligence is taught through more or less traditional methods of lecture, discussion and classroom exercises. Once the underlying structure of the course is centered on games, however, the students are less satisfied but actually perform better where it matters most – on real-life projects for real-world decisionmakers.
Taken at face value (and ignoring, for the moment, the possibility that this is all a statistical anomaly), a possible explanation is that the students don’t realize what they are getting “for free” from the games-based approach. Other researchers have noted that information that had to be actively taught, assessed, re-taught and re-assessed in other teaching methods is passively (and painlessly) acquired in a games-based environment.
I noted this effect myself in my thesis research into modeling and simulating transitions from authoritarian rule. My goal in that study was to develop a predictive model, not to teach students about the target country. One of my ancillary results, however, was that students routinely claimed that they learned more about the target country in three hours of playing the game than in a semester's worth of study.
This “knowledge for free” aspect of the games-based model was nowhere more obvious than in the fairly detailed understanding of the geography of the western part of the Soviet Union acquired by the students in all three classes while playing the boardgame, Defiant Russia. While this information was available in the form of the game map, learning the geography was not explicitly part of the instructions. Students quickly realized, however, that they had to understand the terrain in order to maximize their results within the game. Furthermore, that same understanding of the geography of the western Soviet Union was critical to the formulation of strategic options.
Next:
What else did you learn?