
Friday, March 23, 2012

Part 13 - The Whole Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture 
Part 12 -- The New Intelligence Process:  The Second Picture 



In the end, whether you accept this new model of the intelligence process or not, it is clear that the hoary image of the intelligence cycle needs to be put to rest.  Whether you would do that with full honors or, as I advocate, with the use of explosives, is irrelevant.  The cycle, as should be clear by now, needs to go.

To summarize, the cycle fails on at least three counts:  we cannot define what it is and what it isn't, it does not match the way intelligence actually works in the 21st century, and it does not help us explain our processes to the decisionmakers we support.  Efforts to fix these flaws have not worked and, furthermore, all of this is widely recognized by those who have studied the role and impact of the cycle.

In addition, the community of intelligence professionals (and I include academics who study intelligence in this group) will have to be the ones to lay the cycle to rest.  Not only does no one else care, but also the community of intelligence professionals has, as the WMD report noted, "an almost perfect record of resisting external recommendations." 

Yes, the interregnum will be difficult.  The decisionmakers we support, the professionals with whom we work and the students we teach will all ask -- and deserve -- good answers.  These answers will come slowly at first.  In fact, at the outset, we may only be able to "teach the controversy", as it were.

Hopefully, over time, the need for a new vision of the intelligence process will drive intellectual curiosity and, through the iterative process of creation and destruction, something more robust will emerge:  an improved model that will stand the tests of the next 60 years.   While I have clearly already placed my bets in this regard, I will be happy if the community of intelligence professionals merely recognizes the need to move beyond its historical constraints, accepts this siren's call for what it is, plugs its ears and sails off in a new direction, any direction.

Because anything would be better than continuing to pretend that the world has not really changed since the 1940s.   Anything would be better than continuing to spend countless wasted hours explaining and attempting to justify something that should have been retired long ago.  Anything, in short, would be better than continuing to lie to ourselves.

Wednesday, March 21, 2012

Part 12 -- The New Intelligence Process: The Second Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture


(Note:  I started this series of posts many months ago with the intent of completing it in short order.  Life, as it so often does, got in the way...  If you are new to the series or you have forgotten what the excitement was all about, I recommend beginning at the beginning.  For the rest of you, thank you for your patience!)


At the highest level, intelligence clearly supports the decisionmaking process.  Understanding this is a first step to understanding what drives intelligence requirements and what defines good intelligence products.  This is the message of the first picture.

But what about the details?  Broad context is fine as far as it goes, but how should the modern intelligence professional think about the process of getting intelligence done?  The second picture is designed to answer these questions.
The Second Picture

The single most important thing to notice about this image is that it imagines intelligence as a parallel rather than a sequential process.  In this image of the process, there are four broad themes, or sub-processes, moving across time from a nebulous start to a fuzzy finish, with each theme rising to a high point of emphasis at a different point in the process.  The image is also intended to convey that each theme constantly interacts with the other three, influencing them and being influenced by them at every point in time.

Let me anticipate an initial objection to this picture -- that the intelligence process has a "start" and a "finish".  The intelligence function, to be sure, is an ongoing one and this was one of the implied lessons of the first picture.  Having made that point there, here I think it is important to focus on how intelligence products are actually generated.  In this respect, clearly, there is a point at which a question (an intelligence requirement) is asked.  It may be indistinct, poorly formed or otherwise unclear, but the focus of an intelligence effort does not exist in any meaningful way until there is a question that is, in some way, relevant to the decisionmaking process the intelligence unit supports.

Likewise, there is a finish.  It may take place in an elevator or in a formal brief, in a quick email or in a 50-page professionally printed and bound document, but answering those questions, i.e. the dissemination of the intelligence product in whatever form, signifies the end of the process.  Yes, this process then begins immediately anew with new questions, and yes, there are always multiple questions being asked and answered simultaneously, but neither observation invalidates the general model.

What of the sub-processes, though?  What are they and how do they relate to each other?  The four are mental modeling, collection of relevant information, analysis of that information and production (i.e. how the intelligence will be communicated to the decisionmakers).
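For readers who prefer a more concrete rendering, here is a minimal sketch of the idea behind the second picture:  all four sub-processes are active at every point, but the share of attention each receives shifts over the life of a project.  The phase names and weights below are purely hypothetical illustrations of the shape of the model, not measurements of any real project.

```python
# Illustrative sketch only: four sub-processes running in parallel, each with
# an emphasis that rises and falls over time.  The phases and weights are
# hypothetical; they simply show that the peaks occur at different times and
# that no sub-process ever drops to zero.

PHASE_EMPHASIS = {
    # share of effort in each phase of a project (each row sums to 1.0)
    "early":        {"modeling": 0.50, "collection": 0.30, "analysis": 0.15, "production": 0.05},
    "early middle": {"modeling": 0.15, "collection": 0.45, "analysis": 0.30, "production": 0.10},
    "late middle":  {"modeling": 0.10, "collection": 0.25, "analysis": 0.50, "production": 0.15},
    "end":          {"modeling": 0.05, "collection": 0.10, "analysis": 0.30, "production": 0.55},
}

def dominant_subprocess(phase: str) -> str:
    """Return the sub-process receiving the most emphasis in a given phase."""
    weights = PHASE_EMPHASIS[phase]
    return max(weights, key=weights.get)

if __name__ == "__main__":
    for phase in ("early", "early middle", "late middle", "end"):
        print(phase, "->", dominant_subprocess(phase))
```

Run as written, the dominant sub-process moves from modeling to collection to analysis to production, which is exactly the sequence of emphasis described in the sections that follow.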

Mental Modeling


Until intelligence becomes a process where machines exclusively speak only to other machines, the mental models carried around by intelligence professionals and the decisionmakers they support will be an inseparable part of the intelligence process.  While most intelligence professionals readily acknowledge the strengths and weaknesses of human cognition, one of the most important qualities, in my mind, of this model is that it embeds these strengths and weaknesses directly into the process and acknowledges the influence of the human condition on intelligence.

These mental models typically contain at least two kinds of information:  information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, a relatively high level of general knowledge about the world has been demonstrated to significantly improve forecasting accuracy across domains, even highly specialized ones (counter-intuitively, there is good evidence to suggest that a high degree of specialized knowledge, even within the domain under investigation, does not add significantly to forecasting accuracy).

The flip side of this coin is psychological bias, which has a way of leading analysts astray without them even being aware of it.  An extensive overview of these topics is beyond the scope of this post but it is safe to say that, whether implicit or explicit, these models, containing what we know, what we think we need to know and how our minds will process all this information, emerge as the intelligence professional thinks about how best to answer the question.   

Typically, at the outset of the intelligence process, it is this modeling function that receives the most emphasis.  Figuring out how to think about the problem, understanding what kind of information needs to be collected and identifying key assumptions in both the questions and the model are all necessary to some degree before the other functions can begin in earnest.  This is particularly true of a new or especially complex requirement.  Furthermore, this modeling function is often informal or even implicit.  It is rare, in current practice, to see the mental model on which collection is planned and analysis conducted made explicit.  This is unfortunate, since making the model explicit has proven, if done properly, to accelerate the other sub-processes, limit confusion within a team and produce more accurate forecasts.

Modeling should go on throughout the entire intelligence process, however.  As new information comes in or analysis gets produced, the model may well grow, shrink or morph as the concepts and the relationships between those concepts become more clear.  At some point (typically early) in the intelligence process, however, the emphasis shifts away from modeling and towards collecting, analyzing and producing.  While mental modeling doesn’t become unimportant, it does begin to lose importance as less time is devoted to modeling and more to the other three functions. 

Collection
 
Typically, the next sub-process to take precedence is collection.  Again, as with modeling, collection begins almost as soon as a rudimentary requirement forms in the mind of the intelligence professional.  People naturally begin to draw on their own memories and, if the question is complicated enough, begin to look for additional information to answer it.  For more complex questions, where the information needs are clearly greater, the intelligence professional may even draw up a collection plan and task others to collect the information needed to address the requirement.

Collection, like modeling, never stops.  Intelligence professionals will continue to collect information relevant to a particular requirement right up to the day the final product is published.  In fact, collection on a particularly difficult problem (i.e. almost all of them) will often continue after publication.  Decisionmakers and analysts alike want to know whether the key assumptions were correct and how accurate the final product was, and both understand the need to continue tracking particularly important requirements over time.

All that said, collection does tend to lose importance relative to the other functions over time.  Economists call this diminishing returns:  considered across the entire spectrum of activity, from no knowledge about a subject to the current level of knowledge about it, collection efforts typically add less and less genuinely new information over time.  Again, this is not to say that collection becomes unimportant; it is simply a reflection of the fact that other processes tend to increase in importance relative to collection at some point in the process.

Analysis

The next sub-process to take precedence is analysis.  As with both modeling and collection, analysis begins almost immediately.  Tentative answers leap to mind and, in simple cases or where time is a severe constraint, these initial responses may have to do.  Analysis doesn’t really move to the forefront, however, until the requirement is understood and enough collection has taken place for the analyst to sense that adequate information exists to begin to go beyond tentative analyses and take a crack at answering the overall question or questions.

Analysis is where the raw material of intelligence, information, gets turned into products that address the decisionmaker’s requirements.  It is also the task most fraught with difficulties.  From the type of information used (typically unstructured) to the methods used to analyze this information to the form of the final product, analysts face enormous practical and psychological difficulties.  While the goal is clear – reduce the decisionmaker’s level of uncertainty – the best ways to get there are often unclear or rely on untested or poorly tested methods. 

Production

The final sub-process is production (which, for our purposes here, also includes dissemination).  As with all the other functions, it, too, begins on day one.  It is clearly, however, the least important function at the outset of the intelligence process.  Still, intelligence professionals do give some thought (and experienced professionals have learned to give more than a little thought) to the form and nature of the final product at the beginning of the process.

Requirements typically come with an implied or explicit “deliverable” associated with them.  Is the answer to the intelligence requirement, for example, to be in the form of a briefing or is it to be a written report?  Knowing this at the outset helps the intelligence professionals tasked with answering the requirement to plan and to identify items along the way that will make the production of the final product easier.  For example, knowing that the final product is to be a briefing gives the intelligence professionals associated with the project time to identify relevant graphics during the project rather than going back and finding such graphics at the last minute.  Likewise, if the final product is to be a written document, the time necessary to write and edit such a product might be substantial, and this, in turn, would need to be factored into the planning process.

Production is an incredibly important but often under-appreciated function within the intelligence process.  If intelligence products are not accessible, i.e. packaged with the decisionmaker in mind, then they are unlikely to be read or used.  Under such circumstances, all of the hard work done by intelligence professionals up to this point is wasted.  On the other hand, there is a fine line between making a document or other type of intelligence report accessible and selling a particular position or way of thinking about a problem.  Intelligence professionals have to steer clear of those production methods and “tricks” that can come across as advertising or advocacy.  Production values should not compromise the goal of objectivity.

Likewise, some intelligence professionals associate high production values with pandering to the decisionmaker.  These professionals see adding multimedia, graphics, color and other design features to an intelligence product as unnecessary “chrome” or “bling”.  Many of them, often from earlier generations, think that intelligence products “should stand on their own” and that the ease with which such “tricks” are used in modern production is not an excuse to deviate from time-honored traditions in production.

The guiding principle here, of course, is not what the intelligence professional thinks but what the decisionmaker the intelligence professional is supporting thinks.  Some decisionmakers will, of course, prefer their intelligence products in a simple text-based format.  Others, including many business professionals, will want less text and more supporting data, including charts and graphs.  Some (and the demand for this may well increase in the future) will want their reports in a video format for use on their personal multimedia device. 

Intelligence professionals in general, then, will need to have a wider variety of production skills in the future and, while production concerns do not take precedence until closer to the end of the project, the need to think about them at some level permeates the entire project.

Next:  The Whole Picture

Monday, June 6, 2011

Part 10 -- The New Intelligence Process (Let's Kill The Intelligence Cycle)

 
All of the examples examined in the previous sections are really just hypotheses, or guesses, about how the intelligence process works (or should work).  All are based on anecdotal descriptions of the intelligence process as currently conducted solely within the US national security community.  

Few of the models attempted to broaden their applicability to either the business or law enforcement sectors.  Very few of these models are based on any sort of systematic, empirically based research, so even if they more or less accurately describe how intelligence is done today, it remains unclear if these models are the best that intelligence professionals can do.

Other fields routinely modify and improve their processes in order to remain more competitive or productive.  The traditional model of the intelligence process, the intelligence cycle, has, however, largely remained the same since the 1940s despite the withering criticisms leveled against it and, in a few cases, attempts to completely overthrow it.

While some might see the cycle's staying power as a sign of its strength, I prefer to see its lack of value to decisionmakers, its failure to shed much (if any) light on how intelligence is actually done and the various intelligence communities' inability to even consistently define the cycle as hallmarks of what is little more than a very poor answer to the important -- and open -- theoretical question:  "What is the intelligence process?"

It is to resolving this question that I will devote the remaining posts in this series.

Next:  The First Picture

Wednesday, June 1, 2011

Part 8 -- Tweaking The Intelligence Cycle (Let's Kill The Intelligence Cycle)

A number of scholars and practitioners have attempted, over the years, to rectify the problems with the intelligence cycle.  While, from a theoretical standpoint, virtually all of these attempts have resulted in a more nuanced understanding of the intelligence process, none has caught on among intelligence professionals and none has been able to dethrone the intelligence cycle as the dominant image of how intelligence works.

These new schools of thought fall into two general patterns:  those that tweak the intelligence cycle in order to bring it closer to reality and those that seek to overhaul the entire image of how intelligence works (which I will discuss tomorrow).

Several authors have sought to modify the intelligence cycle in order to create a more realistic image of how intelligence “really” works.  While some restructuring of the intelligence cycle is done within virtually every intelligence schoolhouse, the four authors most commonly discussed are Lisa Krizan, Gregory Treverton, Mark Lowenthal and Rob Johnston.  These authors seek to build upon the existing model rather than discard it.

From:  Intelligence Essentials For Everyone
Krizan, in her 1999 monograph, Intelligence Essentials For Everyone, provides a slightly restructured view of the intelligence cycle (see image to the right) and, quoting Douglas Dearth, states:  “These labels, and the illustration ..., should not be interpreted to mean that intelligence is a uni-dimensional and unidirectional process. ‘In fact, the [process] is multidimensional, multi-directional, and - most importantly - interactive and iterative.’”

From:  Reshaping National Intelligence 
Treverton, in Reshaping National Intelligence In An Age Of Information, outlines a slightly more ambitious version of the cycle.  In this adaptation, Treverton seeks to more completely include the decisionmaker in the process.  You can see a version of Treverton's cycle to the right.

Lowenthal, in his classic, Intelligence:  From Secrets To Policy, acknowledges the flaws of the traditional intelligence cycle, which he calls “overly simple”.  His version, reproduced below, demonstrates “that at any stage in the process it is possible – and sometimes necessary – to go back to an earlier step.  Initial collection may prove unsatisfactory and may lead policymakers to change the requirements; processing and exploitation or analysis may reveal gaps, resulting in new collection requirements; consumers may change their needs and ask for more intelligence.  And, on occasion, intelligence officers may receive feedback.”  Lowenthal's revised model, more than any other, seems to me to capture the fact that the intelligence process takes place in a time-constrained environment.
From:  Intelligence:  From Secrets To Policy

Perhaps the most dramatic re-visioning of the intelligence cycle, however, comes from anthropologist Rob Johnston in his book, Analytic Culture In The US Intelligence Community.  Johnston spent a year studying the analytic culture of the CIA in the time frame immediately following the events of September 11, 2001.  

His unique viewpoint resulted in an equally unique rendition of the traditional intelligence cycle, this time from a systems perspective.  This complicated vision (reproduced below) includes “stocks”, or accumulations of information; “flows”, or certain types of activity; “converters”, which change inputs to outputs; and “connectors”, which tie all of the other parts together.

While, according to Johnston, “the premise that underlies systems analysis as a basis for understanding phenomena is that the whole is greater than the sum of its parts”, the subsequent model does not seek to replace the intelligence cycle but only to describe it more accurately:  “The elements of the Intelligence Cycle are identified in terms of their relationship with each other, the flow of the process and the phenomena that influence the elements and the flow.”

From:  Analytic Culture In The US Intelligence Community
While each of these models recognizes and attempts to rectify one or more of the flaws inherent in the traditional intelligence cycle and each of the modified versions is a decided improvement on the original cycle, none of these models seeks to discard the fundamental vision of the intelligence process described by the cycle.  

Next:   Departures From The Intelligence Cycle

Tuesday, May 31, 2011

Part 7 -- Critiques Of The Cycle: Cycles, Cycles And More Damn Cycles (Let's Kill The Intelligence Cycle)


While intelligence professionals often tout the intelligence cycle as something unique, to experienced business, law enforcement and national security decisionmakers, the intelligence cycle looks like many other linear decisionmaking processes with which they are already familiar.

Moreover, this familiarity has bred a certain amount of contempt as all of these disciplines are wrestling with re-defining their own processes in light of 21st century technology and systems thinking.  In short, the intelligence cycle not only fails in its attempt to explain the intelligence process but also comes across as an archaic sales pitch to a decisionmaker who is typically all too familiar with the flaws of linear process models.
Image source:  http://home.ubalt.edu/ntsbarsh/opre640/opre640.htm

Every military officer, policeman or business student who has attended even relatively low-level training in their profession is familiar with a model of decisionmaking that typically includes defining the question, collecting information relevant to the question, analyzing alternatives or courses of action, making a recommendation and then communicating or executing the recommendation (see image to the right).

This model, of course, bears a striking resemblance to the “intelligence cycle”; a resemblance that may fool the uninformed but is unlikely to pass unnoticed by the decisionmakers that intelligence supports.  These decisionmakers, who are never blank slates and rarely outright fools, are also unlikely to accept such a simplistic explanation of the process unless accepting such an explanation serves their own purposes or they simply don't care.

This, in turn, results in two negative consequences for intelligence.  First, decisionmakers will, at best, see intelligence as “nothing special”.  The process used appears, from their perspective, to be just a glorified decisionmaking process.  

At worst, however, decisionmakers will see the “intelligence cycle” as mere advertising puffery, a fancy way of talking about something which could, in their eyes, be defined much more simply using a linear process model (albeit an out-of-date one) with which they are already familiar.  Many private sector intelligence organizations have problems convincing the decisionmakers they support of the importance of intelligence.  Overemphasis on the value of the intelligence cycle, particularly when facing an educated decisionmaker, might well be part of the problem.

More insidiously, however, such a perception clouds the true role of intelligence in the decisionmaking process.  Decisionmakers, trained in and used to working with the decisionmaking process, will look for intelligence professionals to provide the same kinds of outputs – recommendations – as their process does.  

Intelligence, however, is focused externally, on issues relevant to the success or failure of the organization but fundamentally outside that organization's control.  Intelligence does best when it focuses on estimating the capabilities and limitations of those external forces and does poorly when it attempts to make recommendations to operators, since the intelligence professional is generally less well informed than others in the organization about the capabilities and limitations of the parent entity.

In short, because the intelligence cycle creates the impression in the minds of many decisionmakers (particularly those unfamiliar with intelligence but well-educated in their own operational arts) that intelligence is “just like what I do”, only with a different name, the value of intelligence is more difficult to explain than it needs to be.

Furthermore, once the decisionmakers think they understand the nature of intelligence, the way that nature has been communicated to them predisposes them to ask questions of intelligence that the intelligence professional is poorly positioned to answer.

Next:  Tweaking The Intelligence Cycle

Friday, July 2, 2010

Part 8 -- What Else Did You Learn? (Teaching Strategic Intel Through Games)

Idro wargame mapboard detail (image via Wikipedia)
Part 7 -- What Did The Students Think About It?

Many students have provided excellent feedback for improving the course.  The single most requested ‘tweak’ was, surprisingly, to include more games like Defiant Russia.  The old-school boardgame with its dice, hex maps and counters seemed to encourage a thoughtful, collaborative (at least among the players on each team) learning experience. 

In addition, the idea of replaying history was clearly appealing to many of the students.  Only one of the students had played anything similar prior to this class, and it was unclear if any would voluntarily play something like Defiant Russia again, but the overwhelmingly positive response to the game in the feedback suggests that there is still a place for these types of games in educational environments.

The main problem with using a game like Defiant Russia in an educational setting is the amount of time it takes to play.  For two experienced players, the game can move very quickly.  However, when playing it as I did, with two teams of inexperienced players, the first turn can last the better part of an hour.  The popularity of this experience demands, however, that I take an additional look at how I might be able to carve out time for another game like it.

Several other comments surfaced routinely.  First, there was a fairly common request to cut back on the number of games or to cut back on the games as the end of the course approached.  This request seemed to be driven by two separate concerns.  The first was that the lessons learned lost some of their potency, as students had to rapidly drop one game only to pick up and analyze another.  The second was that, for people who did not routinely play games, learning the rules to new games, even casual games, every couple of days, on top of the other work the course required, was difficult.

On the one hand, “more time on fewer subjects” is classic pedagogical advice; on the other, “practice makes perfect” is also sound.  One of my goals was to encourage the students to be not only better but also quicker thinkers:  to identify the patterns in complex, confusing issues rapidly and flexibly.  The incessant drumbeat of games over the course of the term seemed to accomplish this.

Another goal, however, was to lock in knowledge important to the practice of strategic intelligence.  This kind of learning requires reflection and reflection takes time.  Clearly, the right answer lies in properly balancing these competing goals.  How to do that in the context of a specific syllabus is the real question and one that I will spend the next several months pondering.

Another suggestion that seemed to make sense was to do a better job of explaining how games-based learning worked.  I provided students with some explanation and resources early on in the course but decided not to spend much time discussing this unique pedagogical approach.  Given the feedback and the results of this study, it probably makes some sense to discuss this approach more fully with the students.  In fact, it is my intent to give them a copy of the paper on which these posts are based when classes begin in the fall.

Finally, there is one recommendation that I am considering with some hesitation:  Make the connections between the games and the topics covered in the course “more clear”.  My instincts say that this would be a mistake; that the purpose of the course is to challenge students deeply, to make them travel unlit paths in darkened forests, to attempt to climb insurmountable mountains.  I would rather have them try and fail because, the way I have constructed the course, there is no penalty for failing, only for not trying.

Clearly, here, too, the question is one of balance.  At some point, the connection between the game and the topic can be so abstruse as to be impossible to find except through dumb luck.  On the other hand, connections that are too simple do little to foster the sense of exploration and discovery I think is critical to this approach.

Beyond these more or less common themes, I have received a wide variety of other suggestions (including some game recommendations) that I intend to examine in detail before the next time I teach the class.  Regardless of what changes, additions or deletions I make, the conclusion seems inescapable:  Games-based learning, while not a perfect pedagogical approach, has merit worth exploring when teaching strategic intelligence.

Next:
Wrapping it up

Thursday, July 1, 2010

Part 7 -- What Did The Students Think About It? (Teaching Strategic Intel Through Games)



The SIRs actually measure a number of variables, and identifying those most closely associated with the underlying pedagogy of a course is difficult.  Instead, I chose to look at just one of the SIR-generated ratings, the Overall Evaluation of the course.  This is clearly designed to be an overall indicator of effectiveness.  A large change here (in either the positive or the negative direction) would seem to be a clear indication of success or failure from the student's perspective.

Furthermore, my assumption at the beginning of the course was that there would be a large change in one direction or the other.  I assumed that students would either love this approach or hate it and that this would be reflected in the SIR results.  The chart below, which contains the weighted average of the Overall Evaluation score (1-5 with 5 being best) for all classes taught in a particular year, indicates that I was wrong:

Clearly, while students did not love it, they did not hate it either.  The drop in score from recent years could be attributed to a reduction in satisfaction with the class or it could simply be attributed to the fact that the course changed from a fairly well-oiled series of lectures and exercises to something that had the inevitable squeaks and bumps of a new approach.  Feedback from the student surveys given after the course was over, while extremely helpful in providing suggestions for improving the class, gave no real insight into the causes of this modest but obvious drop in student satisfaction.
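The “weighted average” above could be computed in several ways; here is a minimal sketch, assuming for illustration that each class section's Overall Evaluation score is weighted by its enrollment before averaging across the year.  The scores and enrollments below are hypothetical, not the actual SIR data behind the chart.

```python
# Minimal sketch of a weighted-average Overall Evaluation score for one year.
# The Overall Evaluation is on a 1-5 scale (5 = best); the section scores and
# enrollments below are hypothetical, and weighting by enrollment is an
# illustrative assumption.

def weighted_average(sections):
    """sections: list of (overall_evaluation_score, number_of_students) tuples."""
    total_students = sum(n for _, n in sections)
    return sum(score * n for score, n in sections) / total_students

# Hypothetical example: three sections taught in the same academic year.
year_sections = [(4.2, 25), (3.8, 18), (4.0, 22)]
print(round(weighted_average(year_sections), 2))  # -> 4.02
```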

Comparing this chart with the previous one concerning the quality of the final product yields an even more interesting picture:
This chart seems to be saying that the more a student thinks they are getting out of a class (as represented in their Overall Evaluation of the course), the better their final strategic intelligence project is likely to be.  This holds true, it seems, as long as strategic intelligence is taught through more or less traditional methods of lecture, discussion and classroom exercises.  Once the underlying structure of the course is centered on games, however, the students are less satisfied but actually perform better where it matters most – on real-life projects for real-world decisionmakers.

Taken at face value (and ignoring, for the moment, the possibility that this is all a statistical anomaly), a possible explanation is that the students don’t realize what they are getting “for free” from the games-based approach.  Other researchers have noted that information that had to be actively taught, assessed, re-taught and re-assessed in other teaching methods is passively (and painlessly) acquired in a games-based environment. 

I noted this effect myself in my thesis research into modeling and simulating transitions from authoritarian rule.  My goal, in that study, was to develop a predictive model; not to teach students about the target country.  One of my ancillary results, however, was that students routinely claimed that they learned more about the target country in three hours of playing the game than in a semester’s worth of study. 

This “knowledge for free” aspect of the games-based model was nowhere more obvious than in the fairly detailed understanding of the geography of the western part of the Soviet Union acquired by the students in all three classes while playing the boardgame, Defiant Russia.  While this information was available in the form of the game map, learning the geography was not explicitly part of the instructions.  Students rapidly realized, however, that they had to understand the terrain in order to maximize their results within the game.  Furthermore, an understanding of that geography was critical to the formulation of strategic options.

This raises a broader question regarding games-based learning:  If students don't know they are learning, how can they evaluate the learning process?  While I have not had time to dig deeply into the literature regarding implicit learning, I intend to.  Giving students a tangible sense of what they are learning in a games-based environment may be one of the biggest challenges to overcome with the approach, at least in higher education.

Next: 
What else did you learn?

Wednesday, June 30, 2010

Part 6 -- So, How Did It All Work Out? (Teaching Strategic Intel Through Games)



While mostly anecdotal, the available evidence suggests that students significantly increased their ability to see patterns and connections buried deeply in unstructured data sets, my first goal.  This was particularly obvious in the graduate class where I required students to jot down their conclusions prior to class. 

An example of the growth I witnessed from week one to week ten would likely be helpful at this point.  The same student wrote both responses below, and I consider the pair to be representative of the whole:
Week 2 Response:  “This game (The Space Game: Missions) was predominantly about budgets and space allocation….  Strategy and forethought go into where you place lasers, missile launchers and solar stations, so that you don’t run out of minerals to power those machines and so repair stations are available for those that are rundown.  It’s clear that resource and space allocation are key elements for a player to win this game, just as it is for the Intelligence Community and analysts to win a war.”
Week 8 Response:  “I think if Chess dropped acid it’d become the Thinking Machine. When the computer player was contemplating its next move colorful lines and curves covered the board… To me, Chess was always a one-on-one game; a conflict, if you will, between black versus white… Samuel Huntington states up front that he believes that conflict will no longer be about Princes and Emperors expanding empires or influencing ideologies, but rather about conflicts among different cultures:  “The great divisions among humankind and the dominating source of conflict will be cultural.” Civilizations and cultures are not black and white, however; they’re not defined by nation-state borders. There are colors and nuances in culture requiring a change in mindset and in strategy to approach these new problems.”
While the degree of change here is difficult to assess quantitatively, literature from the critical thinking community helps to gauge it.
Note:  There is a widespread belief among intelligence professionals that teaching critical thinking methods will improve intelligence analysis (See David Moore’s comprehensive examination of this thesis in his book Critical Thinking And Intelligence Analysis).   A minority of authors are less willing to jump on this particular bandwagon (See Behavioral and Psychosocial Considerations in Intelligence Analysis: A Preliminary Review of Literature on Critical Thinking Skills by the Air Force Research Laboratory, Human Effectiveness Division) citing a lack of empirical evidence pointing to a correlation between critical thinking skills and improved analysis.
In particular, the Guide to Rating Critical Thinking developed by Washington State University identifies seven broad categories for assessing whether and to what degree critical thinking is taking place:

- Identification of the question or problem
- Willingness to articulate a personal position or argument
- Willingness to consider other positions or perspectives
- Identification and assessment of key assumptions
- Identification and assessment of supporting data
- Consideration of the influence of context on the problem
- Identification and assessment of conclusions, implications and consequences

While such a list may not be perfect, there is certainly nothing on it that is inconsistent with good intelligence practice.  Likewise, when reading the representative responses above with these criteria in mind, the increase in nuance, the willingness to challenge an acknowledged authority and the nimble leaps from one concept to another all become even more obvious.  The growth evident in the second example is even more impressive when you consider that the Huntington reading was not required.  The majority of the students in the class showed this kind of growth over the course of the term, both in the quality of the classroom discussions and in their written reports.

In addition to seeing an improvement in students’ ability to detect deep patterns in complex and disparate data sets, I also wanted that increased ability to translate into better quality intelligence products for the decisionmakers who were sponsoring projects in the class. 

Here the task was somewhat easier.  I have solicited and received good feedback from each of the decisionmakers involved in the 78 strategic intelligence projects my students have worked on over the last 7 years.  This feedback, leavened with a sense of the cognitive complexity of the requirement, yields a rough but useful assessment of how “good” each final project turned out to be.

Mapping this overall assessment onto a 5-point scale (where a 3 indicates average “A” work, a 2 and a 1 indicate below and well below "A" work respectively, a 4 indicates "A+" or young professional work and a 5 indicates professional quality work) permits a comparison of the average quality of the work across various years.
Note:  “A” is average for the Mercyhurst Intelligence Studies seniors and second year graduate students permitted to take Strategic Intelligence.  Because graduates must be employable in this highly competitive field, the program requires students to maintain a cumulative 3.0 GPA simply to stay in the program and encourages them to maintain a 3.5 or better.  In addition, the major is widely considered to be “challenging” and those who do not see themselves in a career in intelligence analysis, upon reflection, often change majors.  As a result, the GPAs of the seniors and second year graduate students who remain with the program are often quite high.  The graduating class of 2010, for example, averaged a 3.66 GPA.
The chart above summarizes the results for each year.  While the subjectivity inherent in some of the evaluations possibly influenced some of the individual scores, the size of the data pool suggests that some of these variations will be eliminated or at least smoothed out through averaging.
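To make the scoring concrete, here is a minimal sketch of the mapping and per-year averaging described above.  The project entries below are hypothetical placeholders, not the actual data behind the chart.

```python
# Minimal sketch of the assessment scheme described above: each project's
# overall assessment is mapped onto the 1-5 scale, then scores are averaged
# by year.  All project entries below are hypothetical.

SCALE = {
    "well below A work": 1,
    "below A work": 2,
    "average A work": 3,
    "A+ / young professional work": 4,
    "professional quality work": 5,
}

# (year, assessment) pairs -- hypothetical examples only
projects = [
    (2008, "average A work"), (2008, "below A work"),
    (2009, "average A work"), (2009, "A+ / young professional work"),
    (2010, "professional quality work"), (2010, "A+ / young professional work"),
]

by_year = {}
for year, assessment in projects:
    by_year.setdefault(year, []).append(SCALE[assessment])

for year in sorted(by_year):
    scores = by_year[year]
    print(year, round(sum(scores) / len(scores), 2))
```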

There are, to be sure, a number of possible reasons to explain the surge in quality evidenced by the most recent year group.  The students could be naturally better analysts, the quality of instruction leading up to the strategic course could have dramatically improved, the projects could have been simpler or the results could be a statistical artifact.

None of these reasons, in my mind, however, holds true.  While additional statistical analysis has yet to be completed, the hypothesis that games-based learning improves the quality of an intelligence product appears to have some validity and is, at least, worthy of further exploration.

My third goal for a games-based approach was to better lock in those ideas that would likely be relevant to future strategic intelligence projects attempted by the students, most likely after graduation.  To get some sense of whether the games-based approach was successful in this regard, I sent each of the students in the three classes a letter requesting their general input regarding the class along with any suggestions for change or improvement.  I sent these letters approximately five months after the undergraduate classes had finished and approximately 2.5 months after the end of the graduate class.

Seventeen of the 75 students (23%) who took one of the three courses responded to the email and a number of students stopped by to speak to me in person.  In the end, over 40% of those who took the class responded to my request for feedback in one way or another.  This evidence, while still anecdotal, was consistent – games helped the students remember the concepts better.

Comments such as, “Looking back, I can remember a lot of the concepts simply because the games remind me of them” or “I am of the opinion that the only reason that the [lessons] stood out was because they were different from any other class most students have taken” were often mixed in with suggestions on how to improve the course.  The verbal feedback was even more encouraging, with reports of discussions and even arguments centered on the games and their “meaning” weeks and months after the course was completed.

The evidentiary record, in summary, is clearly incomplete but encouraging.  Games-based learning appears to have increased intelligence students’ capacity for sensemaking, to have improved the results of their intelligence analysis and to have allowed the lessons learned to persist, even encouraging new exploration of strategic topics months after the course had ended.

Next:  
What did the students think about it?