Friday, December 7, 2012

How Many Entry-level Analysts Will The US IC Need In 2013? (Survey)

Good question, right? 
 
If you have direct or indirect knowledge that might help answer the question in the title, please take two minutes to complete this survey. 
 
What do I mean by direct and indirect knowledge?
Direct knowledge means that you know personally or have good information concerning the hiring plans of your agency or organization (or at least your section or division).  You might work in HR or be a manager with hiring responsibilities. 
Indirect knowledge is relevant information that does not come from your direct responsibilities.  You might have spoken with an HR manager or have been involved in meetings where this issue was discussed. 
We are NOT looking for opinion based on purely circumstantial information.  If you are not involved in the hiring process either directly or indirectly, please DO NOT take this survey.

Why are we interested?

Every year, other disciplines announce their hiring projections:  "This year's hot jobs are for engineers and chimney sweeps."  That sort of thing.  Entry-level intelligence analysts searching for a job, on the other hand, receive no such guidance.

We hope to change that.  Working with one of our hot-shot grad students, Greg Marchwinski, we put together this survey to get a better feel for the job market for entry-level analysts in the year ahead.

Once we get enough survey data, Greg will compile it and combine it with the macro-level, mostly qualitative data that we already have and put together a "jobs report" for the year ahead.  I will publish it here once we are done.

We understand that there are some legitimate security concerns here so we have tried to frame the questions such that they are focused on broad developments and general trends.  We are not interested in the kind of deep details that might compromise security.

Finally, we intend to follow this study up with similar surveys of the law enforcement and business job markets for entry-level intelligence analysts as well.

By the way, this is the same question we asked last year and here is the answer we got...

Thanks for your participation!

Monday, November 19, 2012

Want To Do Something Different This Thanksgiving?

Some of the action on DAGGRE.org
Eating, shopping, driving, yelling, more eating...I get it.  What is Thanksgiving without its traditional activities?

But, if you were looking to do something a bit different, you might want to check out the recent updates to the DAGGRE prediction market.

(I know, I know, I am such a shill for DAGGRE...)

Yes, in the interests of full disclosure, I am a member of this IARPA-funded research project and, yes, I probably love it way too much, but the most recent changes to our site's software are, frankly...cool.

Real cool.  Like never-before-seen cool.

Why?  Because now you can do real-world, real-time linchpin and driver analysis in a prediction market.

I am pretty excited about this but it might not be obvious why, so let me break it down.  Prediction markets typically work by asking resolvable questions like "Will Despot X be out of power by 31 DEC 2012?"  Then various people interested in that question will go into the market and make "edits" that assess the probability of the answer to this question being yes or no.  For example, if I thought Despot X would almost certainly be out of power by the end of the year, I would go in and change the probability to, say, 90%.  Someone else might come in after me, think, "What an idiot!" and change it back to 25%.  When 31 DEC rolls around, one of us will be closer to right and the other closer to wrong.  The one closest to right scores the most points on the market.
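To make the scoring concrete, here is a minimal sketch in Python.  It assumes a simple quadratic (Brier-style) scoring rule and invented trader names; DAGGRE's actual market mechanics are more sophisticated than this.

```python
# Toy prediction-market scoring (hypothetical Brier-style rule, not
# DAGGRE's actual mechanism).  Each trader "edits" the probability that
# Despot X is out of power by 31 DEC 2012; when the question resolves,
# the trader closest to the truth scores the most points.

def points(forecast: float, outcome: int) -> float:
    """Quadratic score: 1.0 for a perfect forecast, lower as error grows."""
    return 1.0 - (outcome - forecast) ** 2

edits = {"me": 0.90, "someone_else": 0.25}  # P(despot out of power)
outcome = 1                                 # question resolves YES on 31 DEC

for trader, p in edits.items():
    print(f"{trader}: forecast={p:.2f}, points={points(p, outcome):.2f}")
# me: forecast=0.90, points=0.99
# someone_else: forecast=0.25, points=0.44
```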

This system works pretty well with straightforward questions that obviously lean strongly in one direction or another (EX:  "Will a shooting war break out between the US and Canada before 31 DEC 2012?"  Uh...no.).  It works significantly less well with more nuanced questions that really deserve to be teased apart.

This is essentially what the CIA's Deputy Director for Intelligence, Douglas MacEachin, was trying to do back in the early 1990s when he insisted on having his analysts identify linchpins and drivers.  To quote an early version of the CIA Tradecraft Manual, drivers are "key variables...that analysts judge most likely to determine the outcome of a complex situation" while linchpins are "the premises that hold the argument together and warrant the validity of the conclusion." 

This kind of analysis is pretty sophisticated stuff and it really is what makes the difference between a bald estimate ("X is 80% likely to happen") and the kind of estimate most decisionmakers expect ("Despite A and due primarily to B and C, X is 80% likely to happen.").

Before, you simply couldn't do this kind of stuff in a prediction market.  Now, with the most recent upgrade to the DAGGRE market, you can.  Let's go back to our Despot X example.  We know that rebel forces are trying to take a key city in this despot's country.  We also strongly believe that if the rebels take the city, the despot's days are numbered.  Before, our estimate of the despot's longevity was a mish-mash of many factors, one of which was the possible(?)/probable(?) fall of the key city, and our estimate was probably much more wishy-washy than we wanted it to be.

The DAGGRE market now lets you make estimates both on how likely the city is to fall and on how likely the despot is to stay in power, and it also allows you to make the answer to one of these questions an assumption for the other (EX:  "Assuming key city falls, Despot X is 90% likely to be out of power by 31 DEC 2012.").
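To see, arithmetically, why being able to split the question apart helps, here is a back-of-the-envelope sketch of how a conditional estimate combines with the probability of its assumption (the law of total probability).  All of the numbers are invented for illustration.

```python
# Combining conditional estimates into an overall forecast
# (all numbers invented for illustration).
p_city_falls = 0.60      # estimate: the rebels take the key city
p_out_if_falls = 0.90    # "Assuming key city falls, Despot X is 90% likely out"
p_out_if_holds = 0.20    # the despot is much safer if the city holds

p_out = p_city_falls * p_out_if_falls + (1 - p_city_falls) * p_out_if_holds
print(f"Overall P(Despot X out by 31 DEC 2012): {p_out:.2f}")  # 0.62
```

Rather than guessing at the wishy-washy 62% directly, each of the three component estimates can now be debated and refined on its own.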

In my mind this is a huge step forward in making prediction markets more useful to real-world analysts working on real-world questions.  Definitely worth taking a few minutes to check out over Thanksgiving!

Sunday, November 11, 2012

An Interesting Perspective On What It Means To Be A Vet

If you have a few minutes this Veterans' Day, this video is worth your time...


Monday, November 5, 2012

What Can Intelligence Expect From Prediction Markets?

Opening screen of the DAGGRE.org prediction market
Prediction markets have long been touted as tools with a wide variety of potential uses for intelligence professionals.  Far more accurate in many cases than expert judgment alone, these markets tend to incentivize good thinking and punish poor thinking in ways that, over time, produce quantifiably better results on topics like elections and sales forecasts.  Strong advocates of this method have even suggested that these markets might be able to replace traditional analysts entirely.

Naysayers have argued (and will likely continue to argue...) that the kinds of questions asked of intelligence professionals do not lend themselves to pat, numerical estimates.  Furthermore, they will say, even in the handful of cases where such answers would be of potential use, combining the estimates of people who know little to nothing about the details of a particular, narrow problem -- the kind that is usually of intelligence interest -- will only serve to create an estimate that is also of little to no use.  Finally, while these estimates are useless in forecasting the future (or so the naysayers will say), they will serve to anchor both intelligence professionals and policymakers alike, reducing their ability to see alternatives to the predicted outcome.

The purpose of this series of posts, then, is to explore both sides of this argument, to look at prediction markets from the point of view of the intelligence profession and in light of ongoing research, and to come to some preliminary conclusions about the future of prediction markets as a tool for the working intelligence professional and the decisionmakers they support.

Informing this series will be the results of research done by the DAGGRE prediction market and the scientists involved in that effort.  DAGGRE is run by Dr. Charles Twardy of the C4I Center at George Mason University and is working in cooperation with a number of other universities and organizations (including Mercyhurst University and yours truly) to better understand prediction markets and their potential uses to the intelligence community.  

The DAGGRE project is one of five such projects funded by the Intelligence Advanced Research Projects Activity (IARPA) under their Aggregative Contingent Estimation (ACE) program.  Now in its second year, ACE has already produced a number of interesting results and promises to produce many more.  

In short, whatever your initial reaction is to the idea of prediction markets in intelligence, this series is designed to give the working intelligence professional inside access to some of the most interesting and intriguing results from research currently being done on prediction markets and intelligence questions.  My goal is to turn these results into “plain English” so that you can have an informed opinion about these unique tools.

Next Week:  What Is A Prediction Market?

Wednesday, October 24, 2012

The New HUMINT?

A few months ago, I wrote an article on the Top 5 Things Only Spies Used To Do (But Everyone Does Now).  In that article I stated that one of those things (the #2 thing, in fact) was to "run an agent network."

I equated our now everyday activity of finding and following various people on LinkedIn or Twitter to the more traditional case officer activities of spotting, vetting, recruiting and tasking agents.

While I meant that article to be a bit lighthearted, over the last several months I have been exploring this idea with some seriousness in a class I am teaching with my colleague, Cathy Pedler, and a group of very bright grad students.



The picture above gives you an inkling of the progress we have made.

In this class (called Collaborative Intelligence - "How to work in a group while learning how groups work"), we have focused our energies on critical and strategic minerals.  I have already written about this course (if you want more details go here), but suffice it to say that, recently, we decided to use our new-found skills in social network analysis to see if we could solve a traditional HUMINT problem:  "Who should we recruit next?"

Every case officer knows that their agents' value is measured not only in terms of what they know but also in terms of who they know.  Low-level agents with an extensive network of contacts within a targeted area of interest are obviously valuable, perhaps even more valuable than the recluse with deep subject matter expertise.

Complicating the case officer's task, however, is the jack-of-all-trades nature of the traditional HUMINT collector.   Today, the collector needs to tap into his or her agent network to get economic information; tomorrow, political insights; the next day the need is for information to support some military or technological analysis.

Only an expert case officer with deep contacts can hope to be able to respond to the wide variety of requests for information.  In today's fast moving, crisis-of-the-day type world, the question becomes "Where can I find good sources of information ... on this particular topic ... quickly?"

Twitter to the rescue!

You see, the image I referred to earlier began as the 11 lists of Twitter users the 11 students in my class were currently following as they studied critical and strategic minerals.  The students had found these Twitter users the old-fashioned way -- they bumped into them.  That is, they found them on blogs or in news articles that talked about strategic mineral issues and they followed them on Twitter in order to stay current on their postings.  Since each of the students has a slightly different portfolio (the students are broken into three teams -- national security, business and law enforcement -- and then, within those teams, each student has an area of specific interest), their lists have some common sources but many different ones as well.

The natural next question is, "Who are my sources of information following?"  Using NodeXL to collect the data and ORA to merge, manage and visualize it, the students rapidly discovered who their "agents" were following.  Furthermore, we were able to discover new people to follow -- Twitter users that many people on our initial lists were following (implying that they were potentially very good sources of information) but that the students had not yet run across in their research.
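For the curious, here is a minimal sketch of the underlying idea in Python, using the networkx library in place of NodeXL and ORA (the tools we actually used).  All of the handles and edges are invented.

```python
# Find accounts followed by many of our existing sources but not yet on
# our own lists -- candidate new "agents".  Handles are hypothetical.
from collections import Counter

import networkx as nx

g = nx.DiGraph()  # edge A -> B means "A follows B"
seeds = {"source_a", "source_b", "source_c"}  # accounts students already follow
g.add_edges_from([
    ("source_a", "mining_journal"),
    ("source_b", "mining_journal"),
    ("source_c", "mining_journal"),
    ("source_a", "rare_earth_blog"),
])

votes = Counter(followed for s in seeds for followed in g.successors(s))
new_leads = {acct: n for acct, n in votes.items() if acct not in seeds}
print(sorted(new_leads.items(), key=lambda kv: -kv[1]))
# [('mining_journal', 3), ('rare_earth_blog', 1)] -- vetted by all three sources
```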

The picture got even more interesting when we merged the results from each of the students.  Once we cleaned up the resulting picture (eliminated pendant nodes, color coded the remaining Twitter users by team, etc.), the students had identified over 50 new sources of information -- Twitter users we had never heard of who were posting information relevant to the issue of strategic minerals and who had been vetted by many of the Twitter users we had already identified.  You can see this more focused set of Twitter users in the image below.



While this sounds exciting (and it was, it was...), trying to listen to over 50 new voices seemed to be a bit overwhelming.  The question then became, "Of these 50, which are the 'best'?"

The traditional answer involves following all of them and then, over time, sorting out the wheat from the chaff.  Most people don't have that kind of time; we certainly didn't.  We needed another way to sort them and, thankfully, Twitter itself provides some potentially useful answers.

The first answer, of course, is to look at the number of "followers".  This is the number of Twitter accounts that claim to follow a particular person or organization.  In general, then, the sheer number of people who are following a particular person is a rough measurement of their influence and, by consequence, importance to a conversation on a particular topic.

Most twitterati don't put much credence in gross tallies of followers, though.  Anyone with a Twitter account knows that only a relatively small number of their followers are actively engaged with the medium.  Some studies have also indicated that a third or more of these followers are fake or, even worse, bought and paid for.  While this problem mostly afflicts the most widely followed accounts and is significantly less likely among the people tweeting about rare earths, for example, it is still a cause for concern.

Twitter again offers a solution to this problem but it takes a little work to get it.  The key is Twitter's List feature.  Twitter allows users to create lists of people; subsets, if you will, of the larger group of people a particular user might follow.  For example, I have a list of competitive intelligence librarians (there are actually quite a few on Twitter).  Lists are a way for people to follow hundreds or thousands of people but narrow and focus that chorus in a way that is most useful for them.  It allows the savvy Twitter user to filter signal from noise.

Twitter allows a user not only to look at their own lists but also to see how many lists other people have created that include them.  This is important because it takes time and effort to create and curate a list; it is almost certain that you have not been placed on one casually.  Being placed on a list is an indicator of credibility; being on lots of lists, even more so.  Like followers, though, the number of lists is still a pretty rough measure and does not give the best sense of the value of a particular Twitter user to his or her followers.  Thus, while the number of lists you are on is not a bad indicator, many people like to use the list-to-follower ratio to assess overall credibility.

In other words, if you had 1000 followers and every one of them had placed you on a list, you would have a list-to-follower ratio of 1.  If only 500 had placed you on a list, then your list-to-follower ratio would be .5.  In practice, list-to-follower ratios of .1 are rare.  Based on my experience, a list-to-follower ratio of .05 is very good and a ratio of .03 or lower is more typical.
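If you want to play with the numbers yourself, here is a minimal sketch of the ranking step in Python (handles and counts invented for illustration):

```python
# Rank candidate Twitter sources by list-to-follower ratio.
accounts = {
    "rare_earth_blog": {"followers": 1200, "lists": 60},
    "mining_journal":  {"followers": 8000, "lists": 240},
    "metal_trader":    {"followers": 500,  "lists": 10},
}

def ratio(stats):
    return stats["lists"] / stats["followers"]

for handle, stats in sorted(accounts.items(), key=lambda kv: -ratio(kv[1])):
    print(f"{handle}: {ratio(stats):.3f}")
# rare_earth_blog: 0.050  (very good by the rule of thumb above)
# mining_journal: 0.030   (typical)
# metal_trader: 0.020
```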

While I am certain that there are automatic ways to collect the data you need from Twitter, we simply crowdsourced the problem.  Dividing the list into 11 pieces, we were able to quickly and accurately collect and deconflict the various data we needed, including number of lists and number of followers.  In the end, we were able to rank-order the 50 top Twitter users talking about strategic minerals in a variety of useful ways.  In all, including the teaching, it took us only about 6 hours to get from start to Top 50 list (for the complete list and more details go here).

And here is where the analogy breaks down...

Up to this point, we were able to fairly confidently connect traditional HUMINT ideas and activities with what we were doing, much more quickly, using Twitter data.  The analogy wasn't perfect but it seemed good enough until we put the students -- the "case officers" -- into the network.  They stuck out like sore thumbs!

Case officers in traditional HUMINT networks need to work from the shadows, pulling the strings on their networks in ways that can't be seen or easily detected.  Trying to lurk on Twitter in this sense just doesn't work, however.  My students, who are following many people but are not followed by many, became very obvious as soon as they were added to the network.  The same technology that allowed us to rapidly and efficiently come up with a pretty good first cut at who to follow on Twitter with respect to strategic minerals allows those same people to spot the spammers and the autofollow bots and the lurkers and even the "case officers" pretty easily.

Back in my Army days we used to say, "If you can be seen you can be hit.  If you can be hit, you can be killed."  Social media appears to turn that dictum on its head: If you can't interact, you can be spotted.  If you can be spotted, you can be blocked.

It turns out that the only way to be hidden on Twitter is to be part of the conversation.

Friday, September 28, 2012

Top 5 Books Every Intel Professional Should Read (But Have Probably Never Heard Of)

There are tons of great reading lists for intelligence professionals.  The CIA has a list, the National Intelligence University has a list, the Marine Corps and other military institutions have lists; even intelligence professionals in the business community have lists.

I have noticed, however, that these lists often contain many, if not all, of the same books.  Everyone recommends Heuer, everyone recommends Sun Tzu, everyone recommends something of regional or topical interest and for good reason -- these are great books.

Over the last several years, though, I have identified a number of books that I think every intelligence professional ought to read ... but aren't yet on anyone's list.  Typically these are not books about intelligence, or, at least, were not intended primarily for the intelligence audience but still have deep meaning for intelligence professionals in all of the various sub-disciplines.

Without further ado (and in reverse order):

#5 The Lady Tasting Tea:  How Statistics Revolutionized Science In The Twentieth Century.  If you are like me, you probably did not much care for statistics in college.  That is probably because you did not have this book to read.  It is an absolutely fascinating book that tells the story of modern (frequentist) statistics.  Nothing I have read helps put the numbers in context -- what you can get from traditional stats and what you can't -- better.

#4 The Theory That Would Not Die:  How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines and Emerged Triumphant From Two Centuries of Controversy.  Just the title ought to catch the eye of most intel professionals.  Bayes, for those of you unfamiliar with the theory, is the other side of the statistical coin - a different way of doing and thinking about stats that is probably more useful for intelligence than traditional, frequentist, approaches.  This very readable book is a great introductory volume for those who know nothing about Bayes. 

#3 How To Measure Anything:  Finding The Value Of Intangibles In Business.  While this is pitched primarily at the business audience, it really isn't a business book.  It is really a book about how to think about problems creatively.  While there are many tangible strategies discussed in Hubbard's fine volume, it is the attitude that Hubbard has as he approaches seemingly intractable problems that I find most compelling here.  It is a nearly perfect approach for intel professionals confronted with wicked problems.

#2 Expert Political Judgment: How Good Is It?  How Can We Know?  What is the correlation between forecasting accuracy and years of experience?  .00.  Between forecasting accuracy and education?  .02.  Between forecasting accuracy and access to classified information?  .02.  In other words, almost none.  Philip Tetlock's 2005 bombshell of a book is still not as widely read as it needs to be by intel professionals.  Whether you ultimately agree or disagree with his findings, it is a must-read.

#1 Collaborative Intelligence:  Using Teams To Solve Hard Problems (Lessons From And For Intelligence Professionals).  Hackman, like Tetlock, has spent the better part of a decade researching his subject (in this case small teams of intel analysts).  His findings and recommendations about how to structure and manage intel professionals charged with solving difficult analytic problems in challenging environments where collaboration is required are essential reading.  In a world that constantly talks about collaboration, Hackman has done the hard work to lay out a roadmap about how it can and should be done most effectively.

How about you?  Do you have a favorite book that you think ought to be read by intel professionals but no one ever talks about? Leave it in the comments!

Tuesday, September 18, 2012

Strategic Minerals, Collaboration, Intelligence And...Oh, Yeah...Twitter!

http://strategicminerals.blogspot.com/
I am currently team teaching a class called Collaborative Intelligence with one of our adjuncts, Cathy Pedler.  I like to say that the purpose of the class is to explore "how to work in groups and how groups work."

Specifically, we are tapping into our own research and experience working with small groups of analysts (as well as the research of others) to teach students how to optimize group work processes with particular emphasis on group work in virtual or distributed environments.  In addition, we are also teaching them how to collect useful information and produce analysis using a variety of online and social media tools.  For this part of the class, we are emphasizing social network analysis as a core methodology.

In order to give the class some focus, Cathy and I decided to have the students take a hard look at strategic minerals (such as the "rare earth elements").  In order to share the results of our efforts, we also created a class blog, Strategic Minerals, where students could post both some of their collected information and some of their analysis for others to examine and comment upon.

On the blog you will find a couple of different kinds of exercises.  First, there are INTSUM-like entries that summarize recent news articles but add snippets of commentary or analysis (Note:  For those who have not tried it, blogging software is a nearly perfect way to replace traditional INTSUMs.  You get all of the benefit and none of the costs of creating them the old-fashioned way).

Second, there are classroom exercises, like our recent effort to build a down-and-dirty model of the non-chemical relationships between the various strategic minerals using social network analysis.  Third, and most recently, we have been posting some of our (very preliminary) analysis of the impact of trends in these minerals on national security, law enforcement and business interests in the US.

While none of our current analytic efforts are very sophisticated (Don't worry:  We will get better), how we are producing these results is likely to be as (or more) interesting to many of you as our analysis.  For example, the most recent assignments required the students to produce their analysis without any face-to-face interaction.  Instead, they had to use nothing but the suite of collaborative tools we had been discussing (and using) in class.  If you take a look at the "Methods and processes" section of these most recent reports, you can see how well this worked, what problems they had to overcome, and how they went about making the reports happen.

In the coming weeks we will be diving much deeper into social network analysis, talking a lot more about group dynamics, learning how to use Twitter, Pinterest, Facebook and other social media as collection tools, and producing increasingly complex reports involving larger and larger groups of analysts.

It promises to be an interesting term.  We hope to learn something about strategic minerals but more importantly, we hope to learn how to work in groups and how groups work. 

Follow along at Strategic Minerals!

Tuesday, September 4, 2012

Myth #3a: I Want To Make A Game That Teaches... (The 5 Myths Of Game-based Learning)

Part 1:  Introduction
Part 2:  Myth #1:  Game-based Learning Is New 
Part 3:  Myth #2:  Games Work Because They Capture Attention 

Part 4:  Myth #3:  I Need A Game That Teaches...

(Needless to say, it has been a strange August.  Thanks for the well wishes and notes of concern.  Hopefully, I am back at it...)

You have a PhD (or you have just been teaching a subject for quite some time) and you like games.  If no one has bothered to make a game that happens to teach anything remotely related to your subject matter, why not just make your own game?  

I have already made the point that good game design is hard (if you want to get an idea of how hard, check out Ian Schreiber's excellent 20 part series:  Game Design Concepts).  Teaching is also hard which makes designing a game that teaches a real...well, you get the point.

None of that is going to deter some of you, though.  If you are still bound and determined to design a game that teaches, whatever you do, don't try to make it a video game.  I have nothing against video games, but they have three strikes against them when it comes to teaching.

Strike One:  Even inexpensive video games cost a ton to make.  According to the Casual Games Association, the least expensive games to develop (such as the ones on Facebook) still cost between $50,000 and $400,000.  Large-scale games (such as Call of Duty or Mass Effect) can exceed $30 million.  No educator has that kind of money lying around for course development. 

Strike Two:  Video games have a very short shelf life.  The technology is advancing so quickly that very few video games hold up well over time.  Most start to look their age within a year or two and many feel old and clunky within 3-4 years.  To get a sense of this drop off, take a look at the steep discounting that typically takes place on video games within the first few years of life:

video game price lifecycle
http://blog.pricecharting.com/2012/03/lifecycle-of-video-games-price-30-years.html
Even if you can design a great game that teaches, if it is a video game, you will have to work pretty hard to keep the game looking fresh and up to date.

Strike Three (A):  A single video game will typically not have enough content to fill a course.  Two of my favorite games of the last year were Portal 2 and Kingdoms of Amalur.  I play both of these games through Steam (for those of you not familiar with Steam, it is like an iTunes for games.  Just like iTunes, it lets you download content directly to your PC and just like iTunes it keeps track of your statistics for you -- how long you play, what you play, how much you like a game, etc).  Steam says I logged 17 hours playing Portal 2 and 101 hours playing Kingdoms of Amalur.  

Both games (which I purchased on sale) provided excellent value for money in my opinion.  Portal 2 is one of the highest ranked games ever and was immensely fun.  Kingdoms of Amalur was designed to be a much lengthier game and was equally fun to play (though many reviewers did not think so...).  With an average university course requiring approximately 45 classroom hours and, depending on who you talk to, two to four hours of study outside class for every hour inside, it is arguable (in a rough order of magnitude sort of way) that only video games on the scale of Kingdoms of Amalur could hope to fully replace even a single university course.
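Here is the rough arithmetic behind that claim, using the 45 classroom hours and the 2:1 to 4:1 ratios from the paragraph above:

```python
# Rough order-of-magnitude check: course hours vs. game length.
classroom_hours = 45
for outside_ratio in (2, 4):  # 2:1 to 4:1 study-to-class hours
    total = classroom_hours * (1 + outside_ratio)
    print(f"{outside_ratio}:1 ratio -> {total} total course hours")
# 2:1 ratio -> 135 total course hours
# 4:1 ratio -> 225 total course hours
# Portal 2 (17 hours) falls far short of that budget; only a game on the
# scale of Kingdoms of Amalur (101 hours) even approaches it.
```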

Strike Three (B):  Even if the content is there, relatively few players actually finish video games.  Consider the two games I mentioned above.  Portal 2 is one of the highest rated games of all time.  Players and reviewers loved it.  Heck, I loved it.  I played every level and received every "Achievement" -- little electronic tokens of accomplishment that players collect throughout the game.  Steam, of course, keeps track of "Achievements".  Typically, there is at least one achievement associated with completing the main part of the game.  In the case of Portal 2, that achievement is called "Lunacy" (play the game and you will understand why).  I have received this achievement and truly enjoyed the process of getting there.

What is really interesting, though, is that Steam allows me to compare my achievements with the millions of other players who have also played the game.  Only about 56.4% of those who have played the game through Steam have received the Lunacy Achievement.  That is actually a pretty stunning statistic when you consider that this is one of the best rated games ever, players presumably volunteered/wanted to play the game, and they had to pay between $30 and $60 for the privilege.  It is even harder to imagine a successful class where only 56% of those who start it finish it.  Kingdoms of Amalur is in an even worse position.  Here only 18.1% of those who started the game played through to the final achievement, "Destiny Defiant". 

**********

OK, so it's not as bad as I make it look.  I will readily acknowledge that many of the arguments I make are not as strong as they appear to be.  Indie game designers are bringing extraordinary labors of love to the attention of the masses every day.  The overwhelming success of video games like Minecraft, Braid and Bastion is a testament to what creative people can do on a shoestring.  Likewise, even if one of today's games can't fill a course or routinely get played to completion, you, Kris Wheaton, are the one who said we would have to have multiple games for our courses anyway.  Besides, just because the games aren't here today does not mean that we shouldn't keep trying.

Exactly.  My point is not to deter game-based learning approaches -- I believe in them wholeheartedly!  My goal is to let teachers know that the process is not as easy and straightforward as it appears.  This is truly a "hard problem" and hard in two fields, game design and education.

I believe the problem will be solved but what are we to do in the meantime?  I recommend two strategies for teachers.  First (and this is the one I use in my Strategic Intelligence class), look for great games that already exist that can teach, reinforce or supplement one or more of your learning objectives.  Second, if you must design your own game, make it a board or card game.  These cost significantly less to design and produce and require much less equipment to play.  They are easier to fit into the constraints associated with a normal 1-2 hour class and, for intelligence professionals, at least, are simply easier to get into the building!

Next:  Myth #4:  The Learning Objectives Come First

Thursday, August 23, 2012

Reader Recommended: Intelligence And Art

Look carefully and you can see the lake in the distance.
(Having survived both a surgery scare and the move to our new digs on the hill this summer (see pic for the view out my window...), I am playing catchup.  Rather than continue to post nothing at all, I thought I would go back and re-publish some articles that received good -- or, at least, interesting -- feedback from readers.  Enjoy! - K.)

Wired magazine recently highlighted Kryptos, the James Sanborn sculpture sitting in the middle of the CIA (see the image on the right). While most intel professionals are very familiar with the story behind Kryptos, the article got me thinking again about intelligence and art.

I don't mean to suggest anything as highbrow as "intelligence art" and certainly am not talking about the largely meaningless discussions that tend to revolve around the question "Is intelligence an art or a science?"

I mean the resonance I feel with a certain piece of art when I look at it and contemplate the profession I study.

Probably the most direct example of this is the work of Mark Lombardi. Lombardi is famous for his hand-drawn link diagrams of real events and supposed connections (see the image on the left). It is hard to look at his pieces and not sense that, at least for a while, you have been walking the same path together.

He reportedly committed suicide due to the depression and anger he felt after one of his creations was destroyed when the sprinkler system unexpectedly went off in his apartment (a sentiment shared by any Mercyhurst students who have ever lost their link diagram to a bad flash drive or a computer crash...).

Similar in some ways to the work of Lombardi are the intricate and wholly abstract three-dimensional artworks of Janice Caswell. I love the way her work flows across walls and corners. It is almost as if she has developed an intricate analysis of all of the connections represented by some real world event and then removed the names of all of the actors and actions.

Her work (see an example on the right) goes directly to a point I try to teach my students, though. We tend to hyperfocus on the facts and assumptions and logic -- the hard data -- inherent in whatever we are attempting to analyze.

Whenever we try to visualize that information and analysis, however, we are also tapping into the nonlinear and largely inarticulate parts of our brains. Why did you put that in the center of your diagram? Why is his picture so large? Most of the connections seem to go around the sides of your nodes. Is that significant? Caswell validates, for me, the potential importance of listening to that subconscious voice, to try to hear what the quiet parts of my brain are trying to tell me.

(By the way, if you like Caswell's art as much as I do, you should check out the 57 other artists featured at VisualComplexity.com).

Another artist whose sculptural work echoes some of my own emotions when working on intelligence products is Jen Stark, with her paper-cut models. These are really quite amazing constructions using nothing more than colored paper, patience and enormous creativity. I think I find them appealing because of the intricate layering and the odd angles and turns her works take (see an example to the left).

The relationship of the last two artists, Paula Scher and Timothy Hutchings, to intel is easy to see -- it's geographic. Scher, who I first saw at The Serious Play Conference last year, does these magnificent renderings of geography that are both very close and very distant to what it is that I study. To get a sense of this tension, I suggest that you take a look at some of the closeups of her work (see the map of South America on the right).

Hutchings, on the other hand, does many different things with all sorts of materials (much of it abstract). The parts of his work that draw me closest, however, are the very familiar terrain tables (see an example below) he builds. It is hard to imagine, for most old Army guys like me, that the humble terrain table can be a work of art but Hutchings, in my mind, has done just that.

How about you? Is there an artist or work of art that you look at and think, "That feels like my job"? If so, post it to the comments...
 
Originally published May 8, 2009.

Monday, July 23, 2012

Myth #3: I Need A Game That Teaches... (The 5 Myths Of Game-based Learning)

Part 1:  Introduction
Part 2:  Myth #1:  Game-based Learning Is New 
Part 3:  Myth #2:  Games Work Because They Capture Attention

"I'd love to use game-based learning in my classes but I need a game that teaches..." organic chemistry, quantum physics, SIGINT, whatever.

I hear this quite often and it is a legitimate concern.  So many things to teach and so few game designers and publishers willing to take them on. Before I answer why this is, let's assume, for the sake of the argument, that all of the administrative and regulatory hassles involved in designing a game that teaches could be overcome (These are not trivial.  On the contrary, I suspect that these kinds of issues are a big part of the reason that game-based learning strategies have not been more widely tested and applied).  Let's also assume that there is a business model that makes these kinds of games profitable to produce and distribute (another non-trivial assumption).

What's left?  Just building a great game and, at the same time, making sure the course content is integrated into it.   If this sounds really hard, it is.

And it's just the beginning.

Because the reality is that you don't need a single great game that teaches these concepts, you really need multiple games that teach.  It turns out that game-based learning is plural.

If, to be successful, game-based learning needs to be, at least to some extent, voluntary (and particularly if you accept the premise, as I do, that the more voluntary the game play is, the more learning will occur), then it makes sense that you will need more than one game covering the same topic to fully engage a diverse classroom full of learners.

To explain this as simply as I can, I often ask people to imagine a typical elementary classroom.  If I only have one great game, let's call it "Barbie Math", I suspect that I may only engage approximately one-half of the students.  I probably need another great game, let's call it "GI Joe Math", to get the other half.  This grade school example is about as simple as I can make the problem but it is potentially much, much worse because of "fun". 

Most game designers I know hate the word "fun".  They hate this word because it is so indistinct and overused that it has virtually lost its meaning.  To say a game is fun (or not fun) is, in short, not very useful criticism.  There are lots of ways games can succeed or fail to produce fun generally and, more relevant to games that teach, specifically for individual students. 

The best place to start to get a sense of this problem from a game design perspective is Raph Koster's A Theory of Fun.  Koster lays out the problem pretty clearly and his book is widely used as a text and cited by professionals. 

To get an even more practical view of the problem, I like Pierre-Alexandre Garneau's 14 Forms Of Fun article for the online magazine, Gamasutra.  Here Garneau outlines 14 different ways that a game can be fun along with a number of examples of how each element worked in a game (see list to right).  This list has not been scientifically validated and I am sure that, if we got 10 game designers or gamers in a room, there would be lots of disagreements about it. 

I like it, however, because it makes a good case for thinking about fun, and, by extension, about what makes a great game more broadly.  If I think about what I like in a game, I can better see it in this list.  I don't just like the game Portal 2 because it is fun, I like it because it is a witty, immersive game that focuses on intellectual problem solving, advancement and completion (If you are not familiar with the Portal franchise, watch the video below.  It doesn't give much sense of the gameplay but it does give a good sense of the humor in the series).  Moreover, once I know why I like what I like, I can use this system, in much the same way the Music Genome Project worked for music, to help me think about other games I might like to play.

My preferences might not be my students' preferences, however.  It is easy to imagine students who prefer the exact opposite -- I may like cooperative games; they prefer competitive games.  I may like beautiful, discovery games like Myst but they like beautiful, thrill-of-danger games like Batman:  Arkham City.

We are still just scratching the surface.  What about genres of games?  Some will only like sports games while others will prefer action titles.  What about themes?  Some like high fantasy (like Lord of the Rings Online) while some prefer space-based games (like Eve Online).  And what about students who cannot define what they like ("I hate math and statistics and besides I have to spend this entire weekend preparing for my fantasy football draft...")?

These differences have focused on gaming style, but even more important are teaching concerns.  Different students are known to learn differently -- sometimes dramatically so.  Text-based games, for example, no matter how compelling, may be inaccessible to dyslexic students. 

I know it may sound like I am trying to paint a picture that game-based learning is a herculean, almost impossible task.  That is just because I am a lawyer and creating a "parade of horribles" is what we do.  Many of these distinctions probably matter far less than the discussion so far might lead you to believe.  Some might not matter at all.  Gamers tend to have broader rather than narrower tastes in games.  For every student who only plays sports games, for example, there are likely many more who play both sports games and high fantasy games.  Likewise there are a number of strategies for overcoming almost all learning differences and many could likely be applied to games.

I recognize and accept these objections.  My goal here is simply to paint a more nuanced picture of the challenges teachers and game designers face when they try to take games into the classroom.  There is a naivete in the statement "I need a game that teaches..." that nothing in my experience justifies.

I hope my observations will resonate with the comments made by James Shelton at the Games For Change conference last year (see the video in Part 1 of this series):  In order for game-based learning to go mainstream, it has to scale.  It can't just work with a self-selected population; it has to work across demographic lines and socioeconomic lines and learning differences lines.  This likely means that whatever course or subject you are teaching, you will need multiple games to fully engage your entire class.  A single game is unlikely to do it all.

Next:  Myth 3a:  I Want To Make A Game That Teaches...

Wednesday, July 18, 2012

Myth #2: Games Work Because They Capture Attention (The 5 Myths Of Game-based Learning)

Part 1:  Introduction
Part 2:  Myth #1:  Game-based Learning Is New

Eyes wide and focused.  Body oriented directly towards the screen.  An apparent inability to hear, even when being shouted at by mom.  If you have ever seen a person play a game that they really enjoy, you know that games have the ability to command complete attention.

Scientists tend to say things like this about the connection between attention and learning:

The assumption that attended stimuli are encoded more effectively into memory than less attended ones is straightforward and supported by substantial evidence (Sarter and Lustig).
or, more obtusely:
Neural models of perception and cognition have predicted that top-down attention is a key mechanism for solving the stability-plasticity dilemma, which concerns the fact that brains can rapidly learn enormous amounts of information throughout life without just as rapidly forgetting what they already know (Grossberg).
What all this means is what any teacher already knows -- attention is the key to learning.  Without a student's attention, it is impossible for them to learn.

Games, in particular, are noted not only for their ability to attract attention but to hold it, often for very long periods of time.  That the player's attention does not waver despite the difficulty of the challenge -- or the fact that players often fail -- makes this apparent superpower that games have over other media even more extraordinary.

Psychologists have a name for this phenomenon -- Flow.  Mihaly Csikszentmihalyi, a professor of psychology at Claremont Graduate University, first described it, defining flow as "being completely involved in an activity for its own sake. The ego falls away. Time flies...Your whole being is involved, and you're using your skills to the utmost."

Flow was first linked to games in 2000 and the concept has gained widespread popularity among game designers since then.  Jenova Chen, a game designer who actually made a game called Flow, describes the relationship between games and this ultimate psychological experience as something which has evolved over time:
"As the result of more than three decades of commercial competition, most of today’s video games deliberately include and leverage...Flow. They deliver instantaneous, accessible sensory feedback and offer clear goals the player accomplishes through the mastery of specific gameplay skills."
Flow derives from a balance of challenge and ability.  Too little challenge and the game (or other situation) is boring.  Too much challenge and the game (or other situation) creates anxiety.  The chart to the right (taken from a 2007 article by Chen) graphically shows this relationship and how game designers seek to use this knowledge to design a better game.

Certainly other activities besides gaming routinely create a Flow-like learning experience.  Bailey White, an author and first grade teacher, claims the story of the Titanic can create much the same effect in the minds of her students:
"When children get the idea that written words can tell them something horrible, then half the battle of teaching reading is won.  
And that's when I turn to the Titanic.  The children sit on the rug at my feet, and I tell them the story.  It's almost scary to have the absolute, complete attention of that many young minds...
(The book the children use) is written on the fourth grade reading level - lots of hard words - so I tipped in pages with the story rewritten on an easier reading level.  But by the end of the second week the children are clawing up my pages to get at the original text underneath."
It is, however, gaming's ability to create this experience at large scales, for an extended period of time and (even) across generations that has created what I have come to call the "Magic Formula" of game-based learning:  Game = flow (or more commonly, "fun") = increased attention = increased learning.

If you look across much of the academic literature on game-based learning (and in virtually all of the popular literature on the subject), you will likely find some variant on this magic formula.  Moreover, given everything I have written so far, this formula seems to make a certain amount of sense.

But it is wrong.

It is missing an important element, one that everyone recognizes just as soon as I mention it but one that very few people include in any discussion of game-based learning.  This missing element goes back to the very definition of "game".

You need to look no farther than Wikipedia to determine that (much like the word "intelligence"...) there is still a good bit of debate as to what defines a game.  So you don't have to click, here is a sample:
"A game is a system in which players engage in an artificial conflict, defined by rules, that results in a quantifiable outcome." (Katie Salen and Eric Zimmerman)
"A game is an activity among two or more independent decision-makers seeking to achieve their objectives in some limiting context." (Clark C. Abt)
"A game is a form of play with goals and structure." (Kevin J. Maroney)
My favorite definition, however, is by philosopher Bernard Suits and comes from his 1978 book, The Grasshopper:  Games, Life and Utopia.  According to Suits, a game is a "voluntary attempt to overcome unnecessary obstacles."  While undoubtedly glib, Suits has a point.  Jane McGonigal, who has been mentioned previously in this series, points to the game of golf as a perfect example of this definition in action.  If the true intent of the game were merely to get the ball in the hole, there are many easier ways of doing so besides making people hit the ball with a stick.  As if this weren't hard enough, we actually strive to make the game harder by adding unnecessary obstacles such as sand traps and water hazards.  Surely it would be easier to simply walk over and drop the ball in!

The most important word in this definition and the missing component to the Magic Formula of game-based learning is, for me, "voluntary".  We volunteer to play a game and because we volunteer, we have an expectation that it will be enjoyable from the outset.

Expectations are powerful things.  We know, for example, that the subjective experience of pain can be manipulated simply by changing the expectations regarding that pain.  We also know that teacher expectations about an individual's ability to learn can drastically alter learning outcomes.

You can test this yourself.  Imagine being forced to play a game you know you hate.  How much attention are you paying to the game?  How much learning do you think you might do if that game were associated with an instructional objective?  Ian Schreiber, game designer and professor at Columbus State Community College, has a wonderful term for this kind of learning experience -- "chocolate covered broccoli".

In short, games don't work because they capture attention; games work as teaching tools because they are voluntary activities that capture attention.

The good news is that "voluntary" is an analog condition not a binary one.  In other words, voluntary is not something that either exists or doesn't but, in fact, has degrees.  People will love certain games, hate certain games but, in general, will have a wide range of responses to the games they choose to (or have to) play.

I have seen this repeatedly in my own classes.  Every student inevitably has a favorite game and, equally inevitably, it is the lesson associated with that game that they most clearly remember.  Dealing effectively with this problem leads directly to -- 

Myth #3:  I need a game that teaches...

Tuesday, July 17, 2012

Myth #1: Game-based Learning Is New (The 5 Myths Of Game-based Learning)

Part 1:  Introduction

You would be hard pressed to find an explicit reference to game-based learning anywhere prior to 2000.  Google Trends (see chart to the right) only begins to register the term in the news in mid-2009.

Since 2009, however, game-based learning has started to crop up everywhere.  Mentions of game-based learning in academic literature have risen an average of 18% per year since 2008 and the New Media Consortium's 2012 Horizon Report on tech trends in higher education states that, within 2-3 years:
"...we will begin to see widespread adoptions of two technologies that are experiencing growing interest within higher education: game-based learning and learning analytics. Educational gaming brings an increasingly credible promise to make learning experiences more engaging for students, while at the same time improving important skills, such as collaboration, creativity, and critical thinking..." 
It certainly seems new so why do I call this a myth?

Game-based learning, whether you call it that or not, has been with all of us (and with the intelligence community in particular) for quite some time.  In the first place, there is hardly a teacher, alive or dead, who has not used a game in the classroom to help teach.  Remember playing Monopoly to learn about money?

If one can see the parallels between Sun Tzu's admonition 2500 years ago to "know the enemy and know yourself" and modern notions of intelligence and operations, then I think it is possible to argue that the first game with intelligence implications is the ancient Chinese game of Go.  In fact, Chinese strategic thinking is probably still being influenced by Go.

It is possible to argue the same about Chess, and Benjamin Franklin actually made this case (indirectly) in his famous essay on Chess:
"...Life is a kind of Chess, in which we have often points to gain, and competitors or adversaries to contend with, and in which there is a vast variety of good and ill events...  By playing at Chess then, we may learn: 1st, Foresight... 2nd, Circumspection (and) 3rd, Caution..."
What good intelligence professional would not want to have better foresight, be a bit more circumspect and exercise appropriate levels of caution?

http://en.wikipedia.org/wiki/Tafl_games
My favorite example along these lines, however, is the ancient Norse game of Hnefatafl.  It is an extraordinary game (see image to the left).  In the first place, it is asymmetric.  This means that the two sides are not evenly matched and, in fact, have entirely different victory objectives.  One player is typically (there are a number of versions of the game) surrounded and outnumbered by about 2 to 1.  This player's goal is merely to escape the board (not with everyone -- just the "king" needs to escape).  The other player's goal is to capture the king.  It is interesting to speculate what young Viking warriors were implicitly learning as they played these games night after night...

Learning through games for intelligence professionals took a massive leap forward in the 1800s.  While Clausewitz recognized that war was a game "both objectively and subjectively", it was left to another German, Baron Georg Leopold Von Reisswitz, to take the game, so to speak, to the next level -- Kriegspiel.

Kriegspiel, literally "war game" in German, was invented by Von Reisswitz in 1812 and modified and improved by his son.  It was not, however, until Helmuth Von Moltke became Chief of the Prussian General Staff in the 1850s that the game began to be used seriously as a training aid for officers.  It is noteworthy that one of the most influential books on Kriegspiel was written by Von Moltke's staff officer for intelligence, Julius von Verdy du Vernois.

Based on Prussian success with wargaming, many militaries adopted the system or made up their own.  Today, all militaries use war games of one sort or another (though they are often referred to as "conflict simulations") and they have grown beyond traditional force-on-force simulations to include political, economic and unconventional warfare factors as well (my thesis when I was in the Army, for example, was based on a political game I had designed).

Paper-and-pencil war games even had a brief surge of commercial popularity in the 1980s.  Today the industry is much reduced from its heyday, but it is still possible to find lots of people playing these types of games at events like Origins and Historicon and talking about them at sites like Board Game Geek and Consim World News.

No, game-based learning is not new and certainly not new to the intelligence community.  What is new, however, is the advent of the video game.

By any measure, video game sales have skyrocketed since the early 1990s (see chart at right).  Not only is revenue largely up since the end of the recession but the market for electronic games has drastically expanded.  Anita Frazier, an analyst for the NPD Group, which, among other things, examines the gaming industry in detail, outlines some of these new trends in the video below:



Jane McGonigal, game designer and researcher, claims that nearly half a billion people worldwide spend approximately 3 billion hours per week playing online games.  Anyone with a teenager knows that they game a lot, but few people know that one of the fastest-growing segments of gamers is actually older women.  So-called "casual games", like Farmville and Words With Friends, as well as smartphone-enabled games, such as Angry Birds, have taken gaming out of the basement and put it at the front and center of popular culture.

The goal, then, has become to tap into this rapidly growing medium for educational or "serious" purposes; to augment the entertainment experience with a learning experience - and this is precisely where we find the second myth. 

Next:  Myth #2:  Games Work Because They Capture Attention

Monday, July 16, 2012

The 5 Myths Of Game-based Learning (A Report From The Classroom)

Let me start this series of posts by saying - unequivocally - that I am a strong advocate of game-based learning.  It has worked for me personally, I have seen it work in the classroom, and I have read the research that, in general, suggests that game-based approaches can provide powerful new ways to learn.

But...

As someone who has spent the last three years applying at least some of the theory of game-based learning in the classroom, I can tell you that it is...well...tricky.

Don't get me wrong.  My intent is not to lead you on and then ultimately come to the conclusion that it can't be done or that it doesn't work or, even, that it is hard to do.  It is just trickier than I expected due, I think, to the "myths" that have sprung up about games and learning.  My hope is that this series of posts will help other teachers (particularly other university professors teaching intelligence studies...) to have a more realistic view of both the difficulties and the rewards of incorporating games into their classes.

Where did these myths come from?  I believe that they are a natural consequence of the inevitable distance between theory and practice.  Any practitioner will tell you that theory only works well...in theory.  Actually applying a pedagogical approach to a real world classroom with real world constraints and challenges is another thing entirely.

The broader conversation on game-based learning largely reflects this divide.  At one end of the spectrum there are the big picture thinkers, the evangelists, if you will, like Jane McGonigal.  McGonigal, a researcher and game designer (and one of my personal favorite experts on games and gaming), makes a strong case for games and game-based learning in her book, Reality is Broken.  If you don't have time to read her book, I highly recommend McGonigal's 2010 TED talk:



At the other end of the spectrum are the things that have actually been tried in class and have been shown to work at meaningful scales.  Here the pragmatists rule, and the best statement of that position I have heard comes from Assistant Deputy Secretary of Education James Shelton at last year's Games For Change Conference (Shelton's comments begin around minute 6, and some key takeaways come at minutes 11 and 13):



Games for Change Festival 2011: James Shelton, U.S. Department of Education from Games for Change on Vimeo.


(Note:  While education has been kicked around like a political soccer ball for what seems like forever, Shelton's entire speech and follow-on comments are worth a listen for anyone interested in solving the difficult problem of innovation in education.  You get the sense that this is a guy in the trenches, who understands the reality of the problem, has no political axe to grind and is willing to listen to anyone who has a good idea that can work on a large scale.)

Shelton's speech was not much discussed during or after the conference but it is, for me, a good representation of the practitioner's plea:  "I'll try anything; just show me that it really works."

In the gap between these two extremes, between the heady optimism of McGonigal and the blunt practicality of Shelton, live the 5 myths I intend to talk about in this series of posts. 

Next:  Myth #1 -- Game-based Learning Is New

Monday, July 2, 2012

Top 5 Things Only Spies Used To Do (But Everyone Does Now)

There has been a good bit of recent evidence that the gap between what spies do and what we all do is narrowing -- and the spies are clearly worried about it.

GEN David Petraeus, Director of the CIA, started the most recent round of hand-wringing back in March when he gave a speech at the In-Q-Tel CEO Summit:

"First, given the digital transparency I just mentioned, we have to rethink our notions of identity and secrecy...We must, for example, figure out how to protect the identity of our officers who increasingly have a digital footprint from birth, given that proud parents document the arrival and growth of their future CIA officer in all forms of social media that the world can access for decades to come."
Richard Fadden, the Director of the Canadian Security Intelligence Service (CSIS), added his own thoughts in a speech only recently made public:
"In today's information universe of WikiLeaks, the Internet and social media, there are fewer and fewer meaningful secrets for the James Bonds of the world to steal," Fadden told a conference of the Canadian Association of Professional Intelligence Analysts in November 2011. "Suddenly the ability to make sense of information is as valued a skill as collecting it."
Next I ran across a speech by Robert Grenier, a former case officer, chief of station and 27-year veteran of the clandestine service, delivered at a conference at the University of Delaware.  In it, he describes the moment he realized that the paradigm was shifting (and not in his favor):
"Grenier said he came to realize the practice of espionage would have to change when he received a standard form letter at a hotel overseas, while undercover, thanking him for visiting again.  When he realized electronic records now tracked where he had been for certain date ranges, he said he knew the practice of espionage was going to have to change.  “It was like the future in a flash that opened up before my eyes,” Grenier said."
(Note:  While I could not embed the video here, the entire one hour speech is well worth watching.  The part of particular relevance to this post begins around minute 8 in the video.   This is, by the way, fantastic stuff for use in an intelligence studies class).

Finally (and what really got me thinking), one of my students made an offhand comment regarding his own security practices.  I needed to send him a large attachment and asked for his Gmail account.  In response, he gave me his "good" address, explaining that he used his other Gmail address only as a "spam account", i.e. for when he had to give a valid email address to a website he suspected was going to fill his in-box with spam.

That's when it hit me.  Not only is it getting harder to be a traditional spy, it is getting easier (far easier) to do the kinds of things that only spies used to do.  The gap is clearly closing from both ends.

With all this exposition in mind, here is my list of the Top 5 Things Only Spies Used To Do (But Everyone Does Now) -- Don't hesitate to leave your own additions in the comments:

#5 -- Have a cover story.  That is precisely what my student was doing with his spam account.  In fact, most people I know have multiple email accounts for various aspects of their lives.  This is just the beginning, though.  How many of us use different social media platforms for different purposes?  Take a look at someone you are friends with on Facebook and are connected to on LinkedIn and I'll bet you can spot all the essential elements of a cover story.  Need more proof?  Watch the video below:


The only reason we think this ad is funny is that we intuitively understand the idea of "cover" and we understand the consequences of having that cover blown.

#4 -- Shake a tail.   It used to be that spies had to be in their Aston Martins running from burly East Germans to qualify as someone in the process of "shaking a tail."  Today we are mostly busy running from government and corporate algorithms that are trying to understand our every action and divine our every need, but the concept is the same.  The tools range from the simple -- using a search engine like DuckDuckGo that doesn't track you, or engaging private browsing ("porn mode") in Firefox or Chrome -- to the more sophisticated, like the popular script-blocking extension NoScript, to the most sophisticated, like using Tor or some other proxy service to mask your internet habits.  In short, we are using increasingly capable tools to help us navigate the internet without being followed.
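If you are curious what the Tor option looks like in practice, here is a minimal sketch.  It assumes a Tor client is already running locally on its default SOCKS port (9050) and that you have the Python requests library installed with SOCKS support (pip install requests[socks]); check.torproject.org is a real Tor Project service that reports whether your request arrived over the Tor network.

```python
# A minimal sketch: route a web request through a locally running
# Tor client.  Assumes Tor is listening on its default SOCKS5 port
# (9050) and requests was installed with SOCKS support.
import requests

# "socks5h" (note the "h") tells the proxy to handle DNS resolution
# as well, so even your name lookups don't leak outside of Tor.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# The Tor Project runs a checker that reports the IP address it sees;
# through Tor, that should be an exit node's address, not yours.
response = requests.get("https://check.torproject.org/api/ip", proxies=proxies)
print(response.text)  # e.g. {"IsTor": true, "IP": "..."}
```

Run it once with and once without the proxies argument and compare the two answers -- that difference is, in effect, the tail being shaken.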

#3 -- Use passwords and encrypt data.  Did you buy anything over the internet in the last week or so?  Chances are good you used a password and encrypted your data (or, if you didn't, don't be surprised when you wind up buying a dining room set for someone in Minsk).  Passwords used to be reserved for sturdy doors in dingy alleyways, for safe houses or for entering friendly lines.  Now they are so common that we need password management software to keep up with them all.  Need more examples?  Ever use an HTTPS site?  Does your business make you use a Virtual Private Network?  The list is endless.
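The password half of this is worth a quick sketch, because it shows how far spycraft-grade practice has trickled down into everyday engineering.  A well-run website never stores your actual password; it stores a salted, deliberately slow hash of it, so that even a stolen database does not give up the original.  The snippet below is a minimal illustration using only Python's standard library (the iteration count is illustrative, not a security recommendation):

```python
# A minimal sketch of salted password hashing -- an illustration,
# not a vetted security implementation.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real systems tune this upward over time

def hash_password(password):
    """Return a (salt, digest) pair suitable for storage."""
    salt = os.urandom(16)  # a unique random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```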

#2 -- Have an agent network.  Sure, that's not what we call them, but that is what they are:  LinkedIn, Yelp, Foursquare and the best agent network of all -- Twitter.  An agent network is a group of humans whom we have vetted and recruited to help us get the information we want.   How is that truly different from making a connection on LinkedIn or following someone on Twitter?  We "target" (identify people who might be useful to us in some way), "vet" their credentials (look at their profiles, websites, Google them), "recruit" them (Easy-peasy!  Just hit "follow"...), and then, once the trust relationship has been established, "task" them as assets ("Please RT!" or "Can you introduce me?" or "Contact me via DM").  Feel like a spy now (or just a little bit dirtier)?

#1 -- Use satellites.  Back in 2000, I went to work at the US Embassy in The Hague.  I worked on a daily basis with the prosecutors at the International Criminal Tribunal for the former Yugoslavia.  That collaboration, while not always easy, bore results like the ones that led US Judge Patricia Wald to say, "I found most astounding in the Srebrenica case the satellite aerial image photography furnished by the U.S. military intelligence (Ed. Note:  See example) which pinpointed to the minute movements on the ground of men and transports in remote Eastern Bosnian locations. These photographs not only assisted the prosecution in locating the mass grave sites over hundreds of miles of terrain, they were also introduced to validate its witnesses' accounts of where thousands of civilians were detained and eventually killed."  It is hard to believe that only 12 years ago this was state-of-the-art stuff.

Today, from Google Earth to the Satellite Sentinel Project, overhead imagery combined with hyper-detailed maps is everywhere.  And that is just the start.  We use satellites to make our phone calls, to get our television, and to guide our cars, boats and trucks.  We use satellites to track our progress when we work out and to track our packages in transit.  Most of us carry capabilities in our cell phones, enabled by satellites, that were not even dreamed of by the most sophisticated of international spies a mere decade ago.

-----------------------

If this is today, what will the future bring?  Will we all be writing our own versions of Stuxnet and Flame?  Or, more likely, will we be using drones to scout the perfect campsite?  Feel free to speculate in the comments!