Tuesday, February 7, 2012

Intelligence And Crowdmapping

I realized today that, while I had written in the past about the idea of crowdmapping, I had never actually used that term in a post before.

That was a mistake.

Don't get me wrong: group-editable maps have been around for some time and are quite successful.  We have used CommunityWalk, for example, in a number of projects, and it has served its purpose excellently.

CommunityWalk Map - North Caucasus Violence Sep-06 to Nov-06



Likewise, automatically edited maps are also quite helpful.  The comprehensive map at RSOE EDIS, for example, just recently got some new competition with Google Public Alerts.


Crowdmapping, though, is something a bit different.  Here, dozens and sometimes hundreds of people provide information from a variety of sources (including the web, of course, but also SMS and Twitter) that is then mapped in real time.

Right now, this space is occupied almost exclusively by Crowdmap.com, an offshoot of the much admired Ushahidi project.  It is not too hard to see a time, however, when other companies and organizations will enter this space with competing offerings.

I, along with a small group of intrepid students, have been experimenting with this system for a few months and, while managing the input has proven to be more challenging than expected, the potential (and the relative sophistication of Crowdmap) is enormous.

The best way to get a sense of the value of a crowdmap, however, is to look at some examples.  Below are three of my favorites:  Syria Tracker (a map tracking eyewitness accounts of missing, killed, or arrested people in Syria in English and Arabic), China Strikes (a map tracking instances of labor unrest in China), and Energy Shortage (a map tracking reports of energy-related issues worldwide).  You can see all of these maps below (Syria Tracker is live; the other two maps need to be clicked on to get to the live versions).





Thursday, February 2, 2012

This Is How The Daleks Got Their Start... (Drone Swarms!)

Some frighteningly clever people at Penn's GRASP Lab have developed software that allows harmless children's toys to swarm...

Over a million people have already seen the video.  If you aren't one of them, take a look below:



Oh, and if you are unfamiliar with the Dalek reference, you can find them on Wikipedia or here.  Or just watch the vid below:




If you want a more serious look at the potential impacts of this technology, take a look at John Robb's recent post (via @nof).

Thursday, January 26, 2012

Mercyhurst College Is Now Mercyhurst University (WOOT!)

For the last several years Mercyhurst has been going through the administrative process of changing our status from a "College" to a "University".  Yesterday, we received notice from the State of Pennsylvania that our application had been approved.
(Note:  I am going to talk about what I think this means and some of the new things that are likely to emerge as a result, but I am also interested in what you have to say, so please leave a comment if you have one!)
This process actually began even before the first piece of paperwork went in.  We changed our Carnegie Classification a number of years ago to better represent the courses, degree programs and research opportunities we offered.

Now that the status issue is resolved, we are all dealing with the inevitable administrative details that result from such a change, the first of which is our name.  The Mercyhurst College Institute for Intelligence Studies (MCIIS) is likely to change, at some point, to the overwhelmingly favored Institute for Intelligence Studies at Mercyhurst University (IIS-MU).  The institute's website (mciis.org) will likely get an upgrade at about the same time.

More interesting than the administrative details are some of our longer range initiatives.  The University (going to take me awhile to get used to writing that...) intends to pursue doctorate programs in some of our majors, for example.  Our hope is to be among the first to do so.  I don't think we want to pursue a PhD-type program, however, as there are already a number of PhD programs that would allow a student to focus on the academic aspects of studying intelligence.

Rather, I think we should, like MDs and JDs, offer a professional doctorate (DI? ID? InD?).  The focus of such a degree, if I had my way, would be on application -- the actual doing of intelligence analysis -- rather than just talking about it.  Specifically, I would like us to provide our doctoral students a chance to become better leaders and managers in addition to gaining increased skills as analysts.

We are obviously some distance from this objective but the conversation about the future direction of the intelligence studies program is starting.  Now is the time to chime in!

Monday, January 9, 2012

What Makes An Easy Question Easy? (DAGGRE.org)

For most of last year I have had the privilege of working with the DAGGRE Team on the Intelligence Advanced Research Projects Activity's (IARPA's) Aggregative Contingent Estimation (ACE) Project.  While all the real scientists have been busy exploring research questions involving Bayesian networks and combinatorial markets, this old soldier has been focusing on more mundane things like "What makes an easy question easy?"

(Note:  If you have not had a chance to check out the DAGGRE.org site and its mind-numbingly cool companion blog, Future Perfect, you should.  Three reasons:  First, it is pretty interesting research that could impact the future of the intel community.  Second, you can actually participate in it.  Third (and maybe most important to many of the readers of this blog), the DAGGRE team has gone the extra mile to make sure your personal data, etc., is secure while participating (check out the FAQ page for all the details).)
As I explained in this post, having some way to evaluate and even rank intelligence requirements according to difficulty is important.  Analysts are supposed to be accurate, but if you aren't also evaluating the difficulty of the underlying question, two equally accurate analysts could be miles apart in terms of overall quality.  It would be kind of like saying a little leaguer who hits .400 is as good as a major leaguer hitting .400.

While that distinction is easy to see in baseball, it is much more difficult in intelligence analysis.  Questions come in all shapes and sizes and vary in an enormous number of ways.  There is also a psychological, subjective aspect to it:  Questions that seem tough to some analysts may seem very easy to others.  On its face, it appears difficult if not impossible to come up with a system that can reliably evaluate and categorize questions by difficulty level.

Which is why I want to try.

And I may be making some progress.  I think I have figured out how to spot an "easy" question.  DAGGRE, you see, is a prediction market.  This means that people assign probabilities to the outcomes of questions.  Imagine, for example, I asked if Sarkozy would still be the president of France on 1 JUN 2012 (he is running for re-election in April and May).  Now imagine that you thought the odds of Sarkozy's re-election were 80%.  You could establish your position in the market at that "price" and others would be free to do the same (the Iowa Electronic Markets do this for the US election, by the way).

The market would reward people that were right on 1 JUN and would heavily reward those that were right when lots of others were wrong.  Studies have shown that, on average, these types of markets are pretty good at making these kinds of estimates.
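To make the reward structure concrete, here is a minimal sketch of one common scheme for this kind of market, a logarithmic market scoring rule.  The function, its name, and the example probabilities are all illustrative, not DAGGRE's actual implementation: a trader's payoff depends on how much they moved the market toward the right answer.

```python
import math

def log_score_payoff(p_trader, p_market_before, outcome):
    """Payoff for moving the market probability from p_market_before to
    p_trader under a logarithmic market scoring rule.  outcome is True if
    the event happened.  The payoff is positive when the trader moved the
    price toward the realized outcome, negative when they moved it away."""
    p_t = p_trader if outcome else 1 - p_trader
    p_m = p_market_before if outcome else 1 - p_market_before
    return math.log(p_t) - math.log(p_m)

# A trader who said 80% when the market said 50%, and was right:
print(log_score_payoff(0.80, 0.50, True))   # positive reward
# The same trade when the event did NOT happen:
print(log_score_payoff(0.80, 0.50, False))  # negative (a loss)
```

Note the asymmetry this captures: being right when the market already agreed with you (say, moving 95% to 97%) earns almost nothing, while being right against the crowd pays well, which is exactly the "heavily reward those that were right when lots of others were wrong" behavior described above.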

Now, imagine if I asked you to estimate the chances that Sarkozy would still be president of France at 1700 tomorrow.  Sarkozy is not sick (at least I hope he is not) and there are no direct, immediate threats to his presidency.  There is no reason to expect that he would not still be president tomorrow.  Likewise, successfully predicting that he will still be in office tomorrow is no sign of great analytic ability.  The question is too easy.

Generalizing this pattern, I think it is worth exploring the idea that "easy" questions are those that start and end their run on a prediction market close to either 0% or 100% probability, do not vary much during the course of that run, and, finally, resolve in accordance with their probabilities (i.e., they happen if close to 100% and don't happen if close to 0%; see the picture that accompanies this post for an idea of what such patterns might look like).  Furthermore, I think that these kinds of questions will see much less trading activity than other ("not easy") questions.

Of course, the problem with this definition is that it only identifies easy questions after the fact, after the question has been resolved.  My hope, however, is that by examining the set of questions that we already know are easy (at least under this definition), we might be able to see other patterns that will allow us to identify easy questions when they are asked rather than only after they are answered.
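The after-the-fact definition itself is mechanical enough to sketch in a few lines.  The thresholds below (within 10 points of an extreme, no more than a 15-point swing) are my own illustrative choices, not anything DAGGRE has settled on; the point is only to show the three tests from the definition: extreme start and end, low variation, and resolution in accordance with the prices.

```python
def is_easy(prices, resolved_yes, extreme=0.10, max_swing=0.15):
    """Flag a resolved question as 'easy' under the working definition:
    the market price started and ended near 0% or 100%, never strayed far
    in between, and the question resolved the way the prices said it would.
    prices is the question's price history as probabilities in [0, 1]."""
    start, end = prices[0], prices[-1]
    near_one = start >= 1 - extreme and end >= 1 - extreme
    near_zero = start <= extreme and end <= extreme
    if not (near_one or near_zero):
        return False                      # never pinned to an extreme
    if max(prices) - min(prices) > max_swing:
        return False                      # varied too much during its run
    # Must resolve in accordance with the prices:
    return resolved_yes if near_one else not resolved_yes

# "Will Sarkozy still be president at 1700 tomorrow?" -- pinned near 100%:
print(is_easy([0.95, 0.97, 0.96, 0.99], resolved_yes=True))   # True
# A contested question that wandered before settling:
print(is_easy([0.60, 0.40, 0.70, 0.95], resolved_yes=True))   # False
```

A real screen would presumably add the low-trading-activity test mentioned above, but that requires volume data rather than just the price history.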

Our (I say "our" because I am working on this with one of our superb grad students, Brian Manning) first attempt to get at these patterns will be a simple one -- question length.  We hypothesize that, on average, questions that match the "easy" pattern I described above will be shorter than other questions.  When you think about it, it makes some sense.  After all, "What time is it?" seems like an easier question to answer than "What time is it in Nigeria?"

Brian has found some research that says that, subjectively, people don't perceive longer questions as necessarily more difficult.  The difference, of course, is that we have a definition of "easy" that is based on objective criteria.  Still, I think it best to start with the easiest possible measurement and then go from there.  Not sure where I will end up or if this will be a dead end but I will keep you posted...
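Testing the length hypothesis is about as simple as measurement gets: compare the average word count of the questions flagged "easy" against the rest.  The two sample questions below are mine, purely for illustration, not items from the actual DAGGRE question set.

```python
def mean_length(questions):
    """Average length, in words, of a list of question texts."""
    return sum(len(q.split()) for q in questions) / len(questions)

# Hypothetical stand-ins for questions flagged 'easy' vs. 'not easy':
easy_questions = [
    "Will Sarkozy still be president of France at 1700 tomorrow?",
]
other_questions = [
    "Will Sarkozy win the second round of the French presidential "
    "election scheduled for 6 May 2012, given current polling?",
]

# The hypothesized pattern: easy questions are shorter on average.
print(mean_length(easy_questions) < mean_length(other_questions))
```

If the pattern holds across the real question set, word count becomes a cheap before-the-fact proxy for the after-the-fact price-history definition, which is the whole point of starting with the easiest possible measurement.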

Wednesday, December 14, 2011

Is ISI Really The Best Intelligence Agency In The World?

http://nationalpostnews.files.wordpress.com/2011/11/spies.gif
According to the National Post, Canada's conservative newspaper, it is.

That is just one of the interesting tidbits reported in this graphic, titled The State of The Global Spy Game (download the PDF here).

Following Pakistan's ISI comes Mossad in the number 2 slot, with MI6 taking third and the CIA following up in fourth place.

In addition to the Top Ten list, most of the graphic outlines a series of assassinations, explosions, spying, cyber spying and "convenient accidents" that the Post ties to various intelligence organizations over the last ten years.

Finally, there are some charts which claim to be based on some of Richards Heuer's work regarding the demographics of spies: where they come from in government, how much education they have, etc.  The graphic provides no comparative data to show whether any of the categories identified are larger or smaller than they are in the relevant population from which they are drawn, so it is difficult to draw conclusions, but it is intriguing nonetheless.

Given the nature of the article and difficulty associated with making these kinds of judgements, I am not surprised at the results but it is still an interesting question to ask:  Who has the world's best intel service? 

(Hat tip to Christophe Deschamps at Outils Froid and his must follow Twitter feed!)