Friday, December 4, 2015

The Umbrella Man: A Must-See Cautionary Tale About The Inherent Unlikelihood of Conspiracy

This is not to say that there are no conspiracies, but only to say that analysts should be cautious about leaping to that kind of conclusion at the outset. (If you can't see the video, click on this link to view on the NY Times page)

Tuesday, September 8, 2015

Fermi Questions: Creating Intelligence Without Collection

Collection is, for many, a fundamental part of intelligence and, in extreme cases, its essential purpose.  What would we be without all our drones and spies and sensors?

What if I told you that you can do intelligence without any collection at all?

You probably wouldn't believe me ... but ... you'd likely admit that the advantages would be substantial.  It would be blazingly fast - no waiting around for satellites to come into position or agents to report back.  It would be mind-numbingly safe - virtually no footprint, no assets to risk, no burn notices to issue.  It could reduce as much as 90% of the uncertainty in any given intelligence problem at essentially zero cost.

What is this prodigious procedure, this miracle methodology, this aspirational apex of analytic acumen?

Fermi questions.

Enrico Fermi was a mid-twentieth century physicist who created the first nuclear reactor.  He also taught physics at the University of Chicago.  He liked to ask his students questions like, "How many piano tuners are there in Chicago?"  

In the pre-internet days, this kind of question required a tedious trip through the phone book to determine the number.  Even today, using brute force to answer this question is not a trivial exercise.  Students almost always balked at the work involved.

Fermi's approach, however, was different.  He wasn't asking, "What is the most direct route to the answer to this problem?"  Instead he asked a slightly different and, for intelligence purposes, vastly more useful, question: "How close can I get to the answer with what I already know?"

So.  What did Fermi already know?  Well, the population of Chicago is about 3 million, and from this he could immediately deduce that there could be no more than 3 million piano tuners and that the minimum was zero.  That may not sound particularly useful, but just recognizing these facts already limits the problem in useful ways and points the way towards how to make the estimate better.

We know, for example, that the number of piano tuners has to be driven by the number of pianos in Chicago.  How many of those 3 million people have pianos?  Here we could tap into our own experience.  How many people do you know?  How many of them have pianos in their houses?

Some will say 1 in 10.  Some might say 1 in 100.  Even this wide range is very useful.  Not only does it narrow the problem significantly but it also highlights one way in which we could get a better estimate if we absolutely have to (i.e., get a more exact count of people with pianos in their houses).  But we want to do this without collection, so let's carry on!

With the average household being a shade under 4 people, we can estimate that there are about 750,000 households in Chicago.  We can further refine that to between 7,500 and 75,000 pianos (depending on whether you thought 1 in 100 or 1 in 10 households had a piano).

Oh, I know what you are thinking!  What about all the non-household pianos - at schools and such - that you are conveniently leaving out?  I would say that my high-end estimate of the number of pianos includes them and my low-end estimate does not, so they are in there somewhere.  It is a "good enough" answer for right now for me.  For you that might not be the case, however, so you can make your own estimates about what these numbers might be and put them into the mix.

Assume each piano needs tuning about once a year.  A tuner working about 250 days a year (weekends, vacation and holidays excluded) on about 2 pianos a day can handle 500 pianos annually, which means Chicago needs between 15 and 150 piano tuners.
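
For readers who like to see the arithmetic laid out, here is a minimal Python sketch of the estimate above.  Every number in it is one of the rough assumptions from the text (including roughly one tuning per piano per year), not collected data:

```python
# A minimal sketch of the piano tuner Fermi estimate. All defaults are
# rough assumptions, not collected data.

def fermi_piano_tuners(population=3_000_000,
                       people_per_household=4,
                       piano_rate=1 / 10,            # share of households with a piano
                       tunings_per_piano_per_year=1,
                       workdays_per_year=250,
                       pianos_tuned_per_day=2):
    households = population / people_per_household           # ~750,000
    tunings_needed = households * piano_rate * tunings_per_piano_per_year
    tunings_per_tuner = workdays_per_year * pianos_tuned_per_day  # 500 per year
    return tunings_needed / tunings_per_tuner

high = fermi_piano_tuners(piano_rate=1 / 10)    # about 150 tuners
low = fermi_piano_tuners(piano_rate=1 / 100)    # about 15 tuners
```

Swapping any one assumption for a better number immediately tightens the estimate - which is exactly the path to improvement the method exposes.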

How many piano tuners are there really in Chicago?  Wolfram Alpha is one of the best search engines to use to answer these kinds of questions.  It permits users to ask natural language questions and then dips deeply into public databases to extract precise answers.  When asked, "How many piano tuners are there in Chicago?" this is what you get:


Note that Wolfram gives us the number of all musical instrument repairers and tuners - 290 as of 2009.  Certainly not all of them are piano tuners.  In fact, once you consider just how many instruments besides pianos need professional attention and subtract the repairers who do not tune pianos at all, you are lucky if a third of these musical instrument repairers and tuners can actually tune a piano.

More importantly, a third of 290 falls comfortably within the 15-150 limits derived from our Fermi process.

Without leaving our chairs.

Intelligence without collection.

What if relying on Fermi questions results in really wrong answers?  First, I could say the same thing about any intelligence methodology.  Very few of them have been tested to see if they actually improve forecasting accuracy, and all of them take time and resources to implement.  All of them can be wrong.  Here, at least, both the logic chain and the path to improving the estimate are obvious.

Second, I would ask, what level of precision do you actually need?  Norm Augustine, former CEO of Lockheed Martin, used to say, "The last 10 percent of performance generates one-third of the cost and two-thirds of the problems."  Augustine was talking about airplanes, but he could just as well have been speaking of intelligence analysis.  Getting ever narrower estimates costs time and money.  Good enough is often - in fact, surprisingly often - good enough.


Third, it is unlikely to give you really wrong answers - say one or two orders of magnitude off.  This is one of the best benefits of going through the Fermi process.  It allows you to have a good sense of the range in which the right answer will likely fall.   For example, if, before you had done a Fermi analysis, someone came up to you and said that there are 100,000 piano tuners in Chicago, you might not question it.  A Fermi analysis, however, suggests that either something is really wrong with your logic or, more likely, that the person does not know what they are talking about.  Either way, the red flag is up and that might be just enough to prevent a disastrous mistake.

You can easily try this method yourself.  Pick a country that you know little about and try to estimate the size of its military based on just a few easily found facts such as population and GDP.  Once you have gone through the process, check your answer against an authoritative source such as Jane's - oh! - and please do not hesitate to post your results in the comments!
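
If you want to see the shape of that exercise in code, here is one way to sketch it.  The participation-rate range - roughly 0.2% to 1% of a country's population under arms - is my own illustrative assumption, not a collected figure, and the country is hypothetical:

```python
# Bound a country's active military size from population alone.
# The participation rates are illustrative assumptions, not real data.

def military_size_bounds(population, low_rate=0.002, high_rate=0.01):
    """Return a (low, high) Fermi-style bound on active military personnel."""
    return population * low_rate, population * high_rate

# A hypothetical country of 50 million people:
low, high = military_size_bounds(50_000_000)    # roughly 100,000 to 500,000
```

As with the piano tuners, the point is not the exact number but the sanity-checking range it gives you before you consult any source.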

By the way, I routinely use this method to get students to answer all sorts of interesting and seemingly intractable problems like the number of foreign government spies working within the US Intelligence Community.  The answer we get is usually right around 100 which always seems to surprise them.

Finally, if you are interested in integrating Fermi Problems into your tradecraft, there are lots of good resources available.  One of the best has been put together by the Science Olympiad, which actually holds a Fermi Problem competition each year.

Friday, August 7, 2015

Cheap, Re-usable Cell Phone Microscope? Yeah, It's A Thing...

For the last couple of years I have been exploring the idea of intelligence support to entrepreneurs.  The cool thing about this is that I get exposed to lots of new ideas.  The most recent - and one of the most interesting - products I have seen is the Button Microscope.

This is a microscope that you can attach to the lens of any cell phone with a camera.  It immediately turns it into a powerful microscope.  To be honest, others have done much the same thing but their products tend to be clunky, DIY projects that require far more patience than I have for that sort of thing.

The Button Microscope just works.  More importantly, it is going to be pretty inexpensive to produce and re-usable as well.  

I can't show you the prototypes I have been playing around with this morning (top secret, hush-hush stuff, you know) but I can show you some of the pics I took with them (with zero training I should add).


This first pic is one of a piece of graph paper I had lying around.  I edited both the left and the right image for size and brightness in the online photo editor, PicMonkey, but other than that both images are straight from my cell phone.


You can get a little bit better feel for the power of the microscope in this image.  On the left is the venerable Intelligence Analyst's Deck Of Cards (still available for sale...ahem...).  On the right is a close-up of the box (focused on the "L" in "Analyst's").  You can see that the microscope has a distinct focal point and that the image blurs some at the margins.  That may be an inherent feature of the device or it may just be that I am a pretty poor photographer.  I'll need to play around with it some more to see.


To me, this is the most impressive image set.  On the left you see one of the playing pieces from my game, Cthulhu Vs. Vikings.  On the right you see an image taken using the Button Microscope from the top down.  These pieces were all printed on the 3D printer and the macro view allows you to see every layer quite clearly and captures a surprising amount of detail even as the playing piece recedes from the focal point.

The broader intel/investigative implications of a device like this are pretty interesting to contemplate.  Clandestine collectors who are looking to get extreme closeups of, I don't know, circuit boards and such will love it.  Investigators looking for trace evidence or fingerprints are going to love it too (if you have a clever idea for something like this, drop it in the comments!).

When can you get one of these amazing devices for your own cell phone?  Well, we hope to launch a Kickstarter campaign in October to fund the initial production run.  

Next we will add a mass spectrometer (currently available for $249 - no shit) and we will be well on our way to a tricorder.  Oh, wait.  That's due in January.

Tuesday, July 14, 2015

How Did I Miss This? YouTube Now Has A 360 Degree Video(!) From North Korea(!)

You read that title right, sports fans!  360 degree videos.  As in you can now decide where you want to look, left, right, up or down in a video.  Take a look at this recent video shot by a couple of guys visiting North Korea...


How does it work?  Incredibly simply!  Just click the arrows in the circle in the upper right hand corner of the video image.  Take a look at the annotated screenshot to the right if I am not being clear enough.

Right now it appears to only work on Chrome or Android devices and I found that other videos (and there are a growing number of them) often had to pause to buffer.

The North Korean video was shot with an Etaniya camera and some specialized software.  Apparently YouTube (via Google) is working with the software and the hardware manufacturers to make it easier.

In the interim, there are some guides starting to be produced to help you get going making your own 360 degree video.

(H/T to WK!)

Tuesday, June 16, 2015

Why The Most Important Question In Game-based Learning Is "Who Will Fund The Game Genome Project?" (Part 2 of 2)


What's Missing From This Picture?

I hate Monopoly.  If there was a time when I liked Monopoly, I can't remember it.  Even today, when I dream of hell it features an endless game of Monopoly played with Hitler, Stalin and Pol Pot (Don't ask...).

What if game C in the image above (and also featured prominently in Part 1 of this series) is Monopoly?

All the cool databasing and meeting and organizing in the world aren't going to help me learn if I absolutely hate the game that is supposed to teach me.  Resolving this problem is tricky and it starts with the question, "What is a game?"


I am a big fan of Bernard Suits' definition of a game: "Games are a voluntary attempt to overcome unnecessary obstacles."

Note:  At this stage, it is typically obligatory to write a lengthy discussion of all the other definitions of "game" and how Suits succeeds in part and fails in part...blah, blah, blah.  You can find this sort of stuff anywhere - just Google it.  So, in the interest of time, let's just pretend I have already written this essay (OK, OK, "brilliant essay," if you insist).  Now we can get to the point.

The key thing that Suits' definition adds to the discussion of games in the context of learning is that games are voluntary.  Think about it.  If, at some level, the learner is not motivated to play the game by the game itself, it isn't really a game for that learner (kind of like The Hunger Games aren't really games for Katniss Everdeen...).

What Is The Game Genome Project?

If the missing piece from the picture above is the preference of the learner/player, then the question becomes, "How do we determine those preferences?"  To put it another way, if Rock and Country and Classical were insufficient to define musical preferences, why should we think that Role-playing, Collectible Card or First Person Shooter are good enough to define game preferences?



The truth is, we shouldn't.  The Game Genome Project would seek to do to games what the Music Genome Project did to music - break games down into their component parts, validate the relevance of those parts in determining player preferences and then test that system so that we can reliably predict game preferences across learners/players and genres.

Some of this kind of work is already being done, albeit without the focus on education.  Take a look at BoardGameGeek, for example.  BGG is arguably the web's best resource for tabletop games and its advanced search feature allows users to search by hundreds of categories, subcategories and mechanics as well as by number of players and playing time.  

The tens of thousands of amateurs and professionals who have contributed to BGG over the years have done very good work in crafting all these elements of board games but which of these categories actually matter?  And what about video games?  Do any of these categories and subcategories cross over?  

Yes, there is a lot of work to do, but imagine if such a system were fully realized.  Teachers could go to one site, input their students' preferences and the teacher's learning objectives, and a list of games would pop up.  Even more important, a student faced with a learning challenge could input his or her preferences and the learning objectives and find a list of games that would make the effort not only fruitful but fun.

The ability to reliably connect learner/player preferences in games to learning objectives in classes across the full spectrum of tabletop and video games would, in turn, transform game-based learning from the pedagogical technique du jour to a lasting  and important part of the educational landscape.

Who Will Fund The Game Genome Project And Why Is This Question So Important?

If I am right about the importance of the Game Genome Project to the future of game-based learning, then who will fund it?

The first possible source is, of course, private investment.  A Pandora-like game recommendation engine makes about as much business sense as Pandora itself.  Pandora, however, lets you listen to music it thinks you will like and then makes money when you buy it (and from ads, of course, but that would be true for any website).

Since most games take longer than 3 minutes to play (or even to download...), it is unclear to me if this business model would work as well (or at all) for games.  More importantly, private investors are unlikely to want to invest in the hard work of tying learning objectives from all of the various curricula to the games.  It is something that only someone with deep pockets and a financial incentive (like an educational publisher?) might be able to attempt.

Government could do this, of course.  It looks like a good NSF or Dept. of Education grant, perhaps.  The military or intelligence community could certainly do it but would be highly likely to focus almost exclusively on a narrow range of skills and games.

Whoever does it, it will have to be done. Until we are able to connect game to learning objective and learner to game, game-based learning is likely to remain a niche teaching technique, full of unrealized potential.

Monday, June 15, 2015

Why The Most Important Question In Game-based Learning Is "Who Will Fund The Game Genome Project?" (Part 1 of 2)

Way back in 2000, two researchers, Will Glaser and Tim Westergren, began what was then called The Music Genome Project.  It was designed to categorize music by more than 400 different "genes" or characteristics of the music.  The goal was to build a better music recommendation engine.

Today, this project is better known as Pandora.

Glaser and Westergren's fundamental insight was that breaking music down into broad general categories such as Rock or Pop or Country wasn't very useful when it came to making recommendations.  Some people liked music with male vocalists or heavy beats or a fast tempo and no one liked all of country music or everything produced that was labelled "rock".  

In fact most people liked a little bit of everything.  Sure, they had genre preferences, but that didn't keep the Jethro Tull fanatic from liking (and buying) the occasional Mike Oldfield album (ahem...not that I know anyone who would do such a thing...).

Thus the Music Genome Project was born.  By analyzing the genetic makeup of each song, the Project wasn't just able to better dissect individual pieces of music.  It was actually able to make reliable cross genre recommendations.  Oh, you like this driving, 120 beats per minute, sung by a female vocalist with lots of guitar distortion rock anthem?  Then you might also like this hip-hop track with many of the same musical genes!

What Does This Have To Do With Game-based Learning?

This isn't going to sound that earthshaking but it was to me the first time I realized it:  All games teach.  You can design a game that will explicitly (or implicitly) teach something like math or grammar but you don't have to.  With all of the good games, both video and tabletop, that are out there, it is not difficult to find a game that can be used to teach almost any K-12 and many university level subjects.  

How many classrooms routinely use Monopoly, for example, to help teach basic addition and subtraction or units of currency?  Monopoly certainly wasn't designed with this purpose in mind but it serves that purpose nonetheless.  

While I might be bold in my assertion that every subject is covered, I would argue that, if I am wrong, I am not wrong by much.  This is the golden age of gaming.  There are more games being produced (and more good games) than at any other time in human history.  The selection is already immense and growing.  In fact, it might be more accurate for me to say that, while I might be wrong, I won't be for much longer.

So, to put it more formally, you can connect all games to one or more learning objectives (see image to the left).  I am using the term "learning objective" loosely here.  Your learning objectives may come from a formal document, such as the Common Core, or from a less formal desire "to teach these darn kids something about X".

Given the prevalence of formal standards in modern education, however, it is pretty easy to imagine (though infinitely less easy to actually do...) professional educators and gamers sitting down together and dissecting every game for the learning objectives that each game addresses (i.e. the things each game teaches).

Eventually - and, of course, you would start with the most popular games and the most important learning objectives - you would have a database that could answer the question, "What game teaches this?"  Almost certainly, multiple games will cover the same learning objectives and some games will cover more relevant learning objectives than others.  It is conceivable that a teacher would be able to query this database and find a single game (See image below) that adequately addressed all of the learning objectives for a particular block of instruction.
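
To make the idea concrete, here is a toy sketch of the kind of query such a database would support.  The games, the objective labels, and the scoring are all invented for illustration - a real system would draw on validated learning objectives and thousands of dissected games:

```python
# Toy game-to-learning-objective database and query.
# All entries and objective labels are invented for illustration.

GAME_OBJECTIVES = {
    "Monopoly": {"basic addition", "basic subtraction", "units of currency"},
    "Scrabble": {"spelling", "vocabulary"},
    "Ticket to Ride": {"basic addition", "route planning", "geography"},
}

def games_covering(objectives):
    """Rank games by how many of the requested objectives they address."""
    scored = sorted(((len(objs & objectives), name)
                     for name, objs in GAME_OBJECTIVES.items()), reverse=True)
    return [name for score, name in scored if score > 0]

games_covering({"basic addition", "units of currency"})
# Monopoly covers both requested objectives; Ticket to Ride covers one.
```

Even this trivial version shows the payoff: multiple games surface for the same objectives, ranked by coverage, which is exactly the "What game teaches this?" question.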


Next:  What's Missing From These Pictures?

Wednesday, June 3, 2015

Intelligence And Vigilantes

(Note: This is the third and final entry in a three part series on some of the things I have learned about intelligence support to entrepreneurs from running a number of crowdfunding campaigns. For Part 1, click here and for Part 2, click here.)


Moros is a comic book series by Josh Lucas.  Loosely based on our hometown, Erie, PA, Moros tells the story of a former soldier turned policeman who becomes a vigilante to rid his town of a drug that he takes himself.

Josh successfully funded his third issue of the comic with a Kickstarter campaign that we helped him run back in April.

Josh was an experienced crowdfunder when I met him.  He had funded his first issue with a successful IndieGoGo campaign and had spent the time since that first issue working on his second issue and learning what he could about the comic book industry.

What he learned and what I have seen first hand with almost all of the entrepreneurs I have worked with (myself included) is that there is a kind of insanity that grips you when you are working on these projects.  It is almost impossible for you to see the world as it is.  Instead, you insist that the world is as you want it to be.  

Most intelligence professionals know this problem better as the Intel-Ops Divide.  The argument goes something like this:  Intel and ops need to be kept separate.  If they aren't, the intel guys run the risk of becoming so enamored with the plans the ops guys come up with that intel starts to see all the evidence not as it is but as ops hopes it will be.  This makes the intel guys useless to the organization.

The problem with entrepreneurs is that they don't typically have enough resources to be able to keep intel and ops separate.  So, what is an entrepreneur to do?  It seems to me that successful entrepreneurs manage this problem by asking dramatically different questions of intelligence professionals than the ones asked by either unsuccessful entrepreneurs or traditional leaders.

There is a growing body of evidence (produced largely by the Darden School of Business at the University of Virginia) that successful entrepreneurs and innovators look at problems in fundamentally different ways from the rest of us.  Specifically, they use "effectual reasoning" (as opposed to causal reasoning) and five specific techniques to help them make decisions:  

  • Bird in Hand.  "What do I have at hand and what can I do with it right now?" are the kinds of questions that emerge from the Bird in Hand Principle.  The kinds of intelligence questions that arise from this principle focus on expanding the entrepreneur's understanding of what resources are immediately available for use.
  • Affordable Loss.  Good entrepreneurs don't focus exclusively on the potential gain.  Instead, they work hard to understand what they can afford to lose at each step.  Helping the entrepreneur understand the full nature of the downside risk is a good task for intel.
  • Lemonade.  This principle is about not only taking advantage of surprises (both good and bad) but welcoming them.  It means that intel support to entrepreneurs has to be very flexible and very fast.
  • Patchwork Quilt.  Good entrepreneurs rarely try to go it alone.  Instead they are constantly looking for partnerships (both formal and informal) with self-selecting stakeholders.  Identifying and prioritizing these potential stakeholders seems a natural fit for intel.

These principles and the associated intel questions that go with them don't ask the intel professional to buy into the underlying goals of the entrepreneur or evaluate the progress towards those goals.  Instead, they set the stage for intel success by asking questions that support the entrepreneur's decision-making process uncomplicated by operational bias.

Friday, May 29, 2015

New Wikipedia Articles Of Interest To Intelligence Professionals

Despite its occasional weaknesses, I really like Wikipedia.  Others (perhaps unnecessarily) worry about an encyclopedia that is editable by anyone.  Whether you like it or not, however, it is undeniably the tertiary source of first resort for most of the planet.  

One of the things that has always bothered me about it, though, is the generally poor coverage of issues related to intelligence.  From intelligence history to intelligence theory, Wikipedia, in my opinion, needs help.

That is why, instead of traditional writing assignments in some of my classes, I like to task students to write Wikipedia articles about intelligence issues that have not already been covered.  

This kind of assignment has a variety of educational benefits.  In addition to adding to the world's body of knowledge, the students have to learn how to use MediaWiki (the same platform that powers Intellipedia and many other wikis in the private sector).

They also have to learn how to write an encyclopedia article complete with Wikipedia's famous "Neutral Point Of View" - a skill that is enormously useful in intel writing as well.  

Finally, they have to expose their work to the varied and critical audience that makes up the ad hoc Wikipedia editorial staff.  This is more important to the learning process than you might think.  Students typically master the skill of gaming their professors pretty quickly.  Writing for an army of discerning, anonymous editors?  Not so much.

So, without further ado, here are a handful of articles recently produced by students in my Collection Operations for Intelligence Analysts class.  The mix is eclectic because I let the students pick their own topics but is, perhaps, more interesting as a result.  

This handful only represents some of the output from last term.  Some of the articles are still in Wikipedia's increasingly lengthy review process.  I will publish those once they become available.

Friday, May 22, 2015

Three Simple Ways To Make Your Next Analytic Team Work Better

We do a lot of intelligence analysis projects at Mercyhurst using teams.  We do this primarily because this is the way intelligence work generally gets done in the real world and we want to replicate those conditions in the classroom.

There are many good books on how to make teams work, of course.  My favorite, Hackman's Collaborative Intelligence, is required reading in several of my classes, for example.  Across the hundreds of analytic teams I have managed since coming to Mercyhurst, however, three rules have proven simple to execute and always just seem to work.

All other things being equal, have one or more women on your team 

People always talk about diversity in teams being a good thing generally and, frankly, I agree with the sentiment.  If you aren't persuaded by the morality of this argument, though, there is another reason that ought to get your attention:  Having one or more women on a team improves team performance (You can see the hard evidence here and an easier to read version here.  The chart below comes from the latter link).

Anecdotally, I have seen this work many, many times.  Of course, these are averages and all other things are rarely equal, but if you have the choice between putting a guy or an equally qualified woman on an all-guy team, I would pick the woman every time.
 

You can see the whole article at https://hbr.org/2011/06/defend-your-research-what-makes-a-team-smarter-more-women 


Don't brainstorm!  Use Nominal Group Technique instead  

Traditional brainstorming - you know, where someone gets up and writes ideas on a chalkboard as people shout them out - doesn't work.  Lots of studies have shown this (see the screenshot below) but you likely don't need to read them.  You have probably been in too many of these sessions yourself and understand how inefficient brainstorming is at generating new ideas while avoiding groupthink.  

A better technique is available, however:  Nominal Group Technique (NGT).  

This list of research showing brainstorming failures comes from another interesting alternative - Brainswarming.  For more, see:  https://hbr.org/2014/06/brainswarming-because-brainstorming-doesnt-work

The key to NGT is to pose the problem first and then have people write their ideas down independently of one another.  Only after everyone has written down their ideas should people compare notes.  While comparing notes, you are looking for two things.  The first is the set of ideas that everyone (or almost everyone) generated and that are essentially the same.  The fact that the same idea occurred independently multiple times probably means that it is important or at least worth investigating further.  The second is what I call a "positive surprise".  Positive surprises are those ideas that only one, maybe two, group members come up with but that everyone acknowledges are great ideas as soon as they are read out loud.
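
The note-comparing step is mechanical enough to sketch in code.  Here is a small illustration: given each member's independently written ideas, flag the consensus ideas (generated by much of the group) and the singletons (the candidates for positive surprises).  The threshold and the sample ideas are invented for illustration:

```python
# Sketch of the NGT note-comparing step. The 50% consensus threshold
# and the sample ideas are invented for illustration.

from collections import Counter

def compare_notes(idea_lists, consensus_threshold=0.5):
    """Split ideas into consensus ideas and singletons (positive-surprise candidates)."""
    counts = Counter(idea for ideas in idea_lists for idea in set(ideas))
    n = len(idea_lists)
    consensus = sorted(i for i, c in counts.items()
                       if c > 1 and c / n >= consensus_threshold)
    singletons = sorted(i for i, c in counts.items() if c == 1)
    return consensus, singletons

members_ideas = [
    ["cyber attack", "insider threat"],
    ["cyber attack", "supply chain disruption"],
    ["cyber attack", "insider threat"],
]
consensus, singletons = compare_notes(members_ideas)
# consensus: ['cyber attack', 'insider threat']
# singletons: ['supply chain disruption'] - worth reading aloud to the group
```

The singletons still need the human step, of course - only the group's reaction tells you whether a one-off idea is noise or a positive surprise.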

During your first team meeting require people to focus on relevant skills instead of their job titles when introducing themselves 

Imagine this.  You are at your first team meeting and people are going around the room introducing themselves.  One says, "I am Joe Shmo, the Balkans analyst at the CIA," and the next says, "I am Mary Shmedlap, a counterintel analyst at FBI."  Pretty common, right?  It is also pretty ineffective.  These kinds of introductions have a tendency to reinforce the divisions within a team.

Far better is to focus on the skills the individuals bring to the team.  This comes directly from Hackman but is one of the most powerful techniques available.  I have my teams write down any and all skills they have that they think might be relevant to the project.  Expertise in the targeted problem area is important but I also ask team members to write down ancillary skills that might be important such as proofreading or graphic design skills.  

I also ask team members to include skills which might not appear to be directly relevant to the task at hand right now such as calligraphy.  Intel analysis rarely comes with a roadmap and it is often unpredictable at the beginning of the project what skills will turn out to be relevant at the end of the project.  

Finally, I also get them to talk about their personal preferences in terms of workflow - are they the kind of people who like to get everything done early or are they last minute kind of people?  Do they work better at night or are they early birds?  You would be surprised how much a conversation like this, early in the life cycle of a group, smooths things out over the long haul of a project.

That's it!  Three proven techniques for improving team performance backed by research.  Let me know how they work for you!

Wednesday, April 22, 2015

Intelligence And Cookies

(Note: This is entry number 2 in a three part series on some of the things I have learned about intelligence support to entrepreneurs from running a number of crowdfunding campaigns. For Part 1, click here.)

Ah!  Cookies!  Who can resist a good cookie?  Fresh out of the oven, homemade, imprinted with pictures of horses and bunnies and dinosaurs...

What?  

That is the good idea of Lisa Van Riper, the creator of the Tiny Hands On A Roll Kickstarter (closing in a little more than 24 hours). Little kids like to "help" when it comes to baking but kitchen implements are often too large, too unsafe or too uninteresting for little kids to use.  How can you keep them engaged without them getting frustrated?

Lisa hand makes laser-engraved, bakery-quality rolling pins that are exactly the right size for small children.  They work just like a good rolling pin ought to work but are sized for tiny hands and complete with customizable laser-engraved images that make the rolling fun.

Check out her project page (just click on the image above).  Her images are beautiful, her products demonstrate an over-abundance of quality and care in manufacturing.  Something like this ought to just kill it on Kickstarter, right?

Yep.  Except for one small detail (and my second lesson learned): timing.

Every crowdfunding project creator worries about timing.  What is the best day to launch? What is the best time of day to launch?  How long should the campaign be?  When is the best time of month to launch?  When is the best day to end?  What days should I avoid?  

These are all good questions but it is easy to be hyper-focused on these tactical issues and miss the strategic (or, at least, seasonal) trends.

Take a look at the chart below.  It is taken from Google Trends and shows the US search trend for the term "rolling pin" over the last ten years or so.  Talk about strong patterns!  Every peak is in December and every trough is in...ahem...April.  


Hindsight being 20/20, it is obvious why this is so.  Rolling pins are strongly associated with the scratch-baking frenzy that begins shortly before the end of October and only ends around the time people are waking up late and cursing the winter sunlight of January 2nd.  In terms of searches for the term "rolling pin," at least, that frenzy has been almost three times as strong in the fall as in the spring every year since 2005.
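If you want to quantify the gap for your own product, the arithmetic is simple once you have monthly index values (Google Trends will export them as a CSV).  Here is a minimal sketch in Python - the numbers below are illustrative stand-ins, not the real export:

```python
from statistics import mean

# Hypothetical monthly Google Trends index for "rolling pin" (Jan-Dec),
# standing in for the real CSV export from trends.google.com
monthly_index = [55, 40, 35, 30, 32, 33, 34, 38, 45, 60, 85, 100]

fall = mean(monthly_index[9:12])   # Oct-Dec baking season peak
spring = mean(monthly_index[2:5])  # Mar-May trough
print(f"Fall interest is {fall / spring:.1f}x the spring level")
```

Swap in the real exported values and the ratio falls right out; anything much above 2x is a seasonal pattern worth factoring into your launch date (or, as in our case, your expectations).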

We figured this out before we launched, of course.  Lisa wants to expand her business and she wanted to get this product line out there now and not wait till the fall.  She has already explored other ways to sell the product after the Kickstarter campaign is over and she will almost certainly do well in the fall with these products (when not only baking season but also toy season kicks in).  Our solution was to adjust her expectations - and her goal - accordingly.  

Not every product has this strong of a trend associated with it.  That said, if you have to swim upstream, you at least want to know about it beforehand.

Next:  Intelligence And Vigilantes

Thursday, April 16, 2015

Intelligence And Coffee


It has been said (at least by me) that coffee is to intel as air is to life.  In fact, the Food and Drug Administration has reportedly recognized coffee, along with sugar and alcohol, as one of the three basic food groups of intelligence professionals everywhere (note I said "recognized," not "approved of"...)

So, it is no real surprise that I am beginning what I hope will be a three part series on the intelligence lessons I have learned running various crowdfunding campaigns with Roast Assured, a project that is not just about coffee but about the perfect cup of coffee.

Roast Assured is a client of our Quickstarter Project here at Mercyhurst.  Quickstarter allows us to match aspiring, energetic college students and their skills with entrepreneurs who need those skills to help get their crowdfunding projects off the ground.

I received a $10,000 grant from the good people at Ben Franklin Technology Partners last year to help local entrepreneurs run some campaigns (and recently received a much larger grant to run lots more campaigns over the next three years).  Since then, I have run five campaigns (three of which are live right now) and have spoken to nearly 30 other potential creators.

What have I learned?

Lesson #1:  Entrepreneurs need lots of intelligence support.  In fact, I would go so far as to say that the number one requirement of an entrepreneur is reliable intelligence about the environment in which they are operating.  Most entrepreneurs know their idea inside and out.  They know all about their current operational capabilities and limitations. Everything else is almost always enshrouded in varying degrees of fog.

To a certain extent this should be expected.  Clearly there are levels of expertise when it comes to entrepreneurship.  Most of the people who come to me are raw and untested.  Some come to me better informed than others, but I don't see many serial entrepreneurs or experienced business people.  The fundamental truth seems to hold, though: entrepreneurs love their ideas and know them quite well.  The rest ... well ... not so much.

Much of this intelligence needs to be tactical, real time support, however.  I call it "just-in-time" intelligence.  Intel support at this level is all about being able to fill in the gaps immediately and with just enough info to keep things moving.  To put it in terms most national security intelligence professionals will understand, with entrepreneurs, all of the alligators are at your ankles and all of the targets are 50 meter ones.

Roast Assured is a good example of this.  Jack Barton, an expert coffee roaster and the creator of Roast Assured, has a great idea.  He wants to work with people to help them get their perfect cup of coffee.  He knows how different roasts and different grinds and even different flavorings and spices work together (or against one another) to change the taste of a cup of coffee.

What he really likes to do, though, is to put that knowledge to work for people - to help them craft their perfect brew.  He also wants to take it a couple of steps further.  First, he wants his customers to be able to name their coffee.  It can have personal significance, it could be the regular coffee in a small town diner or even the official coffee of some internet start-up. He even wants to work with you and his artists to craft a logo for your brand of coffee!


The bottom line is that it is your coffee with your chosen name on it.  Once you and Jack figure out the perfect blend, your named coffee goes into his database and you can go online and order another pound of Spy Roast (or whatever) anytime you want.

Beyond this, it gets tricky.  Who wants to buy this?  Where can we find him or her? How should we price this?  What's our value proposition?  Who will finance us?  Where can we get this made?  Who are our competitors? And on and on and on!

Virtually all the important questions entrepreneurs have are, at their core, questions about things critical to the success or failure of the project that are largely or completely outside the entrepreneur's control - in short, intelligence questions.  

One problem, of course, is that these raw, untested entrepreneurs don't typically have the money to pay for this kind of intel support.  This problem is unlikely to go away.  A second problem is that most of the entrepreneurial literature and many of the entrepreneurship training programs don't expose creators to the kind of intel tools and skills that could be so helpful in getting their projects off the ground. 

Next:  Intelligence And Cookies

Friday, March 27, 2015

What You Should Be Reading! (Blog List)

A few weeks ago, I asked ... well ... everyone: "What are you reading?"  I had noticed, with some dismay, that my own list of intel-related blogs and sources was a bit outdated and contained a number of now dead links.

Fortunately, my colleagues on LinkedIn, friends and acquaintances from a number of intelligence related email lists and the loyal (long-suffering?) readers of SAM were able to fill the void.  Without further ado, below is the list of all the blogs and other sources we managed to accumulate:



I asked my research assistant, McKenzie Rowland, to organize all the notes and emails and comments into a single user-friendly spreadsheet. We sorted the sites by how often they were mentioned by different people. There were lots of ties, though, so don't take the order too literally. 
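For anyone who wants to replicate the tally, a frequency counter does the sorting in a couple of lines.  A minimal sketch in Python - the blog names and counts below are made up for illustration, not our actual results:

```python
from collections import Counter

# Illustrative mentions gathered from comments and emails (not the real data)
mentions = ["Sources and Methods", "Lawfare", "Krebs on Security",
            "Lawfare", "War on the Rocks", "Krebs on Security", "Lawfare"]

# Tally each source and sort by how often it was mentioned
ranked = Counter(mentions).most_common()
for blog, count in ranked:
    print(f"{count}x  {blog}")
```

Note that `most_common()` breaks ties by insertion order, so - just like our spreadsheet - the ordering among equally mentioned sources shouldn't be taken too literally.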

Since I sent the announcement to all three of the major intelligence communities (national security, business, and law enforcement), McKenzie also included a brief description of what we thought was the primary audience of each blog.  We were lucky enough to get a number of non-US sources as well. 

Finally, I do not consider this list exhaustive. If YOUR favorite blog/source on intel didn't make the list, please leave a comment or drop me a note at kwheaton at mercyhurst dot edu!

Tuesday, March 10, 2015

Tired Of Doing Analysis The Same Old Way? Need To Learn Something New? Then You Need To Attend THIS Symposium!

One of the top complaints I hear from analysts is that they do not get enough exposure to new analytic methodologies.  While the pace of technology and information collection has done nothing but accelerate, analysts oftentimes seem to be stuck in a time warp - using the same old methods in much the same old ways.

The Mercyhurst Chapter of the Society of Strategic and Competitive Intelligence Professionals (SCIP) is doing something about that this Spring!

They have put together a one-day symposium on April 20 that will walk attendees through a variety of new or rarely used methodologies that are perfect for business applications.  Covered methods include Social Network Analysis, Geospatial Preparation of the Environment, Suitability Models and Strategic Group Mapping.  

Beyond the methods covered, the local chapter here has done an outstanding job of bringing in three must-hear keynote speakers:  Michelle Settecase, the Leader of Competitive Intelligence for the Global Markets Division of Ernst and Young; Mike Finnegan, the Manager of Enterprise Risk Intelligence for Target Corporation; and Patrick Daly, the Manager of Competitive Intelligence for Parker Hannifin.

Reduced rate registration is only available until 20 March, so hurry!

Friday, February 27, 2015

Combatting the Mid-Campaign Slump

(I have been writing about what I call "Entrepreneurial Intelligence" (or ENTINT for those who like acronyms...) on and off for a couple of years now.  Part of what I am coming to realize is that everything I do in support of entrepreneurial crowdfunding efforts through Mercyhurst's Quickstarter Project is really just intel.  The "best practices" report below, put together by my Research Assistant, McKenzie Rowland, and focused on dealing with the dreaded mid-campaign slump, is a good example)

When running a crowdfunding campaign, it is common to notice a dip in activity in the middle of the campaign.  It is so common, in fact, that it has a name - "the mid-campaign slump."  

Fortunately, there are a number of tactics that you can apply to overcome this slump and keep your campaign running at a more even pace.  The table below is a ranking of what techniques the majority of crowdfunding advice-givers have found to be the most instrumental in campaign success (It's a big table so be sure to scroll right to see all the columns!).




Overall, the most common approaches are ones designed to make backers feel valued throughout the campaign.  By sending personal messages or emails and keeping them frequently updated with photos and posts, you’re showing them that you value their contribution and that their donation matters.  

Incentives also appear to be a common way to bring in more contributors and funds, and can boost donations when campaign activity is low.  This may encourage current backers to bring in others to the campaign, so it is likely important to tailor your incentives to the core value proposition of the project in order to bring in the most donors possible.  

These efforts can be time-consuming, however, and may result in greater costs to the campaign, so it is important to estimate, in advance, the cost-to-benefit ratio (in terms of both time and money) before pursuing any of them.

Wednesday, January 28, 2015

Collectors! What Three Things Do You Wish Policymakers/Commanders/Analysts Knew About Your Job? (RFI)

Back in the early 90's I was looking at the Balkans.  I had a bunch of info that made me think there was a small, unidentified weapons cache that needed to be confirmed.

I was very proud of myself.  I had narrowed the search area down to about 10 square kilometers.  At the time, I just so happened to be collocated with the imagery collectors, so I went down and asked them, "Hey, can you find this cache for me?"  I suspected we had the images and I thought it would be a relatively straightforward task.

I already know what all the IMINT collectors out there are thinking.

"What a dumbass!"

And you are right.  I was a dumbass.  But what happened next changed my attitude about intelligence collection activities forever.

The senior photographic interpreter took me over to a light table (yeah, it was that long ago...) and handed me a huge photo and what amounted to a jeweler's loupe.  "Knock yourself out," he said.

It took me only minutes to realize the enormity of the task that I had casually tried to pawn off on the IMINT guys.  Trying to find something so small in an area so large was an incredibly difficult and time-consuming affair.

Over my career as an analyst, I was lucky enough to have similar experiences with professionals in other collection disciplines.  Understanding the challenges and capabilities of collectors made me, I think, a better, more efficient analyst.

I am teaching a class this term where I am trying to get my student-analysts to come to many of the same realizations.  Called Collection Operations for Analysts, the goal of the class is to make them more aware of the challenges and capabilities of HUMINT/Primary Source, IMINT, SIGINT, MASINT and even OSINT collectors.

SO...I need your help!  I would really like to give my students the perspective of working collectors.  I am NOT looking for anything classified (of course) or overly technical.  I am looking for the top three things collectors in each of these disciplines really wish that analysts, primarily, but also policymakers, decisionmakers at other levels, commanders with limited intel background and maybe even the general public understood better about their collection discipline.

For example, if I were a SIGINT collector, I think I would want the people I support to have a better feel for just how much stuff there is out there.  The volumes of traffic are huge in this collection discipline and even the largest organizations' ability to collect, process, translate and interpret are incredibly small.  I think if more people had an appreciation for this fact of 21st century communications, some of the stupider things said about SIGINT ... well ... wouldn't get said.

But don't let me put words in your mouth!  This is your chance, collectors!  And I am not just interested in national security collection, either.  I would love to hear from law enforcement and business professionals and even from SAM's international audience!

You can drop a comment below or, if you are uncomfortable with that, drop me an email at kwheaton at mercyhurst dot edu.

Thanks!

Thursday, January 22, 2015

The Media In 2014...From Predictions Made In 2004!

One of my favorite short films back in 2004 was one called "Epic 2014".  It was a faux documentary that purported to report on the media scene in 2014.  It walks the viewer quickly through the history of the internet from Tim Berners-Lee up to 2004 (when the film was made) and then begins to "report"/speculate about what the next ten years will hold.

If you haven't ever watched it or haven't watched it in a while, take 8 minutes right now to take a look:



There is some silly stuff here (like Google-zon) and the video does not really hint at the rise of stuff like Facebook and Twitter (much less Instagram and Tinder...).

But the takeaway is an eerily prescient statement concerning the current state of the internet:

"At its best, edited for the savviest readers, [the internet] is a summary of the world - deeper, broader and more nuanced than anything available ever before.  But at its worst, and for too many, [the internet] is merely a collection of trivia, much of it untrue, all of it narrow, shallow and sensational.  But [the current state of the internet] is what we wanted.  It is what we chose."

I don't know of anything quite this well done (or this insightful) about the future of the internet over the next 10 years - leave a comment if you do!  I suspect, though, that much of what we will be looking back at will involve new technologies like the one demonstrated in the 2 minute video below from Microsoft:



In case you are curious, the hardware and software capable of doing all this is coming to you next year.

Wednesday, January 7, 2015

What's The Most Difficult Language For English Speakers To Learn?

Once you start looking for them, there are lots of places on the internet that talk about the relative difficulty of learning to speak different languages.  This infographic (courtesy of Voxy.com) sums up what many of these sites are saying:



While I like the infographic, it is impossible to ignore some of the more robust efforts to categorize languages by difficulty. For example, anyone who has been to the Defense Language Institute or the Foreign Service Institute knows that the US government has its own scale that it uses to categorize languages by difficulty.

Even more interesting is the fact that the relative difficulty of the language may or may not correlate with how much extra pay you will receive for speaking a particular language.  For example, compare how difficult it is to learn French with how much extra the Army will pay you each month if you speak French on Page 20 of Army Regulation 11-6...