Friday, May 4, 2012

Entry-Level Intel Analysts In Law Enforcement Jobs Report Now Available For Download!


Last fall, we began a quest to capture the hiring prospects over the next 12 months for entry-level intelligence analysts (the kind we produce) in each of the three major sub-disciplines of intel:  national security, law enforcement and business.

Today we are putting out the 2012 Entry-Level Hiring Projections for Law Enforcement Intelligence report (click on the link to download).

The report was compiled by the same analyst who produced the national security report to rave reviews, Whitney Bergendahl.

This report contains, in addition to Whitney's analysis, the collected wisdom of all the hiring managers and intelligence professionals who took our survey on job prospects.  I would particularly like to thank the International Association of Crime Analysts and the International Association of Law Enforcement Intelligence Analysts for circulating our survey among their members.

Of course, we welcome your feedback (send it directly to me or leave a comment below).  For those of you interested, we put out the national security report in December 2011 and you can still find the details here.  We look forward to publishing the last installment, on the business market for jobs, sometime next week.

Monday, April 16, 2012

Modern Spies, An Excellent BBC Documentary

One of our sharp-eyed alums just informed me of an excellent new BBC series called Modern Spies.  It appears to be focused primarily on the HUMINT side of the business, but it does include interviews with active officers from MI6, MI5, the FBI and the CIA.  The full series does not appear to be available through the main website to people outside the UK, but episode 1 (embedded below) is available through YouTube.


Sunday, April 1, 2012

Advanced Analytic Techniques Is Back! (ADVAT.blogspot.com)

It has been some time (almost two years) but I am back teaching one of my favorite electives - Advanced Analytic Techniques.

This course is unlike any of the other courses I teach.  Rather than focus on a specific body of knowledge, this course allows students to explore their own interests while learning to use, and more importantly, evaluate various analytic techniques used by intelligence professionals.

While each student is hyper-focused on a single technique and topic, each week we take a quick look at a technique that no one in the class is examining; something the class is interested in that we would otherwise not be able to get to.

The week starts with each student going out and finding relevant articles from peer-reviewed journals and elsewhere, which they then summarize and post to the Advanced Analytic Techniques blog.  Each student then reads the summaries and votes on whether they found each article "interesting".  They are also required to post a couple of comments in order to get the dialogue going and to give the original poster some feedback.

From these articles, we are trying to get a sense of the technique -- how to describe it, what its strengths and weaknesses are, how to actually use it in practice, etc.  We are also trying to begin to evaluate the technique.  We are not trying to evaluate the technique in general, though.  Rather, we are trying to evaluate it with respect to its utility in intelligence analysis.

Specifically, we are looking to see whether the technique actually improves forecasting accuracy, whether it is relatively simple (or, at least, if complex, whether that complexity pays off with remarkably better results), whether it can be used across intelligence disciplines (i.e. whether it is flexible), whether it works well with the kinds of unstructured data typical of intelligence analysis and, finally, whether the technique facilitates the communication of the results to a decisionmaker.
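As a toy illustration only (this is not part of the course materials), the five evaluation criteria above could be captured in a simple weighted scoring sketch.  All of the names, weights and scores below are hypothetical:

```python
# Toy sketch: scoring an analytic technique against the five evaluation
# criteria discussed above. All names, weights, and scores here are
# hypothetical illustrations, not real course data or results.

CRITERIA = [
    "forecasting_accuracy",   # does it improve forecasting accuracy?
    "simplicity",             # simple, or complex but worth the complexity?
    "flexibility",            # usable across intelligence disciplines?
    "unstructured_data",      # works with typical unstructured intel data?
    "communicability",        # helps communicate results to decisionmakers?
]

def score_technique(scores, weights=None):
    """Return the weighted average score (1-5 scale) across the criteria."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # default: equal weighting
    total_weight = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total_weight

# Hypothetical scores for a role-playing exercise:
role_playing = {
    "forecasting_accuracy": 4,
    "simplicity": 5,
    "flexibility": 4,
    "unstructured_data": 4,
    "communicability": 3,
}
print(round(score_technique(role_playing), 2))  # prints 4.0
```

The equal-weight default is itself a judgment call; a class (or analytic unit) that cares most about forecasting accuracy could simply pass a `weights` dictionary that emphasizes that criterion.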

Once we get into class, one of the teams conducts an exercise utilizing the method.  The exercise is designed primarily to give us a feel for how the technique works in practice.  Due to time constraints, we typically try to keep this exercise focused on the core elements of the technique.

Finally, we put together the posts that summarize what we have learned about the technique over the week.  Since I have a fairly large class (large, at least, for a graduate seminar...), I have two teams that work mostly independently on their posts.  Comparing these two views of the same topic, based on the same journal articles and the same exercise but often with dramatically different interpretations, is a learning experience in itself.

So far this term we have looked at:

  • Multi-criteria Intelligence Matrices
  • Decision Trees
  • Role-playing

I would certainly encourage anyone interested in intelligence analysis techniques to follow along and comment as you deem fit.  We don't claim to be experts in any of the techniques we briefly examine and review each week, and we would welcome your contributions.  Likewise, if you are new to these techniques yourself, this is a great place to start and learn more!

Friday, March 23, 2012

Part 13 - The Whole Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture 
Part 12 -- The New Intelligence Process:  The Second Picture 



In the end, whether you accept this new model of the intelligence process or not, it is clear that the hoary image of the intelligence cycle needs to be put to rest.  Whether you would do that with full honors or, as I advocate, with the use of explosives, is irrelevant.  The cycle, as should be clear by now, needs to go.

To summarize, the cycle fails on at least three counts:  We cannot define what it is and what it isn't, it does not match the way intelligence actually works in the 21st Century, and it does not help us explain our processes to the decisionmakers we support.  Efforts to fix these flaws have not worked and, furthermore, all of this is widely recognized by those who have studied the role and impact of the cycle.

In addition, the community of intelligence professionals (and I include academics who study intelligence in this group) will have to be the ones to lay the cycle to rest.  Not only does no one else care, but also the community of intelligence professionals has, as the WMD report noted, "an almost perfect record of resisting external recommendations." 

Yes, the interregnum will be difficult.  The decisionmakers we support, the professionals with whom we work and the students we teach will all ask -- and deserve -- good answers.  These answers will come slowly at first.  In fact, at the outset, we may only be able to "teach the controversy", as it were.

Hopefully, over time, though, the need for a new vision of the intelligence process will drive intellectual curiosity and, through the iterative process of creation and destruction, something more robust will emerge; an improved model that will stand the tests of the next 60 years.   While I have clearly already placed my bets in this regard, I will be happy if the community of intelligence professionals merely recognizes the need to move beyond its historical constraints, accepts this siren's call for what it is, plugs its ears and sails off in a new direction - any direction.

Because anything would be better than continuing to pretend that the world has not really changed since the 1940's.   Anything would be better than continuing to spend countless wasted hours explaining and attempting to justify something that should have been retired long ago.  Anything, in short, would be better than continuing to lie to ourselves.

Wednesday, March 21, 2012

Part 12 -- The New Intelligence Process: The Second Picture (Let's Kill The Intelligence Cycle)

Part 9 -- Departures From The Intelligence Cycle
Part 10 -- The New Intelligence Process 
Part 11 -- The New Intelligence Process:  The First Picture


(Note:  I started this series of posts many months ago with the intent of completing it in short order.  Life, as it so often does, got in the way...  If you are new to the series or you have forgotten what the excitement was all about, I recommend beginning at the beginning.  For the rest of you, thank you for your patience!)


At the highest level, intelligence clearly supports the decisionmaking process.  Understanding this is a first step to understanding what drives intelligence requirements and what defines good intelligence products.  This is the message of the first picture.

But what about the details?  Broad context is fine as far as it goes, but how should the modern intelligence professional think about the process of getting intelligence done?  The second picture is designed to answer these questions.

The Second Picture

The single most important thing to notice about this image is that it imagines intelligence as a parallel rather than a sequential process.  In this image, there are four broad themes, or sub-processes, moving across time from a nebulous start to a fuzzy finish, with each theme rising to a high point of emphasis at a different point in the process.  The image is also intended to convey that each theme constantly interacts with the other three, each influencing and being influenced by the others at every point in time.

Let me anticipate an initial objection to this picture -- that the intelligence process has a "start" and a "finish".  The intelligence function, to be sure, is an ongoing one and this was one of the implied lessons of the first picture.  Having made that point there, here I think it is important to focus on how intelligence products are actually generated.  In this respect, clearly, there is a point at which a question (an intelligence requirement) is asked.  It may be indistinct, poorly formed or otherwise unclear, but the focus of an intelligence effort does not exist in any meaningful way until there is a question that is, in some way, relevant to the decisionmaking process the intelligence unit supports.

Likewise, there is a finish.  It may take place in an elevator or in a formal brief, in a quick email or in a 50-page professionally printed and bound document, but answering those questions, i.e. the dissemination of the intelligence product, in whatever form, signifies the end of the process.  Yes, this process then begins immediately anew with new questions, and yes, there are always multiple questions being asked and answered simultaneously, but neither observation invalidates the general model.

What of the sub-processes, though?  What are they and how do they relate to each other?  The four include mental modeling, collection of relevant information, analysis of that information and production (i.e. how the intelligence will be communicated to the decisionmakers).
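This parallel picture can be caricatured in code: four overlapping emphasis curves, each peaking at a different point in the project timeline, with all four active at every moment.  The bell-curve shape and the peak positions below are illustrative assumptions of mine, not the model itself:

```python
# Toy caricature of the "second picture": four sub-processes running in
# parallel, each with an emphasis level that peaks at a different point in
# the project. The Gaussian shape and peak positions are assumptions made
# purely for illustration.
import math

# Hypothetical peak positions on a 0.0 (start) to 1.0 (finish) timeline.
PEAKS = {
    "modeling": 0.15,
    "collection": 0.40,
    "analysis": 0.65,
    "production": 0.90,
}

def emphasis(sub_process, t, width=0.3):
    """Relative emphasis of a sub-process at time t (a Gaussian bump)."""
    peak = PEAKS[sub_process]
    return math.exp(-((t - peak) ** 2) / (2 * width ** 2))

# Early in the project, modeling dominates; near the end, production does,
# yet every sub-process retains nonzero emphasis at every point in time.
print(max(PEAKS, key=lambda p: emphasis(p, 0.1)))  # prints modeling
print(max(PEAKS, key=lambda p: emphasis(p, 0.9)))  # prints production
```

The key property the sketch preserves is that no curve ever drops to zero: modeling, collection, analysis and production are all always underway, only their relative emphasis shifts.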

Mental Modeling


Until intelligence becomes a process where machines speak only to other machines, the mental models carried around by intelligence professionals and the decisionmakers they support will be an inseparable part of the intelligence process.  While most intelligence professionals readily acknowledge the strengths and weaknesses of human cognition, one of the most important qualities of this model, in my mind, is that it embeds these strengths and weaknesses directly into the process and acknowledges the influence of the human condition on intelligence.

These mental models typically contain at least two kinds of information:  information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, a relatively high level of general knowledge about the world has been demonstrated to significantly improve forecasting accuracy across domains of knowledge, even highly specialized ones.  (Counter-intuitively, there is good evidence to suggest that a high degree of specialized knowledge, even within the domain under investigation, does not add significantly to forecasting accuracy.)

The flip side of this coin is psychological bias, which has a way of leading analysts astray without them even being aware of it.  An extensive overview of these topics is beyond the scope of this post but it is safe to say that, whether implicit or explicit, these models, containing what we know, what we think we need to know and how our minds will process all this information, emerge as the intelligence professional thinks about how best to answer the question.   

Typically, at the outset of the intelligence process it is this modeling function that receives the most emphasis.  Figuring out how to think about the problem, understanding what kind of information needs to be collected and identifying key assumptions in both the questions and the model are all necessary to some degree before the other functions can begin in earnest.  This is especially true with a new or particularly complex requirement.  Furthermore, this modeling function is often informal or even implicit.  It is rare, in current practice, to see the mental model on which collection is planned and analysis conducted made explicit.  This is unfortunate, since making the model explicit has proven, if done properly, to accelerate the other sub-processes, limit confusion within a team and produce more accurate forecasts.

Modeling should go on throughout the entire intelligence process, however.  As new information comes in or analysis gets produced, the model may well grow, shrink or morph as the concepts and the relationships between those concepts become more clear.  At some point (typically early) in the intelligence process, however, the emphasis shifts away from modeling and towards collecting, analyzing and producing.  Mental modeling does not become unimportant; it simply recedes as less time is devoted to it and more to the other three functions.

Collection
 
Typically, the next sub-process to take precedence is collection.  Again, as with modeling, collection begins almost as soon as a rudimentary requirement forms in the mind of the intelligence professional.  People naturally begin to draw on their own memories and, if the question is complicated enough, begin to look for additional information to answer it.  With more complex questions, where the information needs are clearly higher, the intelligence professional may even come up with a collection plan and task others to collect information to help address the requirement.

Collection, like modeling, never stops.  Intelligence professionals will continue to collect information relevant to the particular requirement right up to the day the final product is published.  In fact, collection on a particularly difficult problem (i.e. almost all of them) will often continue after publication.  Decisionmakers and analysts alike want to know whether the key assumptions were correct and how accurate the final product was, and both understand the need to continue to track particularly important requirements over time.

All that said, collection does tend to lose importance relative to the other functions over time.  Economists call this diminishing returns:  considered across the entire spectrum of activity, from no knowledge about a subject to the current level of knowledge about it, collection efforts typically add less and less genuinely new information over time.  Again, this is not to say that collection becomes unimportant; it is simply a reflection of the fact that the other processes tend to increase in importance relative to collection at some point in the process.

Analysis

The next sub-process to take precedence is analysis.  As with both modeling and collection, analysis begins almost immediately.  Tentative answers leap to mind and, in simple cases or where time is a severe constraint, these initial responses may have to do.  Analysis doesn’t really move to the forefront, however, until the requirement is understood and enough collection has taken place for the analyst to sense that adequate information exists to begin to go beyond tentative analyses and take a crack at answering the overall question or questions.

Analysis is where the raw material of intelligence, information, gets turned into products that address the decisionmaker’s requirements.  It is also the task most fraught with difficulties.  From the type of information used (typically unstructured) to the methods used to analyze this information to the form of the final product, analysts face enormous practical and psychological difficulties.  While the goal is clear – reduce the decisionmaker’s level of uncertainty – the best ways to get there are often unclear or rely on untested or poorly tested methods. 

Production

The final sub-process is production (which, for our purposes here, also includes dissemination).  As with all the other functions, it, too, begins on day one.  It is clearly, however, the least important function at the outset of the intelligence process.  Still, intelligence professionals do give some thought (and experienced professionals have learned to give more than a little thought) up front to the form and nature of the final product.

Requirements typically come with an implied or explicit “deliverable” associated with them.  Is the answer to the intelligence requirement, for example, to be in the form of a briefing or a written report?  Knowing this at the outset helps the intelligence professionals tasked with answering the requirement to plan and to identify items along the way that will make the production of the final product easier.  For example, knowing that the final product is to be a briefing gives the intelligence professionals associated with the project time to identify relevant graphics during the project rather than going back and finding such graphics at the last minute.  Likewise, if the final product is to be a written document, the time necessary to write and edit it might be substantial and this, in turn, would need to be factored into the planning process.

Production is an incredibly important but often under-appreciated function within the intelligence process.  If intelligence products are not accessible, i.e. packaged with the decisionmaker in mind, then they are unlikely to be read or used.  Under such circumstances, all of the hard work done by intelligence professionals up to this point is wasted.  On the other hand, there is a fine line between making a document or other type of intelligence report accessible and selling a particular position or way of thinking about a problem.  Intelligence professionals have to steer clear of those production methods and “tricks” that can come across as advertising or advocacy.  Production values should not compromise the goal of objectivity.

Likewise, some intelligence professionals associate high production values with pandering to the decisionmaker.  These professionals see the addition of multimedia, graphics, color and other design features to an intelligence product as unnecessary “chrome” or “bling”.  Many of these professionals, often from earlier generations, think that intelligence products “should stand on their own” and that the ease with which such “tricks” can be applied in modern production is not an excuse to deviate from time-honored traditions in production.

The guiding principle here, of course, is not what the intelligence professional thinks but what the decisionmaker the intelligence professional is supporting thinks.  Some decisionmakers will, of course, prefer their intelligence products in a simple text-based format.  Others, including many business professionals, will want less text and more supporting data, including charts and graphs.  Some (and the demand for this may well increase in the future) will want their reports in a video format for use on their personal multimedia device. 

Intelligence professionals in general, then, will need to have a wider variety of production skills in the future and, while production concerns do not take precedence until closer to the end of the project, the need to think about them at some level permeates the entire project.

Next:  The Whole Picture