Take a fingerprint... for that matter, go ahead and take a palm print. Now, take a voiceprint. In this day and age, forensic biometric analysis is extraordinarily complex. In a world where we analyze everything from irises to earlobes, what can science tell us about voice? One increasingly popular form of analysis is forensic speaker recognition (also known as voice biometrics or biometric acoustics). Forensic speaker recognition (FSR) has unequivocal potential as a supplementary analytic methodology, with applications in both law enforcement and counterterrorism (for details, see the last section of the 2012 book Forensic Speaker Recognition: Law Enforcement and Counter-Terrorism). The FSR process is used either for identification (1:N or N:1) or for verification (1:1).
1:N Identification -- Imagine you have a recording of a voice making threats over the phone. The speaker identification process allows you to compare that target voice against a database of acoustic recordings of known suspects to determine who the speaker is.
N:1 Identification -- Imagine you have a large set of voice recordings and you want to know in which of them, if any, a particular speaker participates.
1:1 Verification -- Imagine you wish to grant someone access to a building or secure location by assessing whether they are who they say they are (this aspect of speaker recognition is less about analysis and more about security).
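For the programmatically inclined, here is a minimal sketch of how all three modes reduce to the same core comparison. It assumes each recording has already been converted into a fixed-length "voice embedding" vector (real systems derive these from statistical models trained on large speech corpora); the cosine-similarity metric and the 0.75 threshold are illustrative assumptions, not a standard.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two voice embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_1_to_n(probe: np.ndarray, gallery: dict, threshold: float = 0.75):
    """1:N identification: find the best-matching known speaker, if any."""
    best_id, best_score = None, threshold
    for speaker_id, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_id, best_score = speaker_id, score
    return best_id  # None means no speaker in the database matched

def verify_1_to_1(probe: np.ndarray, claimed_ref: np.ndarray,
                  threshold: float = 0.75) -> bool:
    """1:1 verification: is this voice the speaker it claims to be?"""
    return cosine_similarity(probe, claimed_ref) >= threshold
```

N:1 identification is the same loop run the other way around: one known speaker's reference compared against each of the N unknown recordings.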
That said, the CIA, the NSA and Switzerland's IDIAP research institute all turned to automatic speaker verification systems in 2003 to analyze the so-called Osama tapes (for details of the approach, see Graphing the Voice of Terror). This case provides an excellent opportunity to note the distinction between automatic speaker recognition, performed by an algorithm, and aural speaker recognition, performed by human acoustic experts.
The cornerstone methodology supporting forensic speaker recognition is voiceprint analysis, or spectrographic analysis, a process that visually displays the acoustic signal of a voice as a function of time (seconds or milliseconds) and frequency (hertz) such that all of its components are visible (formants, harmonics, fundamental frequency, etc.).
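If you want to see roughly what the analyst sees, generating a basic spectrogram takes only a few lines of Python. This is a minimal sketch, not a forensic tool; "voice_sample.wav" is a placeholder filename and the window length is an arbitrary illustrative choice.

```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

sample_rate, signal = wavfile.read("voice_sample.wav")  # placeholder path
if signal.ndim > 1:
    signal = signal.mean(axis=1)  # mix stereo down to mono

# Power at each (frequency, time) cell of the voice signal
freqs, times, power = spectrogram(signal, fs=sample_rate, nperseg=512)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12))  # dB scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram (formants show up as dark horizontal bands)")
plt.show()
```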
(Note: For those who are more acoustically inclined and would enjoy a well-written read on all things acoustic from military strategy to frog communication, Seth Horowitz's new book The Universal Sense: How Hearing Shapes the Mind comes with my highest recommendation.)
Spectrographic analysis differs from human speaker recognition in that it provides a more quantifiable comparison between two speech signals. Under favorable conditions, both approaches yield strong results: 85 percent identification accuracy (McGehee 1937), 96 percent accuracy (Espy-Wilson 2006), 98 percent accuracy (Clifford 1980), 100 percent accuracy (Bricker and Pruzansky 1966). These approaches, however, do not come without caveats.
Forensic speaker recognition has many limitations and is currently inadmissible in federal court as expert testimony. Bonastre et al. (2003) summarize these limitations quite well:
"The term voiceprint gives the false impression that voice has characteristics that are as unique and reliable as fingerprints... this is absolutely not the case."
The thing about voices is that they are susceptible to a myriad of external factors such as psychological/emotional state, age, health, weather... the list goes on. From an application standpoint, the most prominent of these factors is intentional vocal disguise. There are a number of things people can intentionally do to their voices to drastically reduce the ability of a machine or a human expert to identify them correctly (you would be amazed at how difficult it is - nearly impossible - to identify a whispered voice). Under these conditions, identification accuracy falls to 40-52 percent (Thompson 1987), 36 percent (Andruski 2007), 26 percent (Clifford 1980).
[Image: Osama bin Laden's "dirty" 2003 telephonic spectrogram (top) and "clean" spectrogram (bottom). Source: Owl Investigations]
More problematic still is communication by telephone. Much of the input law enforcement and national security analysts have to work with comes from telephone wiretaps or calls made from jail cells. Telephones, and cellphones in particular, band-limit the acoustic signal: anything below a certain frequency (roughly 300 Hz on a traditional voice channel) simply does not get transmitted, and some of the key characteristics for voice identification lie within that range.
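To get a feel for what that filtering does to a recording, here is a minimal sketch that approximates a telephone channel with a band-pass filter. The 300-3400 Hz band is the range commonly cited for traditional telephony; the filename and filter order are illustrative assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

sample_rate, signal = wavfile.read("voice_sample.wav")  # placeholder path

# Fourth-order Butterworth band-pass approximating a telephone channel;
# everything below ~300 Hz (including the fundamental frequency of most
# speakers) and above ~3400 Hz is discarded.
sos = butter(4, [300, 3400], btype="bandpass", fs=sample_rate, output="sos")
telephone_quality = sosfilt(sos, signal.astype(np.float64))
```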
While the forensic speaker recognition capability has come a long way since 2003, the consensus among the analytic community remains that it is not a stand-alone methodology but rather a promising supplementary tool. Biometric analysis was also a topic brought before the Intelligence Technology panel at this year's 2013 Global Intelligence Forum. Of note were the expanding applicability and increasing capabilities of all biometric technologies.
Thus far, the Spanish Guardia Civil is the only law enforcement agency worldwide to have a fully operational acoustic biometric system (called SAIVOX, the Automatic System for the Identification of Voices). In the Spanish booking process, just as we take fingerprints, they take voice samples, which they then contribute to a corpus of over 3,500 samples linked to known criminals and certain types of crime.
In 2011, the FBI commissioned NIST to launch a program on "investigatory voice biometrics." The goal of the committee is "to develop best practices and collection standards to launch an operational voice biometric system with robust enough corpora so as to serve as a useful tool in ongoing investigations," modeled off the Spanish system. (This is an ongoing project and you can read the full report here.)
FSR is not a perfect methodology, but one that can add substantial value on a case-by-case basis. It is of high interest to the US national security and law enforcement analytic communities.
Additional reading:
Andruski, J., Brugnone, N., & Meyers, A. (2007). Identifying disguised voices through speakers' vocal pitches and formants. 153rd ASA Meeting.
Bonastre, J. F., Bimbot, F., Boe, L. J., Campbell, J. P., Reynolds, D. A., & Magrin-Chagnolleau, I. (2003). Person authentication by voice: A need for caution. Eurospeech 2003.
Sequester, draw-down, RIF, early retirement - these are the buzzwords that are dominating the hiring discussions in the US government these days. While last year was marginal, the next 12 months are shaping up to be very tough ones for entry-level intelligence analysts trying to break into the US national security intelligence community.
Greg Marchwinski, one of our current crop of graduate student all-stars, prepared this year's report which, like last year's report, is based on collected survey data, interviews and emails from knowledgeable individuals, and relevant secondary sources.
Here is what Greg has to say about the prospects for the next 12 months (from the executive summary):
"Due to uncertainty over federal government deficit reduction initiatives and a decreasing military presence globally, it is highly likely that overall hiring of entry-level intelligence analysts within the US Intelligence Community (IC) will decrease significantly from recent levels until the next budget cycle begins in October, 2013. The only exception to this general trend is cyber-related positions which are likely to see a moderate increase despite budget cuts. Additionally, it is highly likely that sequestration throughout the IC will significantly limit hiring entry-level intelligence analysts in all analytic functions until defense funding negotiations are resolved."
Six more days... If all goes well, in six days, I intend to launch my first game, Widget, on Kickstarter (if you are interested in the game itself, you can get more info by clicking on this link). Once launched, I will have 30 days to get "funded" by various "backers". If I fail to reach my pre-designated goal, I get nothing at all.
That's how crowdfunding works. Well, at least, that is how Kickstarter works. Kickstarter is the oldest and most popular crowdfunding platform currently available. If you are unfamiliar with these platforms, you probably shouldn't be. This is not just for entrepreneurs or people interested in entrepreneurial intelligence, either. There are implications here for intelligence professionals at all levels of business ... and law enforcement and national security, too.
Let me explain. Kickstarter is by no means the only crowdfunding site these days. IndieGoGo and RocketHub are two popular alternatives, but there are a growing number of these sites. The pace of this growth is likely to increase in 2013 as new laws are set to come into effect that will allow contributors to take small equity interests in start-ups (the current crowdfunding model centers on what typically amounts to pre-sales of a product or service). Forbes expects revenues generated by crowdfunding sites to double, from 3 to 6 billion USD, in 2013. Increasingly, this is the way everything from music to games to books to electronics will be funded at start-up.
Savvy intelligence professionals in the business world should be watching these sites for potential competitors that might emerge from successfully funded projects. Likewise, given the underfunded nature of most start-ups, it makes equally good sense to see successful crowdfunded projects as a no-cost extension of your own R & D programs - buying out small companies with proven products may well be less expensive than developing them in-house. Finally, understanding crowdfunding is going to be an increasingly essential element of providing entrepreneurs any meaningful intelligence support.

Law enforcement intelligence professionals should also be taking note. While there is little evidence of criminal activity in crowdfunding to date, where the money goes, crime is virtually certain to follow. Fraud of all kinds, money laundering, and illegal or unsafe products are all activities that the reputable sites work very hard to prevent but, with the expected growth, criminals will inevitably carve out new niches in the market.

While national security intelligence professionals might not be interested in some artist's new comic book, many of these sites either specialize in or actively promote innovative hi-tech product development and design. In fact, some academics are even turning to crowdfunding sites to fund their research. Everything from geo-mapping to verification of information on the internet to pens that can draw in 3D has been funded this way. Crowdfunding is both an innovative and a disruptive practice that is set to become much more important in the coming years. In my opinion, it is worth staying ahead of this particular curve.
That's right, the Institute of Intelligence Studies is sponsoring a professional development conference for its alumni, students and faculty this summer from 8-11 July at Mercyhurst University.
This is not an alumni "weekend", "retreat" or an "escape". It is not "homecoming". This is a premier opportunity to hone your skills as an intelligence professional at the same place where you all got your start -- Mercyhurst.
We are bringing in top-notch speakers (many of whom are former classmates!) and are going to offer an extraordinary opportunity to network both inside and outside your chosen field (Quick: How many Mercyhurst intel alums currently work in the banking industry? In energy? In almost any field you can name? You will be surprised at the number!) and across the generations (We have been doing this for 20 years - I know, I find it hard to believe myself!).
If you need more enticement, the faculty will all be here to bring you up to speed on our current research activities (AKA "What we have learned about intel since you've been gone...") and where we are going in the future. With over 1,000 alumni, hundreds of research projects, dozens of full- and part-time faculty, and the opportunities represented by the new building, these are results you will want to know about.
This isn't just about us, though. We want to learn from you, too. Our program is 20 years old this year and it is time to plot our course for the next 20 years. What are the problems you are having? What research do you need done to improve your ability to provide actionable intelligence to the decisionmakers you support? What skills and abilities do the intelligence analysts of tomorrow need today? Getting your input at this conference is crucial.
If you are a current student or alum you can register here to attend. I personally look forward to seeing each and every one of you!
A couple of weeks ago there was a discussion on the always interesting US Army INTELST regarding schemes for grading sources. I pushed my own thoughts on this out to the list and published a link list on SAM that contained much the same information.
One of the topics that came up as a result of that discussion was "Whatever happened to the old A-F, 1-6 method for evaluating the accuracy and reliability of a source?" Under this system, the reliability of the source of a piece of info was graded A-E, with "A" being completely reliable and "E" being unreliable. "F" was reserved for sources whose reliability could not be determined.
Likewise, the info was graded for accuracy on a scale of 1-5 where "1" indicated that the info was confirmed and "5" indicated that it was improbable. "6" was reserved for info the truth of which could not be judged.
Under this system, every piece of collected info had a unique identifier (B-3, C-2, A-1 -- now you know where that expression came from!) that supposedly captured both the reliability of the source and the accuracy of the info.
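As a quick illustration, here is the scheme as a simple lookup table. Note that the post above only spells out the endpoints of each scale; the intermediate labels below are the ones commonly cited for this system, so treat them as an assumption rather than a quotation.

```python
# The two-dimensional source/information rating as a lookup table
RELIABILITY = {
    "A": "Completely reliable", "B": "Usually reliable",
    "C": "Fairly reliable",     "D": "Not usually reliable",
    "E": "Unreliable",          "F": "Reliability cannot be judged",
}
ACCURACY = {
    "1": "Confirmed", "2": "Probably true", "3": "Possibly true",
    "4": "Doubtful",  "5": "Improbable",    "6": "Truth cannot be judged",
}

def describe(rating: str) -> str:
    """Expand a combined rating like 'B-3' into plain language."""
    source, info = rating.split("-")
    return f"{RELIABILITY[source]} source; {ACCURACY[info]} information"

print(describe("B-3"))  # Usually reliable source; Possibly true information
```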
Except that it didn't work.
In 1975, Michael G. Samet conducted a series of experiments using the system for the US Army Research Institute for the Behavioral and Social Sciences, titled Subjective Interpretation of Reliability and Accuracy Scales for Evaluating Military Intelligence. I ran across it while doing some background research for the link list. Unfortunately, the good people at NTIS had not had the time to scan this report and upload it yet. Even more maddening was the fact that the abstract (the only thing available) included details about the study but not the !@#$ results.
So, I had to send away to NTIS for a hard copy. I have uploaded it to Scribd.com to make this important piece of research more generally available.
The study asked about 60 US Army captains familiar with the scoring system to evaluate 100 comparative statements. The results were pretty damning:
"Findings of the present study indicate that the two-dimensional evaluation should be replaced because: 1. The accuracy rating dominates the interpretation of a joint accuracy and reliability rating and 2. There is frequently an undeniable correlation between the two scales."
You can read the full study below or download it from here.
All of this raises another issue, though. It seems that every 20 years or so the US national security intel community takes a crack at validating its methods and processes. Sherman Kent talks about one such effort in the '50s and then, again, in the '70s and early '80s there seems to have been another attempt (the report referenced here is an example). We seem to be entering another such era, given some of the language coming out of IARPA.
For some reason, however, just when things get good, the effort peters out. When these efforts peter out in the intel community, the results become almost impossible to find. Not having this research on hand and, frankly, online means that the government will inevitably pay for the same research twice (the questions don't go away just because we forget what the answers are...) and researchers will be forced to start from scratch even though they shouldn't have to.
I won't repeat my rant from a few days ago, but finding and keeping track of this kind of stuff seems to be a perfect task for academe and the kind of thing the DNI ought to fund (Hint, hint...).
(Note: At the risk of making this an all-Jeff-Welgan blog, I thought this week I would cover Jeff's thesis work on the effects of labels on analysis right on the heels of last week's discussion of his work embedded in the new book, Hyperformance).
Does a name matter? Shakespeare says no -- "a rose by any other name would smell as sweet" -- but most psychologists would disagree. The well-known "framing effect" shows that the way a question is asked can determine how people will answer it. Likewise, psychological campaigns aimed at dehumanizing an enemy often accompany wars.
Jeff Welgan, in his thesis, The Effects of Labels on Analysis, tests these ideas in the realm of intelligence analysis. Some of you may remember taking Jeff's survey last year. In it, he presented a fictitious scenario set in the Horn of Africa. Each participant was asked to read an identical report of an activity. The only thing that changed was the word used to describe the group conducting the activity. Specifically, Jeff tested the words "group", "insurgent", "rebel", "militia" and "terrorist". He hypothesized that the specific word used would affect the analytic conclusions that participants would draw.
Jeff did not aim his study at a random sample of the general population, however. He took pains to engage analysts in the national security realm, in law enforcement or in business. The results in the image to the right are self-reported (the inevitable cost of a web-based survey...) but he was fairly careful in his approach to getting participants. In all, some 233 of you participated in the experiment (Many thanks!).
Hypotheses aside, it was unclear what he would actually find. These psychological biases are deep-seated and robust but, on the other hand, there is good research to suggest that credible evidence helps overcome framing issues, and intel analysts are typically trained to be on the lookout for sources of bias. As Jeff stated, "My thesis will examine to what extent the quality of analysis is at risk, if it is indeed at risk, as the differing connotations of these labels would suggest."
In the end, the labels wound up making little difference for trained intel analysts. As Jeff bluntly stated, "My hypothesis that these particular labels have significant meaning, and many individuals have a preconceived idea, or cognitive biases, regarding the kinds of actions each of these particular groups conduct must be rejected at this time due to an overall lack in statistical significance across the labels."
This is clearly good news for the intel community at large. It certainly suggests that at least some of the training to defeat at least some of the cognitive biases is working.
The full text of the thesis is below or can be downloaded from Scribd.com.
One of the questions that has really bothered me over the years concerns the size of the national security intelligence "industry" worldwide. When you add it all up, how much money do the states of the world spend on intelligence and how many people are involved in government intelligence work?
These questions are important. There is a popular impression in much of the world that "intelligence is everywhere", that it is both all powerful and omnipresent. Creating or encouraging this impression in dictatorial countries might even be part of the system of repression. Knowing the answers to these questions could help reformers more accurately assess their risks.
Even in democratic countries, however, it seems to make sense to understand the resource limits of the national security intelligence apparatus at the broadest possible level, one at which the need for citizens to know where their money is being spent can be appropriately balanced against the legitimate operational concerns of the working intelligence professional.
From a more provincial standpoint, it also seems important for educational institutions to have some sort of feel for the demand for trained intelligence professionals if the university model is ever going to supplant government training as the primary way into the intelligence communities of the world.
The answers to these questions, however, are obviously difficult to uncover. Most countries do not want to discuss how much they spend on intel each year. Oftentimes, it is even difficult to figure out which organizations within a country are actively engaged in intelligence work.
It is with great pleasure, then, that I announce the final results of Chris's thesis research:
The national security intelligence industry accounts for about $106 billion a year and employs about a million people worldwide.
While Chris has done a good (extraordinary, really) job of collecting as many facts and figures as he could regarding the intel budgets of every country on the planet, he had to rely on estimates for many of them.
These estimates are based on GDP and on the spending patterns of countries where the data is available, a method which Chris readily admits is fraught with some difficulty (I note with some interest, though, that Chris has posted a note to his online thesis encouraging people to send him more accurate figures. It will be interesting to see how many people take him up on the offer...).
I am also sure that Chris has missed some organizations. It is virtually certain that there are organizations out there, well known to people living in a particular country to be wholly controlled by that country's intelligence apparatus, for which Chris has not accounted. Such errors are essentially unavoidable given the global scope of his thesis work.
Likewise, Chris simply did not have time to examine either the growing presence of intelligence units in law enforcement or business (My own guess is that this would approximately double the total value of the industry).
All that said, this thesis does exactly what needed to be done -- give us all a starting point for further research and refinements.
A full copy of the thesis is located below or you can go to Chris's site on Scribd.com for other viewing and download options.
Two of our students (Thanks, Mike and John!) recently pointed me towards some interesting resources if you are looking for a job in the US national security intelligence community.
The first is a new resume database sponsored by the Director of National Intelligence. The service allows job-seekers to submit their resumes once and have them available to the entire intelligence community for up to a year after submission.
I don't see this as a replacement for more traditional ways of applying for jobs in the US IC but it is a welcome addition. I think it will help the IC most with crisis or hard-to-fill positions (like if something major goes down in Burkina Faso and you are one of the few speakers of Dioula in the US...).
The other resource was actually put together by John. It is a list of links to all of the contractors he could find who, according to John, currently "have openings for either intelligence analysts, cyber analysts, or both." The links go directly to the career/jobs pages of the 40+ companies he identified, so there is no needless hunting and pecking through some corporate website for the right place to search for positions and apply.
Imagine a brilliant piece of intelligence analysis -- well-researched, well-written and actionable. Now imagine that same report written in an 8 point Gothic font over multiple pages with half inch margins. No title, no paragraphs, no sub-sections, no indentations; just a single block of text. Would you read it? Would anyone else?
Point 1: Form matters. How we say something is often as important as, if not more important than, what we say.
__________________
Now, take a look at this video:
It is a fake. It was originally created with some off-the-shelf software by a CGI artist and then modified by someone to look like a NASA video. Here is the original:
The most distressing thing about the two videos, however, is not the fakery. It is the number of views. Again, you have to go to the YouTube sites to confirm this but the original has only 23,000 or so views while the fake has over 150,000 views.
Furthermore, cleverly modified videos are not the only way to twist, spin, modify and deceive. Check out FactCheck.org's Whoppers of 2009 for other ways that people have cleverly manipulated the form of the message to lie to us.
Which leads to Point 2: It is getting easier and easier to lie with form.
___________________
Richards Heuer pointed out in his classic, The Psychology of Intelligence Analysis, that "once information rings a bell, the bell cannot be unrung." He was capturing a phenomenon that is well known to psychologists: People continue to act as if a piece of information were true even after it has been proven false.
Over and over again, experimental subjects have been led to falsely believe that they have a capacity they do not actually have -- the ability to judge the relationship between risk-taking and success as a firefighter, for example. Even after being shown conclusive proof that the experiment was manipulated to give them that impression, the subjects continue to act as if the original information were correct.
All that is bad enough but when you combine this psychological effect with the power of visualization, you get an absolutely scary combination. Check this video out:
Which leads to Point 3: Lies persist and visual lies likely persist more strongly than textual lies.
______________________
So what does this all have to do with communicating the results of intelligence analysis?
The US national security intelligence community has been accused of trying to sell its intelligence. The 2005 WMD Commission report accused the intelligence community of this with regards to the President's Daily Brief (PDB): "The daily reports seemed to be ‘selling’ intelligence—in order to keep its customers, or at least the First Customer, interested."
Which leads to Point 4: Good intelligence doesn't "sell" its products.
__________________________
When I took my first job as an analyst (back in the '80s...), I didn't make my own slides. PowerPoint was deemed to be too complicated and tricky. It required a specialist, trained in its vagaries, to generate the slides necessary to brief the decisionmakers who pulled my strings.
That did not last long. Very quickly it went from rare to common to expected that analysts would be able to generate their own slides. What's more, today analysts are increasingly being asked to create visuals to supplement or replace the results of what was previously text-based analysis.
Yet, analysts get very little training in appropriate ways to visualize information and virtually no training in how not to lie or mislead with colors and graphics, how to spot photoshopped pictures or fake video, or how to ensure that the form is as objective as the content.
Which leads to My Question: How do we know when we are lying (or misleading) with the form of our intelligence products?
________________________
It seems to me that we spend a good bit of time analyzing text for evidence of bias or puffery or misleading statements. In virtually every intelligence organization of any size, there is a quality control process to ensure that the content -- the words going out the door -- conform to the standards of the agency.
Within the US national security intelligence community these standards are laid out in ICD 203 and I suspect that other intelligence agencies and organizations worldwide have something similar.
But who makes sure the same thing is true for the form?
All of this is a very long preamble to an exercise I do in my Intelligence Communications class. In the vast majority of the exercises and assignments in that class, I ask students to focus on the elements of good intelligence communication: bottom-line-up-front estimates, concision, clarity, decisionmaker focus, accuracy, etc.
In one exercise, though, I ask them to take a written report and re-imagine it as a primarily visual product. I task them to keep all the elements of a good intelligence product but to visualize those elements rather than put them in print.
Over the years, I have received some wonderfully innovative products. This year was no different. One of the products stood out, however. Nimalan Paul, using online software from Xtranormal.com, created an amusing and compelling animated video that contains virtually the same content as the written product on the same topic.
Before you see the video, I will share the written version of the report with you. It follows the generic form guidance that we use here at Mercyhurst in our intelligence communications classes for written products:
Which report is better at communicating the results of the analysis? One of our grad students actually did a study on this a number of years ago. His findings showed that if you are above a "certain age", the text document is the best at communicating but that if you are below that certain age, then the animation is likely to be more effective.
Beyond the age distinction, what else makes one format better than the other? Is it all personal preference? Is one more "honest" than the other or is one just more traditional?
Finally, if one of these forms is more honest than the other, shouldn't we be teaching how to recognize that difference?
McKinsey, the capo di tutti capi of consulting firms, recently published a fascinating report titled How Companies Are Benefiting From Web 2.0. You have to register with McKinsey to read the full text but it is probably worth it if you are interested in how (and what) Web 2.0 technologies are actually making a difference in the very competitive, global business environment -- and, of course, which technologies appear to be falling out of favor as well.
The coolest thing about the report is the visualization tool they developed to supplement it. I have a screenshot of one of the views of the data below, but that does not do it justice. Click here or on the picture to go to the fully interactive set of charts and graphs (No registration required to play with the chart...).
The most interesting thing about the report, however, is the implications it holds for the intelligence community and its attempt to bring Web 2.0 technologies into the workplace. According to a report from earlier this year, Web 2.0 is in a midlife crisis within the national security intelligence community. The McKinsey report pretty clearly points to the likely reasons why. Specifically, they identified three major performance factors (ranked by the percentage that each factor made in the average company's success):
"Management capabilities ranked highest at 54 percent, meaning that good management is more than half of the battle in ensuring satisfaction with Web 2.0, a high rate of adoption, and widespread use of the tools. The competitive environment explained 28 percent, size and location 17 percent."
Since management was such a fundamental part of the success or failure of these initiatives, McKinsey then dug into the numbers regarding management and found three critical management-related sub-factors:
"Parsing these results even further, we found that three aspects of management were particularly critical to superior performance: a lack of internal barriers to Web 2.0, a culture favoring open collaboration (a factor confirmed in the 2009 survey), and early adoption of Web 2.0 technologies."
Yoikes!
If McKinsey's results are accurate, then a true cynic would say the national security intel community already has three strikes against it. In these circumstances, it is only surprising that Web 2.0 has had any success -- at all.
That view is clearly unfair to the thousands of people who are already successfully working with these technologies inside the national security intelligence community. What would also be unfair, however, is to underestimate the roadblocks that conventional management approaches may be putting in the way of the productivity to be gained from implementing these technologies in intelligence.
Just when I was about to wrap this series up, all of a sudden a new report comes out that contains a number of juicy details about where jobs will be in the Federal Government over the next three years -- including amazingly, specifically, jobs in intel analysis!
The Partnership For Public Service, a well-funded and well-connected non-profit with the stated mission to "revitalize our federal government by inspiring a new generation to serve" put the report together based on information provided by the 35 government agencies that participated.
The level of participation by the intel agencies is unclear. The CIA, for example, did not participate. The Department of Defense (DOD) and all of the services did participate, however, and this is important because nearly all (80% or more) of the intel budget falls under DOD.
Maddeningly, it is unclear what the numbers provided by the services actually represent. For example, I am pretty sure that the numbers on the site are for DOD civilian jobs and do not include military positions but I am not positive. Likewise, I am pretty sure that the DOD numbers include subordinate agencies like DIA and NSA but I cannot be sure since the CIA clearly opted out and these other agencies may have done the same.
Beyond the services, I have other questions about what the numbers actually mean. For example, the line item in the report labels the "Professional Field" as Intelligence Analysis. Does this include anyone associated with the intelligence analysis function? Or just analysts?
"Intelligence agencies expect to hire 5,500 people in the next year and "in the same order of magnitude" over the following two years, according to Ronald P. Sanders, chief human capital officer for the Office of the Director of National Intelligence. Such agencies include the Central Intelligence Agency and the National Security Agency. "It's a combination of how much turnover we expect and how much growth we expect in our budget," Sanders said."
With the data provided by the report and the Post's article, it is possible to go back to the back of the envelope again and do some more analysis.
Any way you add up the numbers, you get a reasonably consistent answer to the question of how many intel analysts the federal government needs each year:
If you add up the numbers for the agencies listed needing professionals in the Intelligence Analysis field in this new report, you get 3676 over the next three years (or about 1200 a year).
Previous info suggested that about 17% of the intel workforce are analysts. Using Sanders' 5500 total number, this translates to 935 new-hire analysts (17% of 5500) across the entire national security IC each year for the next three years.
Both these numbers triangulate pretty well with my own, earlier, estimate of about 1000 total (as the median; the range was 400-2000).
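For anyone who wants to reproduce the envelope math, it fits in a few lines of Python (the figures are the ones cited above):

```python
# Back-of-the-envelope check on the analyst hiring numbers
report_hires_3yr = 3676              # Intelligence Analysis hires, 3 years
print(round(report_hires_3yr / 3))   # 1225 -> "about 1200 a year"

annual_ic_hires = 5500               # Sanders' total annual hiring figure
analyst_share = 0.17                 # analysts as a share of the workforce
print(round(annual_ic_hires * analyst_share))  # 935 new-hire analysts/year
```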
The Department of Justice (DOJ) is undeniably the place to look for jobs as an analyst. More than a third of the total 3676 projected hires in intel analysis over the next three years are coming from DOJ according to the report.
The factors mentioned by Sanders in the Post article were turnover and budget. Intel budgets are likely to remain flat so my guess is that this is mainly due to turnover and that much of this is driven by retirement. There is some support for this in the new report (see the Turnover Tab on the Security And Protection page).
The factor not mentioned was a shift away from contract hires in intel (i.e. closing down contracts and making those workers move to the government side). Contractors currently add another 30+% to the total intel workforce. It is hard to imagine much widespread growth in the contract sector but there is no recent evidence to suggest that there will be a decline next year either (For more info on estimates with regard to contract hires see this post).
Previous info suggests the US national security intelligence community has about 100,000 government employees. 5,500 new employees each year, based mostly on voluntary turnover, translates to roughly a 5.5% turnover rate (5,500 / 100,000). If accurate, this places the US national security intelligence community in the same general category as other low-turnover industries such as biotechnology and other high-tech fields.
Original material on Sources and Methods Blog by Kristan J. Wheaton is licensed under CC BY-SA 4.0. To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0