Showing posts with label Business Intelligence. Show all posts

Wednesday, July 27, 2011

More Data Cake, Please! (EpicGraphic.com)

I would change "data" to "information" and "information" to "intelligence" but other than that, I like this cake!

(Hat tip to Nimalan Paul for the link)

data cake
Image by EpicGraphic

Thursday, May 19, 2011

Why Good Data Isn't Enough (British Medical Journal And The University Of Michigan)

You are briefing the boss today and you are pretty excited.  You were tasked to take a hard look at two different ways of doing the same thing -- the "old way" and the "new way".  The old way was OK but your research clearly shows that the new way is much better.

You stand up in front of the boss.  You know you are speaking a little quickly (you may not even be pausing all that much) and your voice is probably a little higher than it usually is -- but none of that matters.  Your data is rock solid.

In fact, you have even put your great data into a pie graph that clearly supports your position.  This is your ace in the hole because you know the boss loves pie graphs.

All of this explains why you are stunned when the boss decides to continue to do things the old way.

Two interesting studies, one quite old and one brand new, explain why what you said mattered far less than how you said it.

http://www.bmj.com/content/318/7197/1527.full

The first study, from 1999, "Influence of data display formats on physician investigators' decisions to stop clinical trials: prospective trial with repeated measures," from the British Medical Journal (hat tip to social network analysis expert Valdis Krebs and his prolific Twittering), asked a number of physicians to look at exactly the same data using one of four different visualization techniques -- bar graph, pie graph, chart or "icons".  You can see the four different displays in the picture to the right.  Note:  The test subjects each saw only one of these, not all four together at once.

Now, I admit, these charts are a little dense at first.  Basically, you have two different groups: those who started the study with a good prognosis and those who started with a poor prognosis.  You also have those who received the old treatment and those who received the new treatment.

The question was, based on these results, do you continue this study or not?  The doctors involved in the study were all research physicians and used to seeing this kind of data and making these kinds of decisions. 

Despite the fact that the data was exactly the same in all four images, and overwhelmingly in support of the new treatment option, there was a statistically significant difference in the accuracy of the physicians' decisions based exclusively on how the data was presented.

The least accurate?  Pie and bar graphs.  Charts did OK but the best option was the "icons". 

This kind of iconic chart is probably new to many readers.  It shows the impact of the treatments on every single patient in the study.  While this kind of display yielded the most accurate results in the study, it was also the most disliked by the test subjects.

The overwhelming preference was for the chart, while a minority preferred the bar or pie graphs.  Not only did none of the participants indicate that they preferred the icons, but a significant number of them expressed derision at the format in their after-action comments.
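For readers to whom icon displays are new, a few lines of code can generate one. A minimal sketch in Python (the patient counts here are hypothetical illustrations, not the BMJ study's data):

```python
# Print a simple icon array: one symbol per patient, grouped by outcome.
# 'o' = survived, 'x' = died. Counts are hypothetical, not the study's figures.

def icon_array(survived, died, per_row=10):
    """Return the icon array as a list of fixed-width rows."""
    icons = "o" * survived + "x" * died
    return [icons[i:i + per_row] for i in range(0, len(icons), per_row)]

for label, survived, died in [("Old treatment", 28, 12), ("New treatment", 35, 5)]:
    print(f"{label}: {survived} survived, {died} died")
    for row in icon_array(survived, died):
        print("  " + row)
```

The appeal of this format is exactly what the study suggests: every patient is visible as an individual mark, so the reader counts outcomes directly instead of decoding an abstraction.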

This study reminds me of a series of studies conducted by Ulrich Hoffrage and Gerd Gigerenzer at the Max Planck Institute in Berlin demonstrating that expressing statistics as "natural frequencies" (e.g. 2 out of 20 instead of the more common 10%) leads to better understanding and better (i.e. more "Bayesian") reasoning.  (Jen Lee, Hema Deshmukh and I were able to replicate these results using a typical analytic problem, so I believe this effect is important in the context of intelligence as well.)
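To see why natural frequencies help, here is a quick Python sketch comparing the two forms of the same Bayesian calculation (the numbers are a generic textbook-style illustration, not figures from the Max Planck studies):

```python
# Bayes' rule two ways: abstract percentages vs. natural frequencies.
# The numbers below are a standard textbook-style illustration (hypothetical).

# Percentage form: P(condition | positive test)
base_rate = 0.01      # 1% of the population has the condition
sensitivity = 0.80    # P(positive | condition)
false_pos = 0.096     # P(positive | no condition)

posterior = (base_rate * sensitivity) / (
    base_rate * sensitivity + (1 - base_rate) * false_pos
)

# Natural-frequency form: imagine 1,000 concrete people instead.
true_positives = 8                     # 80% of the 10 who have the condition
false_positives = round(0.096 * 990)   # about 95 of the 990 who do not

posterior_nf = true_positives / (true_positives + false_positives)

print(f"Percentages:         {posterior:.3f}")
print(f"Natural frequencies: {posterior_nf:.3f}")
```

Both routes land on roughly 8%, but the second one makes the answer visible (8 true positives out of about 103 total positives) without any formula at all, which is the Hoffrage and Gigerenzer point.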

The second piece of research is from the University of Michigan's Institute For Social Research and is still in pre-publication review.  In what appears to be a very cleverly designed study, researchers looked at 200 telephone interviewers (100 male and 100 female).

They found that interviewers who spoke moderately fast, with lower pitched voices (if male) and with 4 to 5 natural pauses per minute were the most effective at getting people to listen to them.

Combining the results of these studies, it is easy to imagine that the most powerful presentation would be one using icons combined with a proficient speaker.  The opposite (as demonstrated in the story that started this post) could reasonably be expected to perform less well -- even if the information were exactly the same.

As I have said before, like it or not, it is not enough to have good info, you have to be able to communicate it effectively as well.  The flip side of this coin is equally important for intelligence professionals -- we may well be hard-wired to be biased towards high quality forms of communication, even if the quality of the content is second rate.

Monday, January 24, 2011

Where Can I Find Good SITREP Info On Various Countries? (Link List)

http://hisz.rsoe.hu/alertmap/index2.php
An RFI (Request for Information) rolled across my desk this morning asking where to find good SITREP (Situation Report -- this post is likely to be a little acronym heavy...) information for various countries in the world.

The idea, as I understood it, was to be able to prepare and deliver unclassified briefings on the current situation in a country of interest. Some people call these kinds of reports INFOSUMs (Information Summaries) or INTSUMs (Intelligence Summaries) or something else entirely but, whatever you call them (and there are some important technical differences between them all), they fundamentally focus on what happened in an area of interest (geographical or functional) over a standard period of time.

There are a couple of places that I think are particularly good for this kind of information. The first is Google News Alerts. If you are really hardcore, you should use the service in the native language of the country (if available) as well as in English.

Second, I would check out the ReliefWeb pages for the country of interest. While ReliefWeb does not cover every country, if it is a country of interest for the US intel community, it is probably on ReliefWeb as well. The site contains a wealth of hard to find SITREPs from the UN and various humanitarian agencies as well as a very complete map collection.

I also like the International Crisis Group's Crisis Watch. It has the look and feel of an old school intel watch report but the ICG has on the ground reporters and analysts in many countries. In addition to the Crisis Watch report, the ICG also prepares short analytic reports and, on occasion, detailed strategic intel reports on various hotspots around the world.

Finally, while more useful for natural disasters than standard SITREP type info, you might want to look at this near real time map (you can see a screenshot of a portion of it at the top of this post) of all of the stuff going on in the world. It is pretty nifty to look at even if some countries are not as well served as others.

I am sure that everyone has their favorite sites (please leave 'em in the comments!) but I thought these were particularly useful since they aggregate content.


Friday, August 13, 2010

Does Analysis Of Competing Hypotheses Really Work? (Thesis Months)

The recent announcement that collaborative software based on Richards Heuer's famous methodology, Analysis of Competing Hypotheses, would soon be open-sourced was met with much joy in most quarters but some skepticism in others.

The basis for the skepticism seems to be the lack of hard evidence that ACH actually improves forecasting accuracy.  While this was not the only (and may not have been the most important) reason why Heuer created ACH, it is certainly a question that bears asking.

No matter how good a methodology is at organizing information or creating an analytic audit trail or easing the production burden, etc., the most important element of any intelligence methodology would seem to be its ability to increase the accuracy of the forecasts generated by the method (over what is achievable through raw intuition). 

With a documented increase in forecasting accuracy, analysts should be willing to put up with almost any tedium associated with the method.  A methodology that actually decreases forecasting accuracy, on the other hand, is almost certainly not worth considering, much less implementing.  Methods which match raw intuition in forecasting accuracy really have to demonstrate that the ancillary benefits derived from the method are worth the costs associated with achieving them.

It is with this in mind that Drew Brasfield set out to test ACH in his thesis work while here at Mercyhurst.  His research into ACH and the results of his experiments are captured in his thesis, Forecasting Accuracy And Cognitive Bias In The Analysis Of Competing Hypotheses (full text below or you can download a copy here).

To test ACH, Drew used 70 students divided between a control and an experimental group who were all familiar with ACH.  The groups were asked to research and estimate the results of the 2008 Washington State gubernatorial election between Democrat Christine Gregoire and Republican Dino Rossi (Gregoire won the election by about 6 percentage points).  The students were given a week in September 2008 to independently work on their estimate of who would win the election in November.

The results were in favor of ACH in terms of both forecasting accuracy and bias.  In Drew's words, "The findings of the experiment suggest ACH can improve estimative accuracy, is highly effective at mitigating some cognitive phenomena such as confirmation bias, and is almost certain to encourage analysts to use more information and apply it more appropriately."

The results of the experiment are displayed in the graphs below:
Statistical purists will argue that the results did not meet the traditional 95% confidence threshold, suggesting that the accuracy difference may be due to chance. True enough. What is clear, though, is that ACH doesn't hurt forecasting accuracy and, when combined with the other results from the experiment (see below), strongly suggests that Drew's characterization of ACH is correct.
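For readers who want to see what the purists' objection looks like in practice, here is a sketch of a two-proportion z-test in Python. The counts are hypothetical placeholders, not Drew's actual figures; the point is only that with small groups, even a sizable accuracy gap can fall short of p < .05:

```python
# Two-proportion z-test of the kind a "statistical purist" would run on
# forecasting accuracy. Counts below are hypothetical, not the thesis data.
from math import sqrt, erf

def two_prop_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, two-sided
    return z, p_value

# e.g. 26 of 35 accurate with ACH vs. 20 of 35 without (hypothetical)
z, p = two_prop_z(26, 35, 20, 35)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With 35 subjects per group, a 74% vs. 57% accuracy split still comes out around p ≈ 0.13, which is exactly the kind of "fuzziness" at issue here: not proof of no effect, just too few subjects to rule out chance at the conventional level.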

Because Drew captured the political affiliation of his test subjects before he conducted his experiment, he was able to sort those subjects more or less evenly into the control and experimental groups.  Here again, ACH comes away looking pretty good:
The chart may be a bit confusing at first but the bottom line is that Republicans were far more likely to accurately forecast the eventual victory of the Democratic candidate if they used ACH.  Here again, the statistics suggest that chance might play a larger role than normal (an effect exacerbated by the even smaller sample sizes for this test).  At the least, however, these results are consistent with the first set and, again, do nothing to suggest that ACH does not work.

Drew's final test is the one that helps clarify any fuzziness in the results so far.  Here he was looking for evidence of confirmation bias -- that is, analysts searching for facts that tend to confirm their hypotheses instead of looking at all facts objectively.  He was able to find statistically significant amounts of such bias in the control group and almost none in the experimental group:
It is difficult for me to imagine a method which worked so well at removing biases that would also not improve forecasting accuracy. In short, based on the results of this experiment, concluding that ACH doesn't improve forecasting accuracy (due to the statistical fuzziness) would also require one to conclude that biases don't matter when it comes to forecasting accuracy. This is an arguable hypothesis, I suppose, but not where I would put my money...

The most interesting part of the thesis, in my opinion, though, is the conclusion.  Here Drew makes the case that the statistical fuzziness was a result of the kind of problem tested, not the methodology.  He suggests that "ACH may be less effective for an analytical problem where the objective probabilities of each hypothesis are nearly equal."

In short, when the objective probability of an event approaches 50%, ACH may no longer have the resolution necessary to generate an accurate forecast.  Likewise, as objective reality approaches either 0% or 100%, ACH becomes increasingly less necessary as the correct estimative conclusion is more or less obvious to the "naked eye". Close elections, like the one in Washington State in 2008 may, therefore, be beyond the resolving power of ACH.

Like much good science, Drew's thesis has generated a new testable hypothesis (one we are, in fact, in the process of testing!).  It is definitely worth the time it takes to read.

Forecasting Accuracy and Cognitive Bias in the Analysis of Competing Hypotheses

Thursday, July 29, 2010

IBM Creates Interactive Map/Infographic Of CIA World Factbook (IBM.com)

IBM, in order to demonstrate some of its latest web-based technologies, has taken the data from the CIA's World Factbook and re-mixed it into a stunning, interactive infographic.

The final product allows the user to engage with and compare the data for the various countries of the world much more quickly.  The screenshot to the right does not (as usual) do the product justice.  I have zoomed in on central Africa to show some of the detail but you can just as easily take a look at the whole world and instantly get a sense of where various regions lie with respect to any of the data the World Factbook contains.

I strongly recommend you go here to see the full product.  Play around with it; I think you will be impressed.

If you are interested in additional information about IBM's initiative, you can go to the cryptically named IBM ILOG Elixir Blog or to Information Aesthetics, where I first picked up on this product.

Note:  This has been a very good week for maps (See also here and here) ...

Thursday, June 24, 2010

Part 2 -- What Is Strategy And What Are Strategic Decisions? (Teaching Strategic Intel Through Games)

Carl von Clausewitz, painting by Karl Wilhelm ...Image via Wikipedia

"As war is a game through its objective nature, so also is it through its subjective. -- Carl von Clausewitz, On War, Chapter 1.

While there are many definitions of "strategy" and "strategic decisions", for the purposes of this paper, a strategy is an idea or set of ideas about how to accomplish a goal and strategic decisions are ones that typically put at risk a substantial portion of an entity's disposable resources.

Defining strategy broadly is important.  Far too often, strategy is only associated with terms such as "long-term" or "large" and strategic thinking is something accomplished only at corporate headquarters or by generals and kings.

Defining strategic decisions in the context of the resources risked by the entity (person or organization) making the decision puts the role of strategy into perspective.  Under this definition, it is possible for the exact same decision to be strategic in one case and tactical (or even trivial) in another context. 

For example, imagine an individual who owns a successful dry cleaning store.  Deciding to open up another branch of the store in a different part of town is clearly a strategic decision for this owner.  This owner will likely commit much of his disposable resources (time, money, personnel) to getting the new branch set up and operating efficiently.  Failure with this new branch would likely impact the old branch as well.

The same decision, made instead by the owner of a chain of 10,000 dry cleaning stores, does not have the same strategic quality as in the first case.  In fact, in such a large, national organization, such a decision might not even be made at the owner's level.  The percentage of disposable resources placed at risk is likely much smaller and it is entirely possible that the decision would be pushed down to regional or even sub-regional levels.

More importantly, defining strategy in terms of the resources at risk broadens the scope of what arguably constitutes strategic intelligence as well.   Under this definition, strategy is not confined to large, powerful organizations.  Small businesses, police units and even students can have strategies and, in turn, require strategic intelligence to support their decision-making processes.

Next:
What is intelligence and what is the role of intelligence in the formulation of strategy?

Friday, March 12, 2010

Can't Be Both: Visually Displaying Inconsistencies

I like finding inconsistencies. It means that something is wrong with my argument. Inevitably, addressing the inconsistency leads to greater nuance in my analysis. That, I would argue, is a good thing.

One of my most popular recent posts (see the "Top Posts" box to the right) was on form and content and what intelligence products say by the way they look.

Today, I saw this infographic (via FlowingData, by way of the Consumerist, and originating at The Physicians Committee For Responsible Medicine) and it started me thinking along those same lines again:


Assuming the data is accurate (and I have no reason to suspect it), it does a good job of pointing out the inconsistency between what the government says we should eat and what foods the government actually funds (through subsidies).

Understanding how to display these kinds of inconsistencies is important to intel analysts, too. For example, one of the most common inconsistencies leveled against the intelligence community is that it is both all powerful and incompetent. It can't be both (obviously) but how can you capture this inconsistency in a graphic? See my graphic design-challenged attempt below:
I simply used Google to search for the phrases "CIA is all powerful" and "CIA is incompetent" and then made the graph based on the number of hits the two phrases received. It is not rocket science and it is definitely not a very good graphic, but it makes the point, I think. Furthermore, I can imagine a much more complex graphic muddying the waters.

  • By the way, ADM Blair will be glad to hear, I suspect, that the phrase "DNI is incompetent" yields no hits...Of course, neither does the phrase "DNI is all powerful".
  • Note, too, the powerful new metric I have created: The I-TOT -- the Intelligence is Taking Over The-world index (Hat tip to NGA for the idea).
Lesson learned? Simplicity wins. Simple graphics communicate big inconsistencies in analytic findings best. This is probably pretty obvious to a graphic designer but, for me, it was a bit of an insight.
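In that spirit of simplicity, the whole exercise reduces to a few lines of Python that turn two hit counts into a plain text bar chart (the counts below are placeholders, not actual Google results):

```python
# Scale each phrase's hit count to a fixed-width text bar.
# The counts here are hypothetical placeholders, not real Google results.

def bar_chart(counts, width=40):
    """Return one 'label | ##### count' line per (label, count) pair."""
    biggest = max(counts.values())
    pad = max(len(label) for label in counts)
    return [
        f"{label.ljust(pad)} | {'#' * round(count / biggest * width)} {count}"
        for label, count in counts.items()
    ]

hits = {'"CIA is all powerful"': 120, '"CIA is incompetent"': 3400}
for line in bar_chart(hits):
    print(line)
```

Two bars, one inconsistency; anything fancier would just get in the way.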

Wednesday, January 20, 2010

How To Spot An ATM "Skimmer" And Why You Should Care (KrebsOnSecurity)


Skimming is the theft of ATM or credit card information during the course of what appears to be an otherwise legal transaction. ATM skimmers are designed, for example, to acquire the ATM card number and then, through a variety of devices, to capture the PIN as well. This allows the thief to collect the data and then use it to gain access to the account.

KrebsOnSecurity (via Boing Boing) had a very interesting example of one such skimming device (see picture) with links to pictures of other such devices. A casual search of the internet yielded many, many other examples (including this YouTube video). Lifehacker also linked to a very good PDF by an Australian firm with some detailed info on both the skimmer and the PIN capturing devices.

This type of fraud has been around for some time now and the tricks used by the bad guys continue to get more sophisticated. Despite this, it seems that many people are not aware of the risks. It is worth taking a look at Krebs and the YouTube video simply to be armed with a little bit of info.


Wednesday, July 8, 2009

Interesting Vision Of A Mixed Reality World Courtesy Of MS (YouTube)

Johnny Holland is a pretty cool and very well-written online design magazine. They were the first (that I saw) to pick up on this new vision of a mixed reality courtesy of Microsoft.



I spent a good bit of time last year looking into virtual worlds and thinking about their future. I had a lot of help, of course, from some of the brightest people I know, but, in the end, I came away less convinced that we are about to insert ourselves into the Metaverse and more convinced that we are headed toward a mixed reality future.

In this vision, lightweight, transparent and mobile devices allow the user to project "overlays" onto the real world in real time. The net effect is a sort of heads-up display that will allow us to optimize our attention. This MS video gives an idea of how this might impact the way we do ordinary activities in a mixed reality world.


Wednesday, June 3, 2009

SCIP Webinar On Evaluating Intelligence (Self-promotion)

I will be conducting a webinar for the Society Of Competitive Intelligence Professionals (SCIP) on evaluating intelligence on 10 JUN 09 at 1200 EST. I will be going over some of the material in the "evaluating intelligence" series of posts I did earlier this year as well as adding some new stuff that has come along since then.

My goal is to make the material and ideas concerning the evaluation of intelligence a bit more accessible (One of the particular advantages of this format is that it allows for questions during and after the presentation).

I have not done a webinar before so it should be interesting. You can find out more info here: Webinar -- Evaluating Intelligence

(Note: The webinar is not, unfortunately, free. SCIP is a non-profit organization but needs to -- I am assuming -- cover its costs for setting up and running the webinar. I am not charging anything for my time and am getting no compensation for this event.)
