
Wednesday, October 20, 2021

Is It OK To Sell Eggs To Gophers?

Apparently not...

...At least according to a recently launched experiment in ethical artificial intelligence (AI).  Put together by a number of researchers at the Allen Institute for AI, Ask Delphi lets you submit a plain English question and get a straight answer.  

It does pretty well with straightforward questions such as "Should I rob a bank?"  

It also appears to have some sense of self-awareness: 

It has surprisingly clear answers for at least some paradoxes:

And for historically profound questions of philosophy:

And these aren't the only ways it is clearly not yet perfect:

None of its imperfections are particularly important at this point, though.  It is still a fascinating experiment in AI and ethics.  As the authors themselves say, it "is intended to study the promises and limitations of machine ethics and norms through the lens of descriptive ethics. Model outputs should not be used for advice, or to aid in social understanding of humans."

I highly recommend it to anyone interested in the future of AI.  

For me, it also highlights a couple of issues for AI more generally.  First, the results are obviously interesting, but it would be even more interesting if the chatbot could explain its answers in equally straightforward English.  This is likely a technical bridge too far right now, but explainable AI is, in my opinion, not only important but essential to instilling confidence in human users as the stakes associated with AI go up. 

The second issue is how AI will deal with nonsense.  How will it separate nonsense from questions that simply require deeper thought, like koans?  There still seems to be a long way to go, but this experiment is certainly a fascinating waypoint on the journey.

Monday, May 10, 2021

The Future Is Like A Butler

Imagine someone gave you a butler. Completely paid for. No termination date on the contract. What would you do?

At first, you’d probably do nothing. You’ve never had a butler. Outside of movies, you’ve probably never seen a butler. You might even feel a little nervous having this person in the room with you, always there, always ready to help. 

Once you got over your nervousness, you might ask the butler to do something simple, like iron your shirts or make you some coffee. “Hey,” you might think after a while, “This is pretty nice! I always have ironed shirts, and my coffee is always the way I like it!” 

Next, you’d ask your butler to do other things, more complicated things. Pretty soon, you might not be able to imagine your life without a butler.

The parable of the butler isn't mine, of course. It is a rough paraphrasing of a story told by Michael Crichton in his 1983 book, Electronic Life. Crichton, more famous today for blockbusters like Jurassic Park, The Andromeda Strain, and Westworld, was writing about computers, specifically personal computers, back then. Crichton correctly predicted that personal computers would become ubiquitous, and the main goal of Electronic Life was to help people become more comfortable with them. 

The story of the butler was a launching point for his broader argument that personal computers were only going to get more useful with time, and that now was the time to start adopting the technology. It worked, too. Shortly after I read his book, I bought my first computer, a Commodore 64.

Today’s Army faces much the same problem. The difference, of course, is that the future presents today’s military with a much broader set of options than it did in 1983. Today, it feels like the Army has been given not one but hundreds of butlers. Quantum computing, artificial intelligence, synthetic biology, 3D printing, robotics, nanotech, and many more fields are arguably poised to rapidly and completely change both the nature and character of warfare.

Despite the deluge of options, the question remains the same, “What do I do with this?”

The answer begins with Diffusion of Innovations theory. In his now classic book of the same name, Everett Rogers first defined the theory and the five types of adopters. Innovators, who aggressively seek the “next big thing”, are the first to take up a new product or process. Early adopters are the second group. Not quite as adventurous as the innovators, the early adopters are still primarily interested in acquiring new technology. Early majority and late majority adopters sit on either side of the midpoint of a bell-shaped adoption curve and represent the bulk of all possible adopters. Finally come the laggards, who tend to adopt a new innovation late or not at all.
(Source: BlackRock White Paper)

For example, the uptake of smartphones (among many other innovations) followed this pattern. In 2005, when smartphones were still new to the mass market, only 2% of the population (the Innovators) owned one. Three years later, market penetration had only reached 11%, but, from 2009-2014, the smartphone experienced double-digit growth each year such that, by 2016, some 81% of all mobile phones were smartphones. This S curve of growth is another aspect predicted by Diffusion of Innovations theory.
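To see what that S curve looks like in numbers, here is a minimal Python sketch of a logistic adoption curve. The growth rate, midpoint, and ceiling are made-up parameters chosen only to show the shape; they are not fitted to the smartphone figures above.

```python
import math

def logistic_adoption(year, midpoint=2010.5, rate=0.75, ceiling=1.0):
    """Classic logistic (S-shaped) adoption curve: slow start, steep middle,
    saturation near the ceiling.  All parameters are illustrative, not fitted
    to real smartphone data."""
    return ceiling / (1 + math.exp(-rate * (year - midpoint)))

for year in range(2005, 2017):
    share = logistic_adoption(year)
    print(f"{year}: {share:6.1%}  {'#' * int(share * 40)}")
```

Run it and the familiar shape appears: a few percent of adopters at the start, explosive growth through the middle years, and a long flattening as the market saturates.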

Not all innovations succeed, however. In fact, all industries are littered with companies that failed to achieve critical mass in terms of adoption. While there are many reasons that a venture might fail, management consultant Geoffrey Moore, in his influential book, Crossing the Chasm, states that the most difficult leap is between the early adopters and the early majority. Early adopters tend to be enthusiastic and eager to try the next big thing. The early majority is more pragmatic and is looking for a solution to a problem. This difference in perspective accounts for much of the chasm.
(Source: Agile Adoption Across the Enterprise – Still in the Chasm)

The Army is aggressively addressing the innovation and early adoption problem by developing sophisticated plans and tasking specific units and organizations to implement them. The need to innovate is, for example, at the heart and soul of several recent policy announcements, including the 2019 Army People Strategy and the 2019 Army Modernization Strategy. Beyond planning, the Army is already far along in doing some of the hard work of innovating. Indeed, organizations and projects as small as TRADOC’s Mad Scientists and as large as the Army Futures Command Synthetic Training Environment are examples that show that Army senior leaders understand the need to innovate and are acting now to put early adoption plans into motion.

But what about the rest of the Army? The part of the Army that isn’t directly involved in innovation? The part that is not routinely exposed to the next big thing? That hasn’t, to get back to the original point, ever had a butler?

Again, Diffusion of Innovations theory provides a useful guide. Rogers describes five stages in the adoption process: knowledge (awareness), persuasion, decision, implementation, and confirmation. For the rest of the Army, awareness, and, to a lesser extent, persuasion, should be the current goal. 

While this may seem simple, in a world of hundreds of butlers, it is anything but. With so many technologies poised to influence the Army of the future, it becomes extremely difficult to focus. Likewise, merely knowing the name of a technology or having some vague understanding of what it is and what it does is not going to be enough. No one in the Army would claim that you could learn to fire a rifle effectively merely by watching YouTube videos, and the same holds true for technologies like autonomous drones, 3D printing, and robots.

The only way to engender true understanding of both the strengths and weaknesses of an innovation is to provide a hands-on experience. Cost alone should not be a significant impediment to exposing the bulk of the Army to the technologies of the future. Autonomous drones are now available for under $1,000, entry-level 3D printers can be had for as little as $200-$700, virtual reality headsets are available for $300-$1,000, and build-your-own robot kits are available for a couple of hundred dollars.

None of these products are as sophisticated as the kinds of products the Army is considering, of course, but putting simpler versions of these technologies in the hands of soldiers today would likely significantly improve the Army’s odds of being able to cross Moore’s chasm between visionary thinking and pragmatic application in the future.

How and where should the Army implement this effort to familiarize the force with the future? Fortunately, the Army has a good place, a good concept, and some prototypes already in place--at the library. The Army library system contains over 170 libraries worldwide. While many people continue to think of libraries as silent spaces full of dusty books, the modern library has been re-imagined as a place not only for knowledge acquisition but also as tech centers for communities.

Nowhere is this more clear than in the “makerspaces” that are increasingly woven into the fabric of modern libraries. Typically offering access to equipment that, while relatively inexpensive, is outside the budget of most households, or to technology that is best first experienced in a hands-on, peer learning environment, makerspaces allow users to try out new technologies and processes at the user’s own pace and according to the user’s own interest. 

3D printers, laser cutters, and video and podcasting equipment are often combined in these makerspaces with more sophisticated traditional equipment such as high end, programmable sewing machines. Most times, however, the makerspace has been tailored by the local librarians to meet the needs of the population that the library serves. Indeed, the Army already has at least three examples of makerspaces in its library system: the Barr Memorial Library at Fort Knox, the Mickelsen Community Library at Fort Bliss, and The Forge at the US Army War College.

Imagine being able to go to the post library and check out an autonomous drone for the weekend, or to sit down and 3D print relief maps of the terrain you were going to cover on your next hike. Understanding the basics of these new technologies will not only make the future force more comfortable with them but also allow soldiers to think more robustly about how to employ these technologies to the Army's advantage.

While the cost of such a venture would be reasonable, acquiring the funding for any effort on the scale of the whole Army cannot be taken for granted. More challenging, perhaps, would be the process of repurposing the space, training staff, and rolling out the initiative. 

But what is the alternative? To the extent that the Army, as the 2019 People Strategy outlines, needs people at all levels "who add value and increase productivity through creative thinking and innovation," it seems imperative that the Army also have a whole-of-Army approach to innovation. To fail to do so risks falling into Moore's chasm, where the best laid plans of the visionaries and early adopters fall victim to the unprepared pragmatists who will always make up the bulk of the Army.

Wednesday, December 9, 2020

The BPRT Heuristic: Or How To Think About Tech Trends

A number of years ago, one of my teams  was working on a series of technology trend projects.  As we looked deeply at each of the trends, we noticed that there was a pattern in the factors that seemed to be influencing the direction a particular tech trend would take.  We gave that pattern a name:  the BPRT Heuristic.  

Tech trends are always interesting to examine, so I wanted to share this insight to help you get started thinking about any developing or emerging techs you may be following.  

Caveat:  We called it a heuristic for a reason.  It isn't a law or even a model of tech trend analysis.  It is just a rule of thumb--not always true but true enough to be helpful.
  • B=the Business Case for the tech.  This is how someone can make money off the tech.  Most R&D is funded by companies these days (this was not always the case).  These companies are much more likely to fund techs that can contribute to a revenue stream.  This doesn't mean that a tech without an obvious business case can't get developed and funded; it just makes it harder.
  • P=Political/Cultural/Social issues with a tech.  A tech might be really cool and have an excellent business case, but because it crosses some political or social line, it either goes nowhere or accelerates much more quickly than it might normally.  Three examples:  
    • We were looking at 3G adoption in a country in the early 2000s.  There were lots of good reasons to suspect that it was going to happen, until we learned that the President's brother owned the 2G network already in existence in the country.  He was able to use his family connections to keep competition out of the country.  
    • A social factor that delayed adoption of a tech is the story of Google Glass in 2013.  Privacy concerns driven by the possibility of videos taken without consent led to users being called "Glassholes."  Coupled with other performance issues, this led to the discontinuation of the original product (though it lives on in Google's attempts to enter the augmented reality market).  
    • Likewise, these social or cultural issues can positively impact tech trends as well.  For example, we have all had to become experts at virtual communication almost overnight due to the COVID crisis--whether we wanted to or not.
  • R=Regulatory/Legal issues with the tech.  The best example I can think of here is electromagnetic spectrum management.  Certain parts of the electromagnetic spectrum have been allocated to certain uses.  If your tech can only work in a part of the spectrum owned by someone else, you're out of luck.  Some of this "regulation" is not government sponsored either.  The Institute of Electrical and Electronics Engineers (IEEE), for example, establishes common standards for most devices in the world.  Your wifi router can connect to any wifi enabled device because they all use the IEEE's 802.11 standard for wifi.  Other regulations come from the Federal Communications Commission and the International Telecommunications Union.
  • T=The tech itself.  This is where most people spend most of their time when they study tech trends.  It IS important to understand the strengths and weaknesses of a particular technology, but as discussed above, it might not be as important as other environmental factors in the eventual adoption (or non-adoption...) of a tech.  That said, there are a couple of good sources of info that can allow you to quickly triangulate on the strengths and weaknesses of a particular tech:
    • Wikipedia.  Articles are typically written from a neutral point of view and often contain numerous links to other, more authoritative sources.  It is not a bad place to start your research on a tech.  
    • Another good place is Gartner, particularly the Gartner Hype Cycle.  I'll let you read the article at the link, but "Gartner Hype Cycle 'insert name of tech here'" is almost always a useful search string (Here's what you get for AI, for example...).  
    • Likewise, you should keep your eye out for articles about "grand challenges" in a particular tech (Here is one about grand challenges in robotics as an example).  Grand Challenges outline the 5-15 big things the community of interest surrounding the tech has to figure out to take the next steps forward.  
    • Similarly, keep an eye out for "roadmaps."  These can be either informal or formal (like this one from NASA on Robotics and autonomous systems).  The roadmaps and the lists of grand challenges should have some overlap, but they are often presented in slightly different ways.
Obviously, the BPRT Heuristic is not the answer to all your tech trend questions.  In providing a quick, holistic approach to tech trend analysis, it does, however, allow you to avoid many of the problems associated with too much hype.  
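If it helps to see the heuristic as something more than a mnemonic, here is a minimal, hypothetical sketch of BPRT as a quick scoring checklist.  The 1-5 scale, the equal weighting, and the example ratings are all my own illustrative assumptions; the heuristic itself doesn't prescribe any numbers.

```python
from dataclasses import dataclass

@dataclass
class BPRTAssessment:
    """Rough 1-5 ratings for each BPRT factor (5 = strongly favors adoption).
    The scale and the equal weighting are illustrative assumptions."""
    business_case: int       # B: is there a plausible revenue stream?
    political_social: int    # P: political/cultural/social headwinds or tailwinds
    regulatory_legal: int    # R: spectrum, standards, licensing, liability
    tech_maturity: int       # T: strengths and weaknesses of the tech itself

    def summary(self) -> str:
        scores = {
            "Business case": self.business_case,
            "Political/social": self.political_social,
            "Regulatory/legal": self.regulatory_legal,
            "Tech maturity": self.tech_maturity,
        }
        weakest = min(scores, key=scores.get)
        avg = sum(scores.values()) / len(scores)
        return f"Average {avg:.1f}/5; watch the weakest factor: {weakest}."

# Hypothetical example: consumer delivery drones
print(BPRTAssessment(business_case=4, political_social=2,
                     regulatory_legal=2, tech_maturity=4).summary())
```

The point of the exercise is less the average score than forcing yourself to look at all four factors, especially the weakest one, instead of fixating on the technology alone.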

Thursday, January 2, 2020

How To Think About The Future: A Graphic Prologue

(Note:  I have been writing bits and pieces of "How To Think About the Future" for some time now and publishing those bits and pieces here for early comments and feedback.  As I have been talking to people about it, it has become clear that there is a fundamental question that needs to be answered first:  Why learn to think about the future?

Most people don't really understand that thinking about the future is a skill that can be learned--and can be improved upon with practice.  More importantly, if you are making strategic decisions, decisions about things that are well outside your experience, or decisions under extreme uncertainty, being skilled at thinking about the future can significantly improve the quality of those decisions.  Finally, being able to think effectively about the future allows you to better communicate your thoughts to others.  You don't come across as someone who "is just guessing."    

I wanted to make this case visually (mostly just to try something new).  Randall Munroe (XKCD) and Jessica Hagy (Indexed) both do it much better of course, but a tip of the hat to them for inspiring the style below.  It is a very long post, but it is a quick read; just keep scrolling!

As always, thanks for reading!  I am very interested in your thoughts on this...)

Monday, November 18, 2019

Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data

"Stop it, Barsdale!  You're introducing confounds into my experiment!"
A little over a month ago, I wrote a post asking whether the form of an estimative statement matters in how well it communicates analytic confidence.  Specifically, I asked people to determine which of the following was "more clear" in response to the question, "Do you think the Patriots will win this week?":
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
I posted this as an informal survey and 72 people kindly took the time to take it.  Here are the results:

At first glance, the results appear to be less than robust.  The difference measured here is unlikely to be statistically significant.  Even if it is, the effect size does not appear to be that large.  The one thing that seems clear is that there is no clear preference.

Or is there?
Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!

I think I finally found something which soothes my aching ego a bit.  The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely.  Indeed, this sort of A/B testing is done precisely because the asker does not know which one the client/customer/etc. will prefer.

This assumption might not hold in this case.  Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years).  When they are included, however, they are almost always included at the end of the estimate.  Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."

How might the comfort of this familiarity change the results?  On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates).  On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the familiarity heuristic, or our preference for the familiar over the unfamiliar.  In experiments, the kind of thing that can throw your results off is known as a confound.

More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound:  BLUF.

Bottomline Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing.  "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter).  Indeed, the single most common type of comment from those that preferred the version with the statement of analytic confidence at the end was, as this one survey taker said, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."
Note:  Ellipses seem important these days and the ones in the sentence above mark where I took out the word "first."  I randomized the two statements in the survey so that they did not always come up in the same order.  Thus, this particular responder saw the second statement above (the one with the statement of analytic confidence at the end) first.
If the base rate of the two answers is not 50-50 but rather 40-60 (or even more skewed in favor of the more familiar, more BLUFy answer), then these results could easily become very significant.  It would be like winning a football game you were expected to lose by 35 points!
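To see why the assumed base rate matters so much, here is a quick back-of-the-envelope sketch in Python.  The 42-30 split below is purely hypothetical, not the actual survey result; the point is only that the same 72 responses can look unremarkable against a 50-50 null and quite surprising against a 40-60 one.

```python
from scipy.stats import binomtest

n_respondents = 72
prefer_confidence_first = 42   # hypothetical split, NOT the actual survey result

# Null #1: both phrasings are equally likely to be preferred (50-50)
p_even = binomtest(prefer_confidence_first, n_respondents, p=0.5).pvalue

# Null #2: familiarity/BLUF skews preference 40-60 against the
# confidence-first phrasing, so beating 40% is the interesting outcome
p_skewed = binomtest(prefer_confidence_first, n_respondents, p=0.4,
                     alternative='greater').pvalue

print(f"vs. a 50-50 baseline: p = {p_even:.3f}")
print(f"vs. a 40-60 baseline: p = {p_skewed:.4f}")
```

For a split like this, the first test falls well short of conventional significance while the second clears it comfortably--which is exactly the "winning a game you were expected to lose" intuition.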

Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."

Joking aside, it is an important topic.  As you likely know, it is not enough to just make an estimate.  It is also important to include a statement of analytic confidence.  To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis.  I don't think that anyone would disagree that form can have a significant impact on how the content is received.  The real questions are how does form impact content and to what degree?  Getting at those questions in the all-important area of formal estimative writing is truly something well worth additional study.

Tuesday, October 1, 2019

Analytic Confidence And The New England Patriots: A Hypothesis

"Don't try to stop me!  I'm having a thought!"  (Image Source)
I was driving to work this morning, thinking about analytic confidence (as one does), and I had a thought.  An interesting thought.  Before I tell you what it was, you need to take the one question survey at the link below to see if my thought has any merit (I will post the results as a follow-up to this post):

Which statement seems more clear to you?

Did you take the survey?  If not, go back and take it!

And now?  

OK!  Thanks!

People are often confused by the difference between an estimate and confidence in that estimate.  This confusion is driven, to a very large part, by the way the terms are often (mis)used in formal analytic writing.  It is not uncommon to see someone talk about their confidence when they are really making an estimate or, less commonly, to use estimative language to convey confidence.  

The two concepts, however, are very different.  The estimate communicates what you think is likely (or unlikely) to happen in the future.  Confidence speaks to the likelihood that something is mucking up the process used to establish that estimate.  

This is where the New England Patriots come in.  For example, I think it is very likely that the New England Patriots will win their next game (Note:  I am using the term "very likely" here the same way the DNI does).  I watch football but am by no means an expert.  I don't even know who the Patriots are playing next week.  I just know that they are usually a good team, and that they usually win a lot of games.  So, while I still think it is very likely that the Patriots will win, my confidence in that estimate is low.  The process I used for deriving that estimate was so weak, I won't be surprised to find out that they have a bye next week.

On the other hand, it is easy to imagine a forecaster who is steeped in football lore.  This hypothetical forecaster has an excellent track record derived in large part from a highly structured and efficient process for determining the odds of a victory.  This forecaster might say exactly the same thing I did--the Patriots are very likely to win their next game--but, because of a superior process, this forecaster has high confidence in their estimate.

It is important to convey both--the estimate itself and analytic confidence--when communicating the results of analysis to a decisionmaker.  To do anything less runs the risk of the decisionmaker misinterpreting the findings or assuming things about the process that are not true.  

It is also important to note that the "analytic confidence" mentioned here differs significantly from the far more commonly discussed notion of psychological confidence.  Psychological confidence is a statement about how one "feels" and can often be caused by cognitive bias or environmental factors.  There is no reliable relationship between forecasting accuracy and psychological confidence.  

Analytic confidence, on the other hand, is based on legitimate reasons why the analysis is more likely to be correct.  For example, analysis derived from facts presented by reliable sources is more likely to be correct than analysis derived from sketchy or disreputable sources.  In fact, there are a number of legitimate reasons for more rather than less analytic confidence (you can read about them here).

It is, of course, possible for analytic and psychological notions of confidence to be consistent, at least in the context of an individual forecast.  I, for example, "feel" that I have no reason to be confident in my estimate about the Patriots.  I also know, as I go down the list of elements responsible for legitimate analytic confidence, that very few are present.  Low applies to both my psychological and analytic variants of confidence, in this case.

That is not normal.  Overconfidence bias is typically the cause of feelings of confidence outpacing a more rational assessment of the quality of the analytic process.  Underconfidence, on the other hand, is typically caused by over-thinking a problem and is more common among experts than you might think.

Now to my thought.  Finally.

One of the big problems with analytic confidence is communicating it to decisionmakers in an intuitive way.  Part of this problem occurs, no doubt, because of the different meanings the word "confidence" can have.  Most people, when they hear the word "confidence" used in casual conversation, assume you mean the psychological kind.  Adding the word "analytic" in front of "confidence" doesn't seem to help much, as most people don't really have a notion of what analytic confidence is or how it differs from the more commonly used, psychological type of confidence (They don't want to know, either.  They have enough to remember).

The classic solution has been to ignore analytic confidence completely.  This is wrong for all the reasons discussed above.  Occasionally, however, analysts elect to include a statement of analytic confidence, typically at the end of the analysis.  Part of this is due to the "Bottomline Up Front (BLUF)" style of writing that is common to analysis.  The logic here is that the most important thing is the estimate.  That becomes the bottomline and, therefore, the first thing mentioned in the paper or briefing.

What if we flip that on its head?  What if we go, at least in casual conversation, with the analytic confidence first?  

Thus you had my two formulations:
  • "It's a low confidence estimate, but the Patriots are very likely to win this week."
  • "The Patriots are very likely to win this week.  This is a low confidence estimate, however."
These two statements say exactly the same thing in terms of content.  However, I think the form of the first statement better communicates what the analyst actually intends.  In other words, I think the first statement establishes a slightly different context.  Furthermore, I think this context will likely help the listener interpret my use of the word "confidence" correctly.  That is, the first statement is better than the second at suggesting that I am using confidence as a way to highlight the process I used to derive the estimate and not just how I feel about it.  

Another reason I think the second statement is inferior is because I think it sounds confusing to the casual listener.  It is theoretically better (the bottomline is definitely up front) but, unless you are steeped in the arcana of analytic writing, it cannot be easily interpreted and could lead to confusion.

That's the reason for the quick poll.  I just wanted to see what you thought--to see, in the words of Gertrude Stein, if there was any there there.  

Thanks and I will post what I found (and my inevitably shocked reaction to it) in a later post.

Monday, September 9, 2019

What Is A "Gray Rhino" And How Do I Tackle One? (+ That Time I Died For 7 Seconds)

A perfectly ordinary gray rhino. You still wouldn't want to be surprised by it. (By Krish Dulal - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=12888627)
I am taking a break today from my series on How To Think About The Future to talk about a new term I just heard:  The Gray Rhino.

A Gray Rhino is basically the opposite of a Black Swan.  It is a high impact, high probability event that not enough people are paying attention to.  

A good example of this may be the recent advances in the biological sciences.  When I began my current job, I asked 20 of the best thinkers I know, "What is the most under-hyped, under-rated technology or trend?"  I wanted to understand what I might be missing, what I should be examining more carefully.

I was surprised at the number of people who came back and said, in one form or another, "Biology."  Whether it is the prospects (and horrors) of gene editing, immunotherapies, mycorrhizal networks, bacterial manipulation, our understanding of the brain, or our ability to create whole new brains from scratch (!), advances in the biological sciences do seem poised to revolutionize our lives, yet they do not seem to get as much attention as other trends like artificial intelligence.  This is a Gray Rhino:  something that is almost certain to happen, will have a massive impact when it does, but is not getting the attention it deserves.

Not everything is either a Black Swan or a Gray Rhino, however.  A good example may be Hurricane Dorian, which recently leveled the Bahamas before causing all sorts of havoc up the east coast of the US.  The forecasting models did a good job of estimating where the hurricane would go and when it would get there.  Likewise, the sheer size of the thing communicated just how devastating it was going to be.  While there are always people who cannot afford to leave the path of a hurricane (or have nowhere to go) or those foolish few who choose to ride it out for the hell of it, most people gave the storm the attention it deserved and did what they could to take appropriate precautions.

As I think about the problem of how to deal with true Gray Rhinos, though, it seems to me that this is not primarily a problem of collection or analysis.  Researchers have enough info in these situations, and they understand it well enough, at least, to raise the issue(s).

It appears to me to be, instead, a problem of production or, more accurately, communication.  Specifically, I think it is related to the Confidence Heuristic.  A heuristic is a fancy word for a rule of thumb, but with a slight difference:  a rule of thumb is often learned (see the video below for an example). 

A heuristic, on the other hand, has developed over evolutionary time scales and is hardwired into the architecture of the brain.  The Confidence Heuristic says that, all other things being equal, we tend to accept the logic/reasoning/forecasts of other people who are confident in their logic/reasoning/forecasts.  We are biologically predisposed to believe those who are confident in their own beliefs.  What is more important is that studies have shown that this is not necessarily a bad rule.  People who are genuinely confident are often right.  

For example, I remember the afternoon I died for seven seconds (it was less dramatic than that sounds...).  Fortunately, I was in one of the best possible places to die for a brief period of time--a hospital.  I had suffered several dizzy spells the day before, been admitted for observation, and been hooked up to a portable EKG.  When my heart stopped due to sick sinus syndrome, the docs were able to see exactly what had happened.  Shortly after I came around, a cardio surgeon (whom I had never met) walked in with the readout, showed it to me, and said, "This buys you a pacemaker."

As they wheeled me to the OR, I remember asking the doctor, "How many of these have you done?"  She said, with absolute confidence, "Hundreds," and then she looked me dead in the eye and told me, "This is a piece of cake."

Her confidence in her skills was infectious.  I believed her, and because I did, I went into surgery with no worries and came out of it successfully.  She was correct to be confident as well.  She had, in fact, done hundreds of these surgeries, and for the last five years, this little piece of biotech (with its eight year battery!) has kept me alive without any real issues.  

Politicians, TV hucksters, and other con artists, on the other hand, may not know about the Confidence Heuristic but they sure know how to use it!  Speaking confidently and in absolute rather than nuanced terms is the hallmark of almost every political speech and all of the hours of editorial commentary masquerading as news shows.  Nuance is used to cast doubt on the other side's position while confidence is required to promote your own position.  
(Note:  This, coupled with Confirmation Bias and the Dunning-Kruger Effect, explains much of the internet.)
In other words, Gray Rhinos likely exist because of the way Gray Rhino communities of interest choose to talk about Gray Rhinos.  Measured tones, nuanced forecasts, and managed expectations are the language of science and (much of) academia.  Hyperbole, bold predictions, and showmanship generate the buzz, however.  

What to do if you find yourself working on a Gray Rhino problem?  Hiring a frontman to hype your rhino is likely excessive and can get you into real trouble (See Theranos and MIT Media Lab for a few cautionary tales).  That said, developing a relationship with the press, being able to explain your research in layman's terms, and celebrating the genuine "wins" in your field as they come along, seems to make sense.

Finally, if you do decide to go the frontman route (and remember, I don't recommend it), at least get a guy like this:

Monday, August 26, 2019

How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)

I am writing a series of posts about how to think about the future.  In case you missed the first two parts, you can find them here:

Part 1--Questions About Questions
Part 2--What Do You Control

These posts represent my own views and do not represent the official policy or positions of the US Army or the War College, where I currently work.

*******************

Former Director of the CIA, Mike Hayden, likes to tell this story:

"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—no one is asking us the questions that can yield such confidence. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." (Italics mine)
I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment.  No.  The #1 reason was the difficulty of the questions. 

Understanding why some questions are more difficult than others is incredibly important.  Difficult questions typically demand more resources--and have more consequences.  What makes it particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why.  I have written about this elsewhere (here and here and here, for example), and may have become a bit like the man in the  "What makes soup, soup?" video below...


No one, however, to my knowledge, has solved the problem of reliably categorizing questions by difficulty.

I have a hypothesis, however.

I think that the AI guys might have taken a big step towards cracking the code.  When I first heard about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there.  That was way back in 2011, though.  As I went looking for updates for this series of posts, I got really excited.  There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig, in their book, Artificial Intelligence:  A Modern Approach, may have gotten even closer to what is, essentially, a working definition of question difficulty.

Let me be clear here.  The AI community did not set out to figure out why some questions are more difficult than others.  They were looking to categorize AI tasks by difficulty.  My sense, however, is that, in so doing, they have inadvertently shone a light on the more general question of question difficulty.  Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine; a small scoring sketch follows the list):
  • Fully observable vs. partially observable -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.
  • Single agent vs. multi-agent -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.
  • Competitive vs. cooperative -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question.  Questions about enemies are inherently harder to answer than questions about allies.
  • Deterministic vs. stochastic -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)?  How much randomness is in the environment?
  • Episodic vs. sequential -- Questions about things that happen over time are more difficult than questions about things that happen once.
  • Static vs. dynamic -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.
  • Discrete vs. continuous -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world," spaces.
  • Known vs. unknown -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work.  
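Treated as a checklist, these eight criteria give you a rough, countable sense of difficulty.  The sketch below is only my illustration of that idea; the criteria come from Russell and Norvig, but the scoring scheme and the example question are invented.

```python
# The eight criteria above, expressed as the side that makes a question harder.
HARD_SIDE = [
    "partially observable",   # vs. fully observable
    "multi-agent",            # vs. single agent
    "competitive",            # vs. cooperative
    "stochastic",             # vs. deterministic
    "sequential",             # vs. episodic
    "dynamic",                # vs. static
    "continuous",             # vs. discrete
    "unknown",                # vs. known
]

def difficulty_score(flags: dict) -> str:
    """flags maps each criterion to True if the question sits on the hard side."""
    score = sum(bool(flags.get(criterion, False)) for criterion in HARD_SIDE)
    return f"{score}/8 criteria on the hard side"

# Hypothetical example question: "Will country X field an operational
# hypersonic weapon within five years?"
flags = {criterion: True for criterion in HARD_SIDE}
flags["unknown"] = False   # we know roughly how the underlying technology works
print(difficulty_score(flags))   # -> "7/8 criteria on the hard side"
```

A question that lands on the hard side of most of these criteria is a Major League question; one that lands on the easy side of most of them probably doesn't need a formal estimate at all.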
Why is this important to questions about the future?  Two reasons.  First, it is worth noting that most questions about the future, particularly those about things that are outside our control, fall at the harder rather than easier end of each of these criteria.  Second, understanding the specific reasons why these questions are hard also gives clues as to how to make them easier to answer.  

There is one more important reason why questions can be difficult.  It doesn't come from AI research.  It comes from the person (or organization) asking the question.  All too often, people either don't ask the "real" question they want answered or are incredibly unclear in the way they phrase their questions.  If you want some solutions to these problems, I suggest you look here, here and here.  

I was a big kid who grew up in a small town.  I only played Little League ball one year, but I had a .700 batting average.  Even when I was in my best physical condition as an adult, however, I doubt that I could have hit a foul tip off a major league pitcher.  Hayden is right.  Meaningful questions about things outside your control are Major League questions, hard sliders on the corner of the plate.  Understanding that, and understanding what makes these questions so challenging, is a necessary precondition to taking the next step--answering them.

Next:  How Should We Think About Answers?