tag:blogger.com,1999:blog-25697724329531208752024-03-17T23:03:49.313-04:00Sources And MethodsThinking about the future, and, more importantly, how to think about the future.Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.comBlogger917125tag:blogger.com,1999:blog-2569772432953120875.post-80455561595988037092024-02-05T13:57:00.001-05:002024-02-05T13:57:32.661-05:00The Battle of Moore's Chasm And Who Will Win The Next War<p><span style="font-family: georgia;">There is a battle going on right now. It is being fought by every military in the world. </span></p><p><span style="font-family: georgia;">Victory in this battle is crucial. The militaries on the winning side will likely be on the winning side of the next large-scale war. The losers will likely be forgotten, studied only for the mistakes they made.</span></p><p><span style="font-family: georgia;">This is the Battle of Moore's Chasm.</span></p><p><span style="font-family: georgia;">This battle is taking place everywhere. There are physical manifestations of it in Ukraine, the Taiwan Strait, and Gaza, but there are equally important conceptual and theoretical manifestations of it in the Pentagon, on Arbatskaya Square in Moscow, and deep inside the August 1 Building in Beijing.</span></p><p><span style="font-family: georgia;">What this battle is about and how to win it are the subjects of this article.</span></p><p><span style="font-family: arial;"><b>What Is The Battle Of Moore's Chasm?</b></span></p><p><span style="font-family: georgia;">To understand this battle it is necessary first to travel back to 1962. It was then that a young professor of rural sociology, <a href="https://en.wikipedia.org/wiki/Everett_Rogers">Everett Rogers</a>, published what was to become the second most cited book in all the social sciences, <i><a href="https://www.google.com/books/edition/Diffusion_of_Innovations_5th_Edition/9U1K5LjUOwEC?hl=en">Diffusion of Innovations</a>. 
</i> </span></p><p><span style="font-family: georgia;">While the book contains much that is still relevant today, the part that is important to the current battle is the idea that the "market" for an idea, an innovation, a new concept, or a technology generally follows a bell curve and that this bell curve can be divided into five major sections of users (See chart below): Innovators, Early Adopters, Early Majority, Late Majority and Laggards. </span></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO1rm_QntzWavEmi5LKZJpZAPA4ki2oq5Bq7hRBTlJUWIh3wBp3aFeNusCkOLalWWnj4m278iKQJI-IG4m3xF39rAGP05U3jMtnuqcsM6wHLMuvTxYKT4VltVhGmp6KDCl4JRT-7-7ZgG2mdd6j4Lze9ueCTZdyZUZ4l3PXEay34t4yGlMtm1S3raRWK4/s724/Screenshot%202024-01-08%20124525.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="540" data-original-width="724" height="478" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhO1rm_QntzWavEmi5LKZJpZAPA4ki2oq5Bq7hRBTlJUWIh3wBp3aFeNusCkOLalWWnj4m278iKQJI-IG4m3xF39rAGP05U3jMtnuqcsM6wHLMuvTxYKT4VltVhGmp6KDCl4JRT-7-7ZgG2mdd6j4Lze9ueCTZdyZUZ4l3PXEay34t4yGlMtm1S3raRWK4/w640-h478/Screenshot%202024-01-08%20124525.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Source: https://en.wikipedia.org/wiki/Diffusion_of_innovations</td></tr></tbody></table><br /><span style="font-family: georgia;">Fast forward to 1989, when two researchers at the famous consulting firm, <a href="https://computerhistory.org/profile/regis-mckenna/">Regis McKenna</a>, Inc. (RMI), <a href="https://diffusion-research.org/research_articles/chasm-theory-development/">Warren Schirtzinger and James Lee</a>, hypothesized and then demonstrated that there was a "chasm" between the early adopters and the early majority. 
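</span><p><span style="font-family: georgia;">Rogers' five categories are conventionally carved out of a normal curve at one and two standard deviations from the mean adoption time. As a minimal sketch (the percentages below follow Rogers' standard cuts, which are not spelled out in the post itself), the familiar category shares can be recovered directly from the normal CDF:</span></p>

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Rogers cuts the adoption bell curve at the mean and at +/- 1 and 2
# standard deviations; each slice is one adopter category.
CUTS = [
    ("Innovators",     float("-inf"), -2.0),
    ("Early Adopters", -2.0, -1.0),
    ("Early Majority", -1.0,  0.0),
    ("Late Majority",   0.0,  1.0),
    ("Laggards",        1.0,  float("inf")),
]

for name, lo, hi in CUTS:
    share = norm_cdf(hi) - norm_cdf(lo)
    print(f"{name:>14}: {share:5.1%}")
```

<p><span style="font-family: georgia;">Rounded, these come out to the 2.5%, 13.5%, 34%, 34%, and 16% figures usually quoted alongside the chart above.</span></p><span style="font-family: georgia;">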
</span><p></p><p><span style="font-family: georgia;">This chasm existed largely due to the different motivations of the members of these groups. Innovators and Early Adopters are very much into cool, new things. They tend to be more enamored with the potential of a new technology or process than they are with the utility or scalability of these products. Early and Late Majority motivations, on the other hand, typically have more to do with solving particular problems and doing so at the lowest cost and at a scale that is appropriate for their organization.</span></p><p><span style="font-family: georgia;">Another researcher at RMI, <a href="https://en.wikipedia.org/wiki/Geoffrey_Moore">Geoffrey Moore</a>, picked up on the idea and, in 1991, published what was to become one of the most influential business books ever, <a href="https://www.amazon.com/Crossing-Chasm-3rd-Disruptive-Mainstream-ebook/dp/B00DB3D81G?ref_=ast_author_mpb"><i>Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers</i></a>. Now in its third edition, it has sold over a million copies and is considered by Inc. magazine to be one of the top ten marketing books ever written. 
<a href="https://thinkinsights.net/strategy/crossing-the-chasm/">Think Insights has a good article</a> that lays out the main ideas in detail, but for our purposes, their chart showing the chasm is sufficient:</span></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHjBlqb4J7qz8XK1xWZclqBp21F_3Y6ppk9UP01CnKk0F8oyngjM1UMjs-KLxpxjJeLLVIGWiLKvXKlo8Pn_K1Pf3WotD-PqGP9k2ARUf1hwXK62j4mn-Qx82fVE4hJzWPbbXqSOah_-heO4d-ChmujgN8yN1yFsC7Y_2vQhqDBBPrsrImsXUAN40H_ug/s1200/Crossing-the-Chasm-1200x675.jpg" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="675" data-original-width="1200" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHjBlqb4J7qz8XK1xWZclqBp21F_3Y6ppk9UP01CnKk0F8oyngjM1UMjs-KLxpxjJeLLVIGWiLKvXKlo8Pn_K1Pf3WotD-PqGP9k2ARUf1hwXK62j4mn-Qx82fVE4hJzWPbbXqSOah_-heO4d-ChmujgN8yN1yFsC7Y_2vQhqDBBPrsrImsXUAN40H_ug/w640-h360/Crossing-the-Chasm-1200x675.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><div><span style="font-family: georgia; font-size: xx-small;"><span style="background-color: white; color: #333333; letter-spacing: -0.3px; text-align: justify;">Think Insights (January 3, 2024) </span><b style="background-color: white; box-sizing: border-box; color: #333333; letter-spacing: -0.3px; text-align: justify;">Crossing The Chasm – Technology Adoption Lifecycle</b><span style="background-color: white; color: #333333; letter-spacing: -0.3px; text-align: justify;">. 
Retrieved from </span><em style="background-color: white; box-sizing: border-box; color: #333333; letter-spacing: -0.3px; text-align: justify;"><a href="https://thinkinsights.net/strategy/crossing-the-chasm/" style="box-sizing: border-box; text-decoration-line: var(--awb-link_decoration_line); text-decoration-style: var(--awb-link_decoration_style); text-decoration-thickness: var(--awb-link_decoration_thickness); text-underline-offset: var(--awb-link_decoration_underline_offset); transition-duration: 0.2s; transition-property: text-decoration, color, background-color, border-color; transition-timing-function: linear;">https://thinkinsights.net/strategy/crossing-the-chasm/</a></em><span style="background-color: white; color: #333333; letter-spacing: -0.3px; text-align: justify;">.</span></span></div></td></tr></tbody></table><br /><span style="font-family: georgia;">Most importantly, Moore's Chasm has become synonymous with the place where good ideas go to die. Whether it is a lack of capital, innovator inexperience, or an inability to get traction in the much more lucrative Early and Late Majority markets, failure to bridge the chasm leads, at best, to relegation to a niche market and, at worst, to inevitable decline and bankruptcy. </span><p></p><p><span style="font-family: georgia;">While almost all of these ideas and the literature accompanying the chasm have come out of business journals, the chasm has direct and immediate parallels with issues faced by militaries around the world. 
Indeed, <a href="https://www.nationaldefensemagazine.org/articles/2023/10/9/ausa-news-new-chief-of-staff-lays-out-army-focus-areas">Secretary of the Army Christine Wormuth recently said</a>:</span></p><p><span style="font-family: georgia;"></span></p><blockquote><span style="font-family: georgia;">“This is a crucial moment for the Army to summon our ingenuity, to innovate and invest in emerging technologies, to test and develop in uncharted areas like artificial intelligence and contested domains like space and cyber, to reshape and transform the force to be more adaptable and flexible.”</span></blockquote><p><span style="font-family: georgia;">Yet, across the globe, the gap between how much militaries want to innovate and how much they are actually innovating seems to be heading in the wrong direction. As the <a href="https://www.bcg.com/publications/2023/defense-innovation-readiness-gap-widening">Boston Consulting Group highlighted in its report last year on the defense innovation readiness gap</a>:</span></p><p></p><blockquote><span style="font-family: georgia;">"One of (the report's) most important findings is that the defense innovation readiness gap significantly increased in the year since our first study. Across 10 of the 11 dimensions of readiness assessed, MoDs failed to match their 2021 results, by an average of 8%."</span></blockquote><p><span style="font-family: georgia;">Moreover, there is some evidence to suggest that this chasm exists within the US Department of Defense as well. <a href="https://cset.georgetown.edu/publication/ending-innovation-tourism/">A recent report by the Center for Security and Emerging Technology</a> put it this way:</span></p><p><span style="font-family: georgia;"></span></p><blockquote><span style="font-family: georgia;">"However, under the DOD’s current organizational structure, <i>defense innovation is disconnected from defense procurement. 
</i>This division limits innovation offices’ ability to affect technological change across the military and excuses procurement offices from integrating cutting-edge capabilities into major systems and platforms." (Italics mine)</span></blockquote><p></p><p></p><p><span style="font-family: georgia;">The Battle of Moore's Chasm is real, and right now, no one is winning.</span></p><p><span style="font-family: arial;"><b>Who Will Fight This Battle?</b></span></p><p><span style="font-family: georgia;">While there are a number of possible ways to win the battle (the CSET paper, for example, references three), all of these courses of action require the right people to implement them. Acquisition officers, policy wonks, commanders, and others all have, and will continue to have, roles to play. The most important warrior in this battle, however, is the </span><i style="font-family: georgia;">innovation champion.</i></p><p><span style="font-family: georgia;">Developed at about the same time as Diffusion of Innovations Theory, the idea of an innovation champion was first put forward by <a href="https://en.wikipedia.org/wiki/Donald_Sch%C3%B6n">Dr. Donald Schön</a> in the Harvard Business Review article, <a href="https://hollisarchives.lib.harvard.edu/repositories/11/archival_objects/3025523">"Champions for radical new inventions."</a> Since then, thousands of articles (Google Scholar says about 2140) have been written about the role, traits, and importance of innovation champions in driving modernization and incorporating emerging technologies across a wide variety of fields. 
</span></p><p><span style="font-family: georgia;">All of the more modern definitions of innovation champion are similar to the one <a href="https://scholarspace.manoa.hawaii.edu/bitstreams/6bd04dc5-94e9-45f7-ae75-0f5ccbc8cff9/download">developed by researchers at the German Graduate School of Management and Law: </a> "an </span><span style="font-family: georgia;">innovation champion is an individual or a group of </span><span style="font-family: georgia;">individuals who is willing to take risks to </span><span style="font-family: georgia;">enthusiastically promote innovations through the </span><span style="font-family: georgia;">various stages of the development process."</span></p><p><span style="font-family: georgia;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: georgia;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1iIlPPJrm3keuG6wQP_C67R6auAbhzIEvjf8RoC0ZT_7L1VY3QRBIwBpW9BQ2gab2hdz9MIgi9qZGX50W_SHW2YH-tXU-mbz1GUW6qzl67wiENklkxw_vjovaPMcrf6-Z1GW1hO7GFNY4xsChgvzMz8rdlVA_S-A8xawLaAQxOCS6516qQxxxRoKDRKs/s328/Screenshot%202024-01-18%20101731.png" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="274" data-original-width="328" height="334" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1iIlPPJrm3keuG6wQP_C67R6auAbhzIEvjf8RoC0ZT_7L1VY3QRBIwBpW9BQ2gab2hdz9MIgi9qZGX50W_SHW2YH-tXU-mbz1GUW6qzl67wiENklkxw_vjovaPMcrf6-Z1GW1hO7GFNY4xsChgvzMz8rdlVA_S-A8xawLaAQxOCS6516qQxxxRoKDRKs/w400-h334/Screenshot%202024-01-18%20101731.png" width="400" /></a></span></div><span style="font-family: georgia;"><br />This same paper identified five skills, seven traits, and three different kinds of knowledge that were characteristic of innovation champions based on a systematic literature analysis looking at 85 of the most influential journal articles on the topic (See image to the left).</span><p></p><p></p><p><span><span 
style="font-family: georgia;">The approach here is similar to the <a href="https://rdl.train.army.mil/catalog-ws/view/100.ATSC/72D4C9DC-B1F1-45F7-8BB0-148CBA9AF247-1428690957971/adp6_22.pdf">approach taken by the US Army in teaching leadership.</a> With leadership, the Army focuses on Attributes (roughly equivalent to Traits in the chart to the left) and Competencies (roughly equivalent to Skills and Knowledge in the chart). A fundamental premise of Army leadership training is that "</span></span><span style="font-family: georgia;">most people have leadership potential and </span><span style="font-family: georgia;">can learn to be effective leaders." The same could be said, perhaps, for innovation champions.</span></p><p><span style="font-family: georgia;">While the approach is similar, there is not a one-to-one correlation between what the Army thinks makes a good leader and what is necessary for an innovation champion (See chart below and to the right).</span></p><p><span style="font-family: georgia;"></span></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAyjDWeuJds9TedChIVQyWNUUnCB_OjjRPML280fCAuTtoOVWPwlM8uc3qfDhhImoMY9UgFW4p3M7nDPvNt4DejZIFfHyt-Pv6IwBNZSYqIhWy_GrnjhtlhLCRxwoNHI0_AFQWsrA6CENZM4AeOhxub33oLRufOiVfjRnQXg79ryujBtOmeMoy8otQFHI/s666/Screenshot%202024-01-19%20110928.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="666" data-original-width="650" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAyjDWeuJds9TedChIVQyWNUUnCB_OjjRPML280fCAuTtoOVWPwlM8uc3qfDhhImoMY9UgFW4p3M7nDPvNt4DejZIFfHyt-Pv6IwBNZSYqIhWy_GrnjhtlhLCRxwoNHI0_AFQWsrA6CENZM4AeOhxub33oLRufOiVfjRnQXg79ryujBtOmeMoy8otQFHI/w390-h400/Screenshot%202024-01-19%20110928.png" width="390" 
/></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;">Source: ADP 6-22, ARMY LEADERSHIP <br />AND THE PROFESSION, 2019</span><br /> </td></tr></tbody></table><span style="font-family: georgia;">In short, while routine Army leadership training likely covers many of the attributes of an innovation champion, it is equally likely that there are several gaps that will need to be filled if the Army is to have the warriors it needs for the ongoing battle.</span><p></p><p><span style="font-family: georgia;">Specifically, having the minimal technical knowledge necessary to champion particular innovations jumps out as one such requirement. Many soldiers are so deeply involved in the day-to-day activities of running the Army or fighting in the country's conflicts that they have little time for understanding arcane emerging technologies such as 3D printing, quantum computing, synthetic biology, 6G and 7G telecommunications systems, augmented reality, and others. Yet decisions, potentially costing billions of dollars, regarding the development, testing, and fielding of these technologies will need to be made regularly and soon if the US Army's technical advantage is to remain.</span></p><p><span style="font-family: georgia;">Likewise, would-be innovation champions will need to learn the transformational leadership skills necessary to manage teams of experts from disparate fields. Most military officers have grown up in an environment similar to <a href="https://www.gutenberg.org/files/1232/1232-h/1232-h.htm#chap04">Machiavelli's Kingdom of the Turk</a>, which "is governed by one lord, the others are his servants; and, dividing his kingdom into sanjaks, he sends there different administrators, and shifts and changes them as he chooses." 
</span></p><p><span style="font-family: georgia;">This hierarchical organization with its emphasis on commanders and their intent suddenly gives way when confronted by interdisciplinary teams of experts and contractors in the diverse technical fields common to innovation activities. Here the comfortable chain of command often is replaced with something akin to Machiavelli's Kingdom of the Franks, where officers find themselves</span><span style="font-family: georgia;"> "</span><span style="font-family: georgia;">placed in the midst of an ancient body of lords, acknowledged by their own subjects, and beloved by them; they have their own prerogatives, nor can the king take these away except at his peril." Leading innovation activities, in short, requires different skills than leading at the tactical and operational levels.</span></p><p><b><span style="font-family: arial;">Where Will These Champions Come From?</span></b></p><p><span style="font-family: georgia;">Some of these Skills and Knowledge categories also typically require a certain level of experience. For example, all officers understand their organization to a certain extent, but it takes a relatively senior officer to have a feel for the entire enterprise. Likewise, officers, as they move from one assignment to another, develop useful networks, but the kind of depth and breadth necessary to lead innovation activities typically requires a deeper <a href="https://en.wikipedia.org/wiki/Rolodex">rolodex</a>. </span></p><p><span style="font-family: georgia;">Officers with the experience, organizational understanding, and networks to do this kind of work are generally at the level of Lieutenant Colonel and Colonel, the O5s and O6s of the Army. 
</span><a href="https://www.ausa.org/articles/staff-colonels-armys-innovation-engines" style="font-family: georgia;">LTC Richard Brown put it bluntly in his essay for AUSA</a><span style="font-family: georgia;">, "</span><span style="font-family: georgia;">Staff colonels are the Army’s innovation center of gravity."</span></p><p><span style="font-family: georgia;">Officers this senior can often come with some baggage as well, however. For example, unless an officer's career has been carefully managed, it is certainly possible that some of the essential Traits of an innovation champion, such as creativity, risk-taking, or optimism, have been suppressed or even beaten out by an unforgiving system. Fortunately, </span><span style="font-family: georgia;">the right training and environment allow </span><span style="font-family: georgia;">much of this damage to be repaired. Creativity, for example, </span><a href="https://hbr.org/2012/12/reclaim-your-creative-confidence" style="font-family: georgia;">"is something you practice...not just a talent you are born with."</a></p><p><span style="font-family: georgia;">All this--filling in technical knowledge and leadership gaps while simultaneously re-energizing officers closer to the end of their careers than to the beginning--is, in military terms, a "heavy lift," a difficult, perhaps impossible, job. Making it even more challenging is the fact that there is only one realistic opportunity to do it, and that is at a senior service college. In the Army's case, that is the US Army War College. </span></p><p><span style="font-family: georgia;">The War College, as it turns out, is the critical <a href="https://en.wikipedia.org/wiki/Choke_point">chokepoint</a> in the Battle of Moore's Chasm.</span></p><p><span style="font-family: georgia;">The 10-month stint at the War College comprises the last in-depth, formal military education most senior officers will receive. 
After this, they typically move on to senior staff positions or take command of brigade-sized units. A relative few of these graduates will go on to become generals, and most will complete only one or two more assignments before retiring. If officers don't get this kind of specialized education and training at the War College, they are unlikely to get it once they are back in the field.</span></p><p><span style="font-family: georgia;">Fortunately, I think the War College generally understands this, and I am involved in two specific activities that are deliberately designed to address these challenges: the Futures Seminar and the Futures Lab.</span></p><p><span style="font-family: georgia;">The Futures Seminar uses real questions from real senior defense officials to jumpstart a year-long project designed not only to delve deeply into the world of technology and more generalized "futures-thinking" but also to give students practical skills in managing highly diverse teams of experts as they seek to integrate their thinking in pursuit of the best possible answer to their sponsor's question.</span></p><p><span style="font-family: georgia;">The Futures Lab also seeks to fill the tech knowledge gap but in a more hands-on way, allowing students an opportunity to spend as much or as little time as they want learning the ins and outs of technologies such as 3D printing, drones, virtual reality, and robots. 
With </span><span style="font-family: georgia;">a wide variety of technologies and</span><span style="font-family: georgia;"> expert assistance available, the Lab creates an environment designed to re-awaken creativity, enthusiasm, and risk-taking.</span></p><p><b><span style="font-family: arial;">Who will win?</span></b></p><p><span style="font-family: georgia;">Andrew Krepinevich, a military strategist and award-winning author, in his recent book, <a href="https://www.amazon.com/Origins-Victory-Disruptive-Innovation-Determines-ebook/dp/B0BTDCMFV4/ref=tmm_kin_swatch_0?_encoding=UTF8&dib_tag=se&dib=eyJ2IjoiMSJ9.C70UHxxxLsOlmp6JfpU9r86PvI5l6JmxgLdxTLMHbfe9KcsMW21lJChWijnfZEMhz2pcdZahyWbpUfD6UYbAnShpFt2BHv9WqgxWwnK8Ewhxl5gcCDD4B8QS75UYOf21th8f5UfRyiHWy-d6DJdkWk260aCcw1QjPvegiOVxiJ8wFz6lSAh4wm-_d0udFf5rB9odgw_G5ZKJeqN7iXILTzcHV4xE_K2sPuWDs9iKyZM.FMFIwWNryut5zle80Hc3frOk6reXGUwGJfoWumMYc_I&qid=1705692029&sr=8-1">The Origins of Victory: How Disruptive Military Innovation Determines the Fates of Great Powers</a></span><span style="font-family: georgia;">, states:</span></p><p><span style="font-family: georgia;"></span></p><blockquote><span style="font-family: georgia;">"Viewed from a lagging competitor’s perspective, failing to keep pace in exploiting the potential of an emerging military revolution risks operating at a severe disadvantage. Consequently, the common challenge for all major-power militaries in a period of military revolution is to be the first to identify its salient characteristics and exploit its potential. 
Silver medals are not awarded to those who come in second."</span></blockquote><p></p><p><span style="font-family: georgia;">If the side that innovates best, that not only employs emerging technologies</span><span style="font-family: georgia;"> but also combines them into a system where the whole can be more than the sum of its parts, is the side that wins, then the crucial battle, </span><a href="https://www.army.mil/article/269755/forces_command_leaders_focus_on_warfighting_army_readiness_people_training_and_modernization" style="font-family: georgia;">the first fight</a><span style="font-family: georgia;">, is the Battle of Moore's Chasm, and the US Army will need trained and ready innovation champions</span><span style="font-family: georgia;"> to win it.</span></p><p><span style="font-family: georgia;"><i><span>Note: </span><span style="background-color: white; border: 0px; margin: 0px; outline: 0px; padding: 0px; text-shadow: none;">The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.</span><span style="background-color: white;"> </span></i></span></p>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com2tag:blogger.com,1999:blog-2569772432953120875.post-51645261960092207452023-12-12T09:00:00.001-05:002023-12-12T09:00:00.138-05:00Forget Artificial Intelligence. What We Need Is Artificial Wisdom<p><span style="font-family: georgia;">I have been thinking a lot about what it means to be "wise" in the 21st Century.</span></p><p><span style="font-family: georgia;">Wisdom, for many people, is something that you accrue over a lifetime. "Wisdom is the daughter of experience" insisted Leonardo Da Vinci. Moreover, the sense that experience and wisdom are linked seems universal. 
There's an African proverb, for example, of which I am particularly fond that claims, "When an old person dies, a library burns to the ground." </span></p><p><span style="font-family: georgia;">Not all old people are wise, of course. Experience sometimes erodes a person, like the steady drip-drip of water on a stone, such that, in the end, there is nothing but a damn fool left. We have long had sayings about that as well.</span></p><p><span style="font-family: georgia;">Experience, then, probably isn't the only way to become wise and may not even be a necessary pre-condition for wisdom. How then to define it?</span></p><p><span style="clear: right; color: black; float: right; margin-bottom: 1em; margin-left: 1em;"><span style="font-family: georgia;"><img border="0" data-original-height="260" data-original-width="563" height="185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHUtZwZlemdhFuUWv-fTH3u3C2LXVO20DSFyIqYlbLJYuamAFRuxcFRab6wByhsVbt0wfwAvCVvcSjN2OuaSJIzJI92525_hywEV55T4VThAN5AXxinMoYlzv8LU4lCS0YnqeeyeMvgUGjzh4Vlhy4vikNDAG1X4LgQFjsGrd-E9hf7pEqtNkf0-SR/w400-h185/Screenshot%202022-04-18%20140529.png" width="400" /></span></span><span style="font-family: georgia;">One thing I do know is that people still want wisdom, at least in their leaders. I know this because I asked my contacts on LinkedIn about it. 100 responses later virtually everyone said they would rather have a wise leader than an intelligent one. </span></p><p><span style="font-family: georgia;">These results suggest something else as well: That people know wisdom when they see it. In other words, the understanding of what wisdom is or isn't is not something that is taught but rather something that is learned implicitly, by watching and evaluating the actions of ourselves and others.</span></p><p><span style="font-family: georgia;"></span></p><p><span style="font-family: georgia;">Nowhere is this more obvious than in the non-technical critiques of artificial intelligence (AI). 
All of these authors seem nervous, even frightened, about the elements of humanity that are missing in the flawed but powerful versions of AI that have recently been released upon the world. The AIs, in their view, seem to lack <a href="https://www.cambridge.org/core/journals/journal-of-management-and-organization/article/abs/understanding-the-elements-and-outcomes-of-executive-wisdom-a-strategic-approach/C2501B5E9C907DAF85EFE2EBE627A20A">moral maturity, reflective strategic decision-making, and an ability to manage uncertainty</a> and no one, least of all the authors of these critiques, wants AIs without these attributes to be making decisions that might change, well, everything. This angst seems to be a shorthand for a simpler concept, however: We want these AIs to not just be intelligent, but to be wise.</span></p><p><span style="font-family: georgia;">For me, then, a good bit of the conversation about AI safety, <a href="https://en.wikipedia.org/wiki/AI_alignment">AI alignment</a>, and <a href="https://en.wikipedia.org/wiki/Effective_altruism">"effective altruism"</a> comes down to how to define wisdom. I'm not a good enough philosopher (or theologian) to have the answer to this but I do have some hypotheses.</span></p><p><span style="font-family: georgia;">First, when I try to visualize a very intelligent person who has only average wisdom, I imagine a person who knows a large number of things. Their knowledge is encyclopedic but their ability to pull things together is limited. They lack <a href="https://www.darpa.mil/program/machine-common-sense">common sense</a>. In contrast, when I try to imagine someone who is very wise but of just average intelligence, I imagine someone who knows considerably less but can see the connections between things better and, as a result, <a href="https://suntzusaid.com/book/13/4">can envision second and third order consequences</a>. 
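</span></p><p><span style="font-family: georgia;">One toy way to make this distinction concrete (my illustration only; the event chain and function below are invented for the example) is to treat knowledge as a graph of facts and "wisdom" as how many hops of consequences can be traced along its edges:</span></p>

```python
# Toy sketch: "intelligence" as how many facts are stored,
# "wisdom" as how far consequences can be traced between them.
knowledge = {
    "drought": ["crop failure"],
    "crop failure": ["food prices rise"],
    "food prices rise": ["unrest"],
    "unrest": [],
}

def consequences(graph: dict, start: str, depth: int) -> set:
    """Everything reachable from `start` in at most `depth` hops."""
    frontier, seen = {start}, set()
    for _ in range(depth):
        frontier = {nxt for node in frontier for nxt in graph.get(node, [])}
        seen |= frontier
    return seen

# A first-order view stops at the immediate effect...
print(consequences(knowledge, "drought", 1))   # {'crop failure'}
# ...while a "wiser" traversal surfaces second- and third-order effects.
print(consequences(knowledge, "drought", 3))
```

<p><span style="font-family: georgia;">On this reading, adding nodes makes the system more "intelligent," while adding and traversing edges makes it "wiser."</span></p><p><span style="font-family: georgia;">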
The image below visualizes how I see this difference:</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMIUTnt4yjSf4ytN7mNXRDmAtRe1F2D57AirGX95E1zgu53tJCWUj_PYDd6f3ibNK94Yti7BcNViZ6LWfX6jsxNKo05BXPLBZMjnwMASonAFtP1THvq75iNJtoklFyXB807DZ6MRYdRwvvEgRC7poZjRLX8IGZ0mUP6yhGH1lLEbdvzJ7xgKtsfSn-dBk/s960/Generative%20AI%20(e.g.%20ChatGPT,%20etc.)%20in%20the%20PME%20Classroom_%20%20Lessons%20Learned.png" style="margin-left: 1em; margin-right: 1em;"><span style="font-family: georgia;"><img border="0" data-original-height="540" data-original-width="960" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMIUTnt4yjSf4ytN7mNXRDmAtRe1F2D57AirGX95E1zgu53tJCWUj_PYDd6f3ibNK94Yti7BcNViZ6LWfX6jsxNKo05BXPLBZMjnwMASonAFtP1THvq75iNJtoklFyXB807DZ6MRYdRwvvEgRC7poZjRLX8IGZ0mUP6yhGH1lLEbdvzJ7xgKtsfSn-dBk/w640-h360/Generative%20AI%20(e.g.%20ChatGPT,%20etc.)%20in%20the%20PME%20Classroom_%20%20Lessons%20Learned.png" width="640" /></span></a></div><span style="font-family: georgia;">This visualization, in turn, suggests where we might find the tools to better define artificial wisdom, in <a href="https://en.wikipedia.org/wiki/Network_science">network research</a>, <a href="https://en.wikipedia.org/wiki/Graph_theory">graph theory</a>, and <a href="https://en.wikipedia.org/wiki/Computational_social_science">computational social science</a>.</span><p></p><p><span style="font-family: georgia;">I also think there are some hints lurking in biology, psychology, and neuroscience. Specifically in the study of cognitive biases. Over the last 30 years or so, in many disciplines cognitive biases have come to be seen as "bad things"--predictable human failures in logical reasoning. 
Recently, though, <a href="https://www.sscnet.ucla.edu/comm/haselton/unify_uploads/files/evolution%20of%20cognitive%20bias%20for%20Buss%20handbook%202E%20final.pdf">some of the literature has started to question this interpretation.</a> If cognitive biases are so bad, if they keep us from making rational decisions, then why aren't we all dead? Why haven't evolutionary pressures weeded out the illogical? </span></p><p><span style="font-family: georgia;">If you accept the premise that cognitive biases evolved in humans because they were useful (even if only on the savannahs of East Africa), then it raises the question, "What did they help us do?"</span></p><p><span style="font-family: georgia;">My favorite attempt at answering this question is the <a href="https://en.wikipedia.org/wiki/Cognitive_bias#/media/File:Cognitive_Bias_Codex_-_180+_biases,_designed_by_John_Manoogian_III_(jm3).jpg">Cognitive Bias Codex</a> (See image below).</span></p><p></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglT-ro49Hp9OgTIzvjNObzv8uf3VzZOFzypzoVQ_s4hG6FdRknlPatU82jpadHvZg2dD8G8Krc7Wrq6gy8tM3pNpaMGC7kQrE2GT7CASaJKyl5BKQapuVY5-LhZxXGfnWtb_KD3sA3IMG5VlHg_SDOdny40sHHWpYQxDvBNS6GY6SxFJPcb2aey9VHBvM/s1351/Cognitive_Bias_Codex_-_180+_biases,_designed_by_John_Manoogian_III_(jm3).jpg" style="margin-left: auto; margin-right: auto;"><span style="font-family: georgia;"><img border="0" data-original-height="1080" data-original-width="1351" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglT-ro49Hp9OgTIzvjNObzv8uf3VzZOFzypzoVQ_s4hG6FdRknlPatU82jpadHvZg2dD8G8Krc7Wrq6gy8tM3pNpaMGC7kQrE2GT7CASaJKyl5BKQapuVY5-LhZxXGfnWtb_KD3sA3IMG5VlHg_SDOdny40sHHWpYQxDvBNS6GY6SxFJPcb2aey9VHBvM/w640-h512/Cognitive_Bias_Codex_-_180+_biases,_designed_by_John_Manoogian_III_(jm3).jpg" 
width="640" /></span></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: georgia;">By Jm3 - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=51528798</span></td></tr></tbody></table><span style="font-family: georgia;"><br /><span>Here the authors grouped all of the known cognitive biases into four major categories sorted by what they <i>helped</i> us do:</span></span><p></p><p></p><ul style="text-align: left;"><li><span style="font-family: georgia;">What should we remember?</span></li><li><span style="font-family: georgia;">What to do when we have too much information</span></li><li><span style="font-family: georgia;">What to do when there is not enough meaning</span></li><li><span style="font-family: georgia;">What to do when we need to act fast</span></li></ul><p></p><p><span style="font-family: georgia;">Interestingly, all of these are now active areas of research in the AI community (For examples, see <a href="https://fis.uni-bamberg.de/server/api/core/bitstreams/1179f7c8-3e2c-49fd-86ab-d98e8fec55c0/content">Intentional Forgetting in Artificial Intelligence Systems: Perspectives and Challenges</a> and <a href="https://link.springer.com/article/10.1007/s13218-018-0566-4">Intentional Forgetting in Distributed Artificial Intelligence</a>). </span></p><p><span style="font-family: georgia;">Even the need to act fast, which seems like something at which AI excels, becomes more about wisdom than intelligence when decomposed. 
Consider some of the Codex's sub-categories within the need to act fast:</span></p><p></p><ul style="text-align: left;"><li><span style="font-family: georgia;">We favor simple-looking options and complete information over complex, ambiguous options.</span></li><li><span style="font-family: georgia;">To avoid mistakes, we aim to preserve autonomy and group status, and avoid irreversible decisions.</span></li><li><span style="font-family: georgia;">To get things done, we tend to complete things we've invested time and energy in.</span></li><li><span style="font-family: georgia;">To stay focused, we favor the immediate, relatable thing in front of us.</span></li><li><span style="font-family: georgia;">To act, we must be confident we can make an impact and feel what we do is important.</span></li></ul><p></p><p><span style="font-family: georgia;">All of these seem to have more to do with wisdom than intelligence. Furthermore, true wisdom would be most evident in knowing when to apply these rules of thumb and when to engage more deliberative <a href="https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow">System 2 skills</a>.</span></p><p><span style="font-family: georgia;">As I said, these are just hypotheses, just guesses, based on how I define wisdom. Despite having thought about it for quite some time, I am virtually certain that I still don't have a good handle on it.</span></p><p><span style="font-family: georgia;">But that is not to say that I don't think there is something there. Even if only used to help communicate to non-experts the current state of AI (e.g. "Our AIs exhibit some elements of general intelligence but very little wisdom"), it can, perhaps, help describe the state of the art more clearly while also driving research more directly. 
</span></p><p><span style="font-family: georgia;">In this regard, it is also worth noting that modern AI dates back to at least the 1950's, and that it has gone through two full-blown <a href="https://en.wikipedia.org/wiki/AI_winter">AI "winters"</a> in which most scientists and funders thought that AI would never go anywhere. In other words, it has taken many years and been a bit of a roller coaster ride to get to where we are today. It would seem unrealistic to expect artificial wisdom to follow a different path, but it is, I would argue, a path worth taking.</span></p><p><span style="font-family: georgia;"><i><span>Note: </span><span style="background-color: white; border: 0px; margin: 0px; outline: 0px; padding: 0px; text-shadow: none !important;">The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government.</span><span style="background-color: white;"> </span></i></span></p>Kristan J. 
Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-18945138384560044602023-10-30T09:30:00.008-04:002023-10-30T09:30:00.155-04:00The Catch 22 Of Generative AI<p></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqzeRqpCVgDWy1JaOrAyuCoabJa1Hy9n3K5C7xy3AzTIyEMGThUr62h6s4MDZS4ppmBo3aIDb03zjDmfRvWKwpnBOjsChayecTUIRmxxioxjJxOmLDe68j6hH6u5jTVLdBMMY_iC5VN6cIvaaAGtNo2HUAMWgyJRr6yjFHibgus4f_w03N6U_evDLcyvs/s1024/kwheaton_a_true_3d_chart_with_an_x_y_and_a_z_axis_done_in_the_s_5ede3c14-1d26-4732-8310-a5368a4019f9.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1024" data-original-width="1024" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqzeRqpCVgDWy1JaOrAyuCoabJa1Hy9n3K5C7xy3AzTIyEMGThUr62h6s4MDZS4ppmBo3aIDb03zjDmfRvWKwpnBOjsChayecTUIRmxxioxjJxOmLDe68j6hH6u5jTVLdBMMY_iC5VN6cIvaaAGtNo2HUAMWgyJRr6yjFHibgus4f_w03N6U_evDLcyvs/w400-h400/kwheaton_a_true_3d_chart_with_an_x_y_and_a_z_axis_done_in_the_s_5ede3c14-1d26-4732-8310-a5368a4019f9.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>A true 3D chart done in the style of <br />Leonardo Da Vinci (Courtesy MidJourney)</i></td></tr></tbody></table><span style="font-family: georgia;">I have always wanted to be able to easily build true 3D charts. Not one of those imitation ones that just insert a drop shadow behind a 2D column and call it "3D," mind you. I am talking about a true 3D chart with an X, Y and Z axis. 
While I am certain that there are proprietary software packages that do this kind of thing for you, I'm cheap and the free software is either clunky or buggy, and I don't have time for either.</span><p></p><p><span style="font-family: georgia;">I was excited, then, when I recently <a href="https://youtu.be/wU1cQcQFimw?si=yis_OcXaFu338Zef">watched a video</a> that claimed that ChatGPT could write Python scripts for <a href="https://www.blender.org/">Blender</a>, the popular open source animation and 3D rendering tool. I barely know how to use Blender and do not code in Python at all, but am always happy to experiment with ChatGPT.</span></p><p><span style="font-family: georgia;">Armed with very little knowledge and a lot of hope, I opened up ChatGPT and asked it to provide a Python script for Blender that would generate a 3D chart with different colored dots at various points in the 3D space. I hit enter and was immediately rewarded with what looked like 50 or so lines of code doing precisely what I asked!</span></p><p><span style="font-family: georgia;">I cut and pasted the code into Blender, hit run, and...I got an error message. So, I copied the error message and pasted it into ChatGPT and asked it to fix the code. The machine apologized</span><span style="font-family: georgia;">(!)</span><span style="font-family: georgia;"> to me for making the mistake and produced new code that it claimed would fix the issue. </span></p><p><span style="font-family: georgia;">It didn't.</span></p><p><span style="font-family: georgia;">I tried again and again. Six times I went back to ChatGPT, each time with slightly different error messages from Blender. Each time, after the "correction," the program failed to run and I received a new error message in return.</span></p><p><span style="font-family: georgia;">Now, I said I didn't know how to code in Python, but that doesn't mean I can't code. 
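</span></p><p><span style="font-family: georgia;">For the curious, the kind of script I was asking for really is only a few dozen lines. The sketch below is my own after-the-fact illustration, not ChatGPT's actual output; the point count, sphere size, and colors are arbitrary choices, and the bpy calls assume a reasonably recent (2.8 or later) version of Blender:</span></p>

```python
import random

def make_points(n=50, seed=7):
    # Notional data: an (x, y, z) position plus an RGBA color for each dot.
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        xyz = tuple(rng.uniform(0.0, 10.0) for _ in range(3))
        rgba = (rng.random(), rng.random(), rng.random(), 1.0)
        points.append((xyz, rgba))
    return points

def plot_in_blender(points, radius=0.15):
    # Only works inside Blender's Scripting workspace, where bpy is available.
    import bpy
    for (x, y, z), rgba in points:
        # One small sphere per data point...
        bpy.ops.mesh.primitive_uv_sphere_add(radius=radius, location=(x, y, z))
        # ...with its own material so each dot can carry its own color.
        mat = bpy.data.materials.new(name="dot")
        mat.diffuse_color = rgba  # Blender 2.8+ expects a 4-component RGBA
        bpy.context.object.data.materials.append(mat)

# Inside Blender: plot_in_blender(make_points())
```

<p><span style="font-family: georgia;">Even a sketch like this illustrates the catch: if one of the bpy calls fails--and Blender's Python API does shift between versions--you still need enough Python to make sense of the traceback.</span></p><p><span style="font-family: georgia;">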
Looking over the error messages, it was obvious to me that the problem was almost certainly something simple, something any Python coder would be able to figure out, correct, and implement. Such a coder would still have saved a vast amount of time, as even when you know what you are doing, 50 lines of code takes a good bit of time to fat-finger. </span></p><p><span style="font-family: georgia;">In other words, for generative AI to be helpful to me, I would need to know Python, but the reason I went to a generative AI in the first place was because I didn't know Python! </span></p><p><span style="font-family: georgia;">And therein lies the <a href="https://en.wikipedia.org/wiki/Catch-22_(logic)">Catch-22</a> of generative AI. </span></p><p><span style="font-family: georgia;">I have seen this same effect in a variety of other situations. I asked another large language model, Anthropic's Claude, to write a draft of a safety SOP. It generated a draft very quickly and with surprising accuracy. There were, however, a number of things that needed to be fixed. Having written my fair share of safety SOPs back in the day, I was able to quickly make the adjustments. It saved me a ton of time. 
Without understanding what a good safety SOP looked like to begin with, however, the safety SOP created by generative AI risked being, well, unsafe.</span></p><p><span style="font-family: georgia;">At one level, this sounds a lot like some of my previous findings on generative AI such as <a href="https://sourcesandmethods.blogspot.com/2023/07/generative-ai-is-like.html">"Generative AI is a mindnumbingly fast but incredibly average staff officer"</a> or <a href="https://sourcesandmethods.blogspot.com/2023/10/generative-ai-is-better-at-form-than.html">"Generative AI is better at form than content."</a> And it is.</span></p><p><span style="font-family: georgia;">At another level, however, it speaks to the need for an education system that both keeps up with advancements in generative AI and maintains pre-generative AI standards. The only way, at least for now, to use generative AI safely will be to know more than the AI about the AI's outputs--to know enough to spot the errors. The only way, in turn, to know more than generative AI is to learn it the old-fashioned way--grind through the material on your own until you are comfortable that you understand it. Ironically, AI may be able to speed up the grind, but the learning is still on you. </span></p><p><span style="font-family: georgia;">At another, deeper, level, it is more disturbing. I worry that people will ask generative AI about things that they think they know but they don't. Blender acted as a check on both my ignorance and the AI's errors in the first example. My own experience with safety SOPs acted as a check on the AI in the second example. What about areas such as political science, security studies, and military strategy where subjectivity reigns? What if there aren't any checks on the answers generative AI produces? 
Dumb questions will lead to incorrect answers, which will lead to dumber questions and more incorrect answers--a sort of AI-powered <a href="https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect">Dunning-Kruger</a> death spiral. </span></p><p><span style="font-family: georgia;">This mirrors, of course, one of the many concerns of AI experts. I also know that there are many good people working hard to ensure that these kinds of scenarios rarely, if ever, play themselves out. That said, I am reminded of an old saying, usually attributed to Mark Twain, that was a near-perfect forecast of the problems with social media: <span style="background-color: white;"><span style="color: #181818;">“A lie can travel halfway around the world while the truth is putting on its shoes.” Perhaps that should be updated for the modern age: "An AI-energized chain reaction of stupid can destroy the world while the prudent are still slipping on their crocs." </span></span></span></p><p><span style="font-family: georgia;"><span style="background-color: white;"><span style="color: #181818;">Not as catchy, I suppose, but equally prescient?</span></span></span></p>Kristan J. 
Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-44795312305912114002023-10-03T09:30:00.003-04:002023-10-03T13:02:19.817-04:00Generative AI Is Better At Form Than Content<p><span style="font-family: georgia;"></span></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfeoCVeIEeqYSgvHEetqDEzB1aU3-cvMs66C-3sQ8I_7-TL_w6n7-LkZyJ3EnD0QZE8-lmCuGIzIsAvbCjhRXIOFqjw3oBzMnkHqbbLnx41522S7Sfp3MlrtSrtZuDpaj9Nz-Kt0zduJdjCLs9tKAJtLb2l3J4qe6vfqoXm5Ht9SsNYbK9c1LLjqXDasU/s519/Prudens%20Futuri.jpg" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="305" data-original-width="519" height="235" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfeoCVeIEeqYSgvHEetqDEzB1aU3-cvMs66C-3sQ8I_7-TL_w6n7-LkZyJ3EnD0QZE8-lmCuGIzIsAvbCjhRXIOFqjw3oBzMnkHqbbLnx41522S7Sfp3MlrtSrtZuDpaj9Nz-Kt0zduJdjCLs9tKAJtLb2l3J4qe6vfqoXm5Ht9SsNYbK9c1LLjqXDasU/w400-h235/Prudens%20Futuri.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>Prudens Futuri prominently displayed in <br />the US Army War College's new academic building</i><br /><br /></td></tr></tbody></table><span style="font-family: georgia;">All communications have at least two aspects, form and content. Content is what we want to say. It could be about politics, or philosophy, or here at the War College, rather predictably, about war.</span><p></p><p><span style="font-family: georgia;">Form is different. The medium through which we express ideas impacts how we communicate those ideas. For instance, the same core concepts about politics, philosophy, or war can be conveyed very differently depending on whether they are presented through a 20 minute PowerPoint presentation or a 2,000 word paper. 
<a href="https://en.wikipedia.org/wiki/The_medium_is_the_message">The form we choose defines how we communicate the message</a>.</span></p><p><span style="font-family: georgia;">Let me give you an example. My current favorite generative AI is <a href="https://www.anthropic.com/">Anthropic's Claude</a>. I asked it to explain the meaning of the US Army War College's motto, "Prudens Futuri," to the incoming class of students. Here is what it came back with:</span></p><p><span style="font-family: georgia;"></span></p><blockquote><span style="font-family: georgia;"><i>"The Latin phrase "prudens futuri" translates to "foreseeing the future" or "provident of the future." It refers to the wisdom and foresight required of effective leaders. Prudent leaders do not simply react to current circumstances - they have a vision for the future and proactively position their forces to capitalize on anticipated events. As Sun Tzu advised, foreknowledge enables leaders to seize opportunities and gain advantage. Prudens futuri requires gathering intelligence, conducting analysis, forecasting threats, and planning for contingencies. Leaders who excel at prudens futuri can execute bold, unconventional strategies to defeat enemies who rely on reactionary thinking. Prudens futuri epitomizes the innovative, forward-looking mindset the Army must adopt to maintain strategic dominance."</i></span></blockquote><p><span style="font-family: georgia;">Not bad. I also asked it to explain the same thing to a 13-year-old:</span></p><p></p><blockquote><span style="font-family: georgia;"><i>"'Prudens futuri' is a Latin phrase that means "foreseeing the future." It's about being wise enough to predict what might happen next. Great leaders don't just respond to what's in front of them - they think ahead. Imagine you're a general planning to attack an enemy. A wise general would send scouts to spy first and learn the enemy's weaknesses. 
Then the general can make a smart battle plan to sneak around and surprise them from behind. The Latin phrase teaches that real leaders don't just react - they use their brains to see the future and stay one step ahead. That's what "prudens futuri," or "foreseeing the future," is all about!"</i></span></blockquote><p></p><p></p><p><span style="font-family: georgia;">Also not bad--particularly given the change in audience. </span></p><p><span style="font-family: georgia;">It is even possible to ask the same of a text-to-image AI like <a href="https://www.midjourney.com/">Midjourney</a>, used to create the image below from the same prompt:</span></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTfHXSMWDOSP4JUQ2BrC4n7d8l0063rsOHLXwHZsTJcmhXVaZ-8Sl5-dxtjmcLRLOm84vKD-wc5csYiLYOvH-7EhxJfXPJ_UsPfpPi8EiR05Wr43I_4aoukZOnlh0ogmAbYKVNwSDnZhzNrUfLOACC13OXfs8S93cP5xi_Lm4-Sn5fzx9-b7uFwmxhKeU/s1024/kwheaton_the_latin_phrase_prudens_futuri_as_a_picture_used_to_e_d1f19bc0-efb7-40d2-bfb1-e50ec79dae81.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1024" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTfHXSMWDOSP4JUQ2BrC4n7d8l0063rsOHLXwHZsTJcmhXVaZ-8Sl5-dxtjmcLRLOm84vKD-wc5csYiLYOvH-7EhxJfXPJ_UsPfpPi8EiR05Wr43I_4aoukZOnlh0ogmAbYKVNwSDnZhzNrUfLOACC13OXfs8S93cP5xi_Lm4-Sn5fzx9-b7uFwmxhKeU/w640-h640/kwheaton_the_latin_phrase_prudens_futuri_as_a_picture_used_to_e_d1f19bc0-efb7-40d2-bfb1-e50ec79dae81.png" width="640" /></a></div><br /><p><span style="font-family: georgia;">In this image, I am particularly taken with the way the AI combined historical references to foresight, such as the <a href="https://en.wikipedia.org/wiki/Pythia">Oracle of Delphi</a>, with more futuristic concepts like the (apparent) <a href="https://en.wikipedia.org/wiki/Space_elevator">space elevator</a> in the background. 
In fact, I find asking text-to-image generators to imagine some futuristic scene to be a very practical way to fill in gaps in futures thinking. I inevitably see something that makes sense in the image that was not accounted for in the text (kind of like <a href="https://en.wikipedia.org/wiki/The_Spandrels_of_San_Marco_and_the_Panglossian_Paradigm">The Spandrels of San Marco and the Panglossian Paradigm</a>).</span></p><p><span style="font-family: georgia;">My point? I have yet to have a generative AI create something that did not need tweaking, at the least, on the content side. I have been really happy, however, with generative AI's ability to master particular forms. </span></p><p><span style="font-family: georgia;">This is one of the reasons, I think, I have quite recently become a bit uncomfortable with policies that talk about citing a generative AI as if it were a source. It <i>is</i>, I suppose...but it seems less of a source than Wikipedia, and, while I love Wikipedia and believe it is one of the great wonders of the modern world, I would not cite Wikipedia for anything other than background. I require my students, for example, to find a reputable source to validate anything that a generative AI might come up with when making an estimate. </span><span style="font-family: georgia;">And, if you are going to make a student find a reputable</span><span style="font-family: georgia;"> source anyway, why would they need the generative AI at all? The answer, of course, is for the form. </span></p><p><span style="font-family: georgia;">This may not be true forever. Generative AI is getting better at a brisk pace. There may come a day when generative AI is looked upon as an authority, equal to peer-reviewed papers. </span><span style="font-family: georgia;">Until that time, we should still appreciate its talents for helping to craft the message. For now, generative AI is an unparalleled writing partner, not an independent thinker. 
By acknowledging its current limits alongside its awesome potential, we grant generative AI its proper place: revolutionizing <i>how</i> we communicate knowledge, while established methods still reign over <i>what</i> we know.</span></p>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-5190185543810511602023-08-16T09:30:00.005-04:002023-08-16T09:30:00.171-04:00Answers For Pennies, Insights For Dollars: Generative AI And The Question Economy<p><span style="font-family: georgia;">No one seems to know exactly where the boom in Generative AIs (like <a href="https://chat.openai.com/" target="_blank">ChatGPT</a> and <a href="https://www.anthropic.com/index/claude-2">Claude</a>) will lead us, but one thing is for certain: These tools are rapidly driving down the cost of getting a good (or, at least, good enough) answer very quickly. Moreover, they are likely to continue to do so for quite some time. </span></p><p><span style="font-family: georgia;"></span></p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaUVVQpLwkBuZjXyNE8-b8DjL2tiXDBExxZWe94yrQwgNnOyXYs35khUxAZb3wkvA0ASPviZT_CyvhEMBdLvsTuBnCBX2LUK7oAjI7FdssgjJvhOd7TlvihL_EHanymuQTFNEUXBtkiWB8kqn9ms7i3D_bYxwMZDguAtKpTPipvJNC6ZXtaHRWA1ZGPm8/s960/Generative%20AI%20%20cost%20of%20answers.png" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="540" data-original-width="960" height="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhaUVVQpLwkBuZjXyNE8-b8DjL2tiXDBExxZWe94yrQwgNnOyXYs35khUxAZb3wkvA0ASPviZT_CyvhEMBdLvsTuBnCBX2LUK7oAjI7FdssgjJvhOd7TlvihL_EHanymuQTFNEUXBtkiWB8kqn9ms7i3D_bYxwMZDguAtKpTPipvJNC6ZXtaHRWA1ZGPm8/w400-h225/Generative%20AI%20%20cost%20of%20answers.png" width="400" 
/></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>The data is notional <br />but the trend is unquestionable, I think.</i></td></tr></tbody></table><span style="font-family: georgia;"><br />To be honest, this has been a trend since at least the mid-1800's with the <a href="https://en.wikipedia.org/wiki/Public_library">widespread establishment of public libraries in the US and UK</a>. Since then, improvements in cataloging, the professionalization of the workforce, and technology, among other things, worked to drive down the cost of getting a good answer (See chart to the right).</span><p></p><p><span style="font-family: georgia;">The quest for a less expensive but still good answer accelerated, of course, with the <a href="https://en.wikipedia.org/wiki/Internet#History">introduction of the World Wide Web in the mid-1990's</a>, driving down the cost of answering even tough questions. While misinformation, disinformation, and the unspeakable horror that social media has become will continue to lead many people astray, savvy users are better able to find consistently good answers to harder and more obscure questions than ever before. </span></p><p><span style="font-family: georgia;">If the internet accelerated this historical trend of driving down the cost of getting a good answer, the roll-out of generative AI to the public in late 2022 tied a rocket to its backside and pushed it off a cliff. <a href="https://www.pcmag.com/news/yes-machines-make-mistakes-the-10-biggest-flaws-in-generative-ai">Hallucinations and bias</a> to the side, the simple truth is that generative AI is, more often than not, able to give pretty good answers to an awful lot of questions and it is free or cheap to use. </span></p><p><span style="font-family: georgia;">How good is it? Check out the chart below (<a href="https://www.visualcapitalist.com/how-smart-is-chatgpt/">Courtesy Visual Capitalist</a>). 
GPT-4, OpenAI's best publicly available large language model, blows away most standardized tests. </span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdrey7zu-ODnHFnzxwFg9wF3xbxUtFKF0-EA8Nf3L9U0DXDJi-9pvQoKxV95kIVkOPhp89TLehJT_fsvmYVUhjCbIwppbqqD3wlqYLtKi5JT7mUTkVnyQfMITOl-q1lkZHPfYf5B8YwMztOi0zXblxw0qz4nZpt2SNj_4L34HNBo8TjzKQsrmpQJMzF6A/s1469/ChatGPT-Exam-Scores_MAIN.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1469" data-original-width="1200" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdrey7zu-ODnHFnzxwFg9wF3xbxUtFKF0-EA8Nf3L9U0DXDJi-9pvQoKxV95kIVkOPhp89TLehJT_fsvmYVUhjCbIwppbqqD3wlqYLtKi5JT7mUTkVnyQfMITOl-q1lkZHPfYf5B8YwMztOi0zXblxw0qz4nZpt2SNj_4L34HNBo8TjzKQsrmpQJMzF6A/w522-h640/ChatGPT-Exam-Scores_MAIN.jpg" width="522" /></a></div><div><br /></div><span style="font-family: georgia;">It is important to note that this chart was made in April 2023 and represents results from GPT-4. <a href="https://www.metaculus.com/questions/15462/gpt-5-announcement/">OpenAI is working on GPT-5</a>, and five months in this field is like a dozen years in any other (Truly. I have been watching tech evolve for 50 years. Nothing in my lifetime has ever improved as quickly as generative AIs have). Eventually, the forces driving these improvements will reach a point of diminishing returns and growth will slow down and maybe even flatline, but that is not the trajectory today.</span><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">All this raises a question, though: If answers are getting better, cheaper, and more widely available at an accelerating rate, what's left? </span> <span style="font-family: georgia;">In other words, if no one needs to pay for my answers anymore, what can I offer? How can I make a living? Where is the value-added? 
</span><span style="font-family: georgia;">This is precisely the sort of thinking that led Goldman Sachs to <a href="https://www.bbc.com/news/technology-65102150">predict the loss of 300 million jobs worldwide due to AI</a>. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">My take on it is a little different. I think that as the cost of a good answer goes down, the value of a good question goes up. </span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLGf6jxyWqhXoKM9oF5Stl88OqItQhwZGplyrMq5j55j6OFzRafHCnGyyv4IOcQflMpoLUZwLUnkJOd5Flaysjw96_Ol8KMwVfB5qq8tRlXAmMpZ25ljUSMNvoz8v4Z_iedQ5Zff8Bmrcqk_lZP3jQ0qa-EnHJsiDEOfDl0swvzpeEZuf1CROaOGR8kTM/s960/Generative%20AI%20%20and%20the%20USAWC.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="540" data-original-width="960" height="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLGf6jxyWqhXoKM9oF5Stl88OqItQhwZGplyrMq5j55j6OFzRafHCnGyyv4IOcQflMpoLUZwLUnkJOd5Flaysjw96_Ol8KMwVfB5qq8tRlXAmMpZ25ljUSMNvoz8v4Z_iedQ5Zff8Bmrcqk_lZP3jQ0qa-EnHJsiDEOfDl0swvzpeEZuf1CROaOGR8kTM/w640-h360/Generative%20AI%20%20and%20the%20USAWC.png" width="640" /></a></div><span style="font-family: georgia;"><i>In short, the winners in the coming AI wars are going to be the ones who can ask the best questions at the most opportune times. </i></span><br /><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">There is evidence, in fact, that this is already becoming the case. Go to Google and look for jobs for "prompt engineers." This term barely existed a year ago. Today, it is one of </span><a href="https://www.linkedin.com/pulse/prompt-engineering-future-jobs-age-chatgpt-srini-gundelli/" style="font-family: georgia;">the fastest-growing fields in AI</a><span style="font-family: georgia;">. 
Prompts are just a fancy name for the questions that we ask of generative AI, and a prompt engineer is someone who knows the right questions to ask to get the best possible answers. There is even a marketplace for these "good questions" called </span><a href="https://promptbase.com/" style="font-family: georgia;">Promptbase</a><span style="font-family: georgia;"> where you can, for a small fee, buy a customizable prompt from someone who has already done the hard work of optimizing the question for you.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Today, becoming a prompt engineer is a combination of on-the-job training and art. There are some approaches, some magical combination of words, phrases, and techniques, that can be used to get the damn machines to do what you want. Beyond that, though, much of what works seems to have been discovered by power users who are just messing around with the various generative AIs available for public use.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">None of this is a bad thing, of course. The list of discoveries that have come about from people just messing around or mashing two things together that have not been messed with/mashed together before is both long and honorable. At some point, though, we are going to have to do more than that. 
At some point, we are going to have to start teaching people how to ask better questions of AI.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">The idea that asking the right question is not only smart but essential is an old one:</span></div><div><span style="font-family: georgia;"><br /></span></div><blockquote style="border: none; margin: 0px 0px 0px 40px; padding: 0px; text-align: left;"><div style="text-align: left;"><span style="font-family: georgia, "times new roman", serif; font-size: 16px;"><i>“A prudent question is one-half of wisdom.”</i> – <a href="https://en.wikipedia.org/wiki/Francis_Bacon" style="color: #b4445c;">Francis Bacon</a></span></div><div style="text-align: left;"><span style="font-family: georgia, "times new roman", serif; font-size: 16px;"><i>“The art of proposing a question must be held of higher value than solving it.”</i> – <a href="https://en.wikipedia.org/wiki/Georg_Cantor" style="color: #b4445c;">Georg Cantor</a></span></div><div style="text-align: left;"><span style="font-family: georgia, "times new roman", serif; font-size: 16px;"><i>“If you do not know how to ask the right question, you discover nothing.”</i> – <a href="https://en.wikipedia.org/wiki/W._Edwards_Deming" style="color: #b4445c;">W. Edwards Deming</a></span></div></blockquote><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">And we often think that at least one purpose of education, certainly of higher education, is to teach students how to think critically; how, in essence, to ask better questions. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">But is that really true? Virtually our whole education system is structured around evaluating the quality of student answers. 
We may think that we educate children and adults to ask probing, insightful questions, but we grade, promote, and celebrate students for the number of answers they get right. </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">What would a test based not on the quality of the answers given but on the quality of the questions asked even look like? What criteria would you use to evaluate a question? How would you create a question rubric? </span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Let me give you an example. Imagine you have told a group of students that they are going to pretend that they are about to go into a job interview. They know, as with most interviews, that once the interview is over, they will get asked, "Do you have any questions for us?" You task the students to come up with interesting questions to ask the interviewer.</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;">Here is what you get from the students:</span></div><div><span style="font-family: georgia;"><div><ol style="text-align: left;"><li>What are the biggest challenges that I might face in this position?</li><li>What are the next steps in the hiring process?</li><li>What’s different about working here than anywhere else <i>you’ve</i> ever worked?</li></ol></div><div>What do you think? Which question is the most interesting? Which question gets the highest grade? If you are like the vast majority of the people I have asked, you say #3. But why? Sure, you can come up with reasons after the fact (<a href="https://www.semanticscholar.org/paper/Intuition-and-ex-post-facto-reasoning-in-moral-Some-Bj%C3%B6rklund/627eec92a0542879134c9ad583e8435bc5596f5d">humans are good at that</a>), but where is the research that indicates why an interesting question is...well, interesting? 
It doesn't exist (to my knowledge anyway). We are left, like <a href="https://www.jstor.org/stable/797245?origin=crossref">Justice Stewart and the definition of pornography,</a> with "I know it when I see it."</div><div><br /></div><div>What about "hard" questions? Or "insightful" questions? Knowing the criteria for each of these and teaching those criteria such that students can reliably ask better questions under a variety of circumstances seems like the key to getting the most out of AI. <a href="https://www.tandfonline.com/doi/full/10.1080/08850607.2013.780562?scroll=top&needAccess=true&role=tab">There is very little research</a>, however, on what these criteria are. There are <a href="https://sourcesandmethods.blogspot.com/2019/08/how-to-think-about-future-part-3-why.html">some hypotheses</a> to be sure, but statistically significant, peer-reviewed research is thin on the ground.</div><div><br /></div><div>This represents an opportunity, of course, for intellectual overmatch. If there is very little real research in this space, then any meaningful contribution is likely to move the discipline forward significantly. If what you ask in the AI-enabled future really is going to be more important than what you know, then such an investment seems not just prudent, but an absolute no-brainer.</div></span></div><div><div><div><p></p></div></div></div>Kristan J. 
Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-39436259699306675782023-07-24T10:27:00.003-04:002023-07-25T12:19:14.139-04:00Generative AI Is Like A ...<p><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHkS_EkdvdYtkaUJ3kw9mv2CXjVUmcceUGubXn5wgEYMU9Q09C7Z3szWFuGxQmuXcAGwWHjg4YCFynEr3OtZjgPxV_CJX1DSj8RhLJl5lfbZMLYK9Dcwouo8K5VyGLK_wmzkrzwCZ0tTdREIkcEC-UQjY5aVIN6p2VrsBLCSiMh0-6Zb_w_LFtXIzsdlg/s778/kwheaton_an_incredibly_fast_but_also_incredibly_average_militar_0a8f915e-c598-48b2-bbd6-8a6285265404.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="778" data-original-width="661" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHkS_EkdvdYtkaUJ3kw9mv2CXjVUmcceUGubXn5wgEYMU9Q09C7Z3szWFuGxQmuXcAGwWHjg4YCFynEr3OtZjgPxV_CJX1DSj8RhLJl5lfbZMLYK9Dcwouo8K5VyGLK_wmzkrzwCZ0tTdREIkcEC-UQjY5aVIN6p2VrsBLCSiMh0-6Zb_w_LFtXIzsdlg/w340-h400/kwheaton_an_incredibly_fast_but_also_incredibly_average_militar_0a8f915e-c598-48b2-bbd6-8a6285265404.png" width="340" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>This will make sense in a minute...</i></td></tr></tbody></table><span style="font-family: georgia;">Don't worry! I'm going to fill in the blank, but before I do, have you played around with generative AI yet? </span></p><p><span style="font-family: georgia;">If not, let's solve that problem first.</span></p><p><span style="font-family: georgia;">Go to <a href="https://Perplexity.ai">Perplexity.ai</a>--right now and before you read any further--and ask it a question. 
Don't ask it a question it can't know the answer to (like, "What did I have for lunch?"), but do ask it a hard question that you do know the answer to (or for which you are at least able to recognize a patently bad answer). Then, ask Perplexity some follow-up questions. One or two should be enough.</span></p><p><span style="font-family: georgia;">Come back when you are finished.</span></p><p><span style="font-family: georgia;">Now rate the answers you got on a scale from 1-10. One or two is a dangerous answer, one that could get someone hurt or cause real problems. Give a nine or ten to an actionable answer, one that you could use right now, as is.</span></p><p><span style="font-family: georgia;">I have had the opportunity to run this exercise with a large number of people at a variety of conferences and training events over the last six months. First, I consistently find that only about a third of the crowd have ever used any generative AIs (like Perplexity or <a href="https://chat.openai.com/auth/login" target="_blank">ChatGPT</a>), though that number seems to be going up (as you would expect) over time.</span></p><p><span style="font-family: georgia;">I have rarely heard anyone give an answer a one or two and always have at least a couple of people give the answer they received a nine or ten. Other members of each audience typically gave scores that ranged across the spectrum, of course, but the average seemed to be about a six. </span></p><p><span style="font-family: georgia;">Yesterday, I gave this same exercise to about 30 people, and there were no 1's or 2's, and three people (10%) gave their answer a 9 or 10. No one gave the answer less than a 5. No one. </span></p><p><span style="font-family: georgia;">While anecdotal, it captures a trend that has been thoroughly documented across a number of different domains: Generative AI isn't hitting like a freight train. 
It's hitting like one of those high-speed Japanese bullet trains, vaporizing traditional paradigms so quickly that they still don't know that they are already dead (<a href="https://neurosciencenews.com/ai-creativity-23585/" target="_blank">For example...</a>).</span></p><p><span style="font-family: georgia;">Or is it?</span></p><p><span style="font-family: georgia;">Thanks to some forward-thinking policy guidance from the leadership here at the Army War College, I, along with my colleagues Dr. Kathleen Moore and LTC Matt Rasmussen, was able to teach a class for most of last year with the generative AI switch set to "on." </span></p><p><span style="font-family: georgia;">The class is called the Futures Seminar and is explicitly designed to explore futures relevant to the Army, so it was perfectly appropriate for an exploration of AI. It is also an all-year elective course, so we were able to start using these tools when they first hit the street in November 2022 and continue to use them until the school year ended in June. Finally, Futures Seminar students work on research questions posed by Army senior leaders, so lessons learned from this experience ought to apply to the real world as well.</span></p><p><span style="font-family: georgia;">We used generative AIs for everything. We used them for brainstorming. We used them to critique our analysis. We used them to red-team. We <a href="https://poe.com/">created our own bots</a>, like DigitalXi, which was designed to take the perspective of Xi Jinping and answer our questions as he would. We visualized using <a href="https://www.midjourney.com/app/">Midjourney</a> and <a href="https://openai.com/dall-e-2">DALL-E 2</a> (see picture above made with Midjourney). We <a href="https://beta.elevenlabs.io/speech-synthesis" target="_blank">cloned people's voices</a> and <a href="https://app.pictory.ai/login" target="_blank">created custom videos</a>. 
We tapped into AI aggregation sites like <a href="https://www.futurepedia.io/" target="_blank">Futurepedia</a> and <a href="https://theresanaiforthat.com/" target="_blank">There's An AI For That</a> to find tools to help create everything from custom soundtracks to spreadsheets.</span></p><p><span style="font-family: georgia;">We got lots of feedback from the students and faculty, of course, both formal and informal. We saw two big trends. The first is that people either start at the "AI is going to save the earth" end of the spectrum or the "AI is going to destroy the earth" end. For people who haven't tried it yet, there seems to be little middle ground. </span></p><p><span style="font-family: georgia;">The second thing we saw is that, over time and sort of as you would expect, people develop a more nuanced view of AI the more they use it. </span></p><p><span style="font-family: georgia;">In the end, if I had to boil down all of the comments and feedback, it would be this: <i>generative AI is like a blazingly fast, incredibly average staff officer.</i></span></p><p><span style="font-family: georgia;">Let me break that down a bit. Generative AI <i>is</i> incredibly fast at generating an answer. I think this fools people, though. It makes it seem like it is better than it actually is. On real-world problems, with second- and third-order causes and consequences that have to be considered, the AIs (and we tried many) were never able to just nail it. They were particularly bad at seeing and managing the relationships between the moving pieces of complex problems and particularly good at doing administrivia (I got it to write a great safety SOP). In the end, the products were average, sometimes better, sometimes worse, but, overall, average. That said, the best work tended to come not from an AI alone or a student alone, but from the human and machine working together. 
</span></p><p><span style="font-family: georgia;">I think this is a good place for USAWC students to be right now. The students here are 25-year military professionals who have all been successful staff officers and commanders. They know what good, great, average, and bad staff work looks like. They also know that, no matter what the staff recommends, if the commander accepts it, the work becomes the commander's. In other words, if a commander signs off on a recommendation, it doesn't matter if it came from two tired majors or a shiny new AI. That commander now owns it. Finally, our students are comfortable working with a staff. Seeing the AI as a staff officer instead of as an answer machine is not only a good place for them to be mentally, but also likely to be the place where the best work is generated.</span></p><p><span style="font-family: georgia;">Finally, everyone--students and faculty alike--noted that this is where AI <i>currently</i> is. Everyone expects it to get better over time, for all those 1's and 2's from the exercise above to disappear and for the 9's and 10's to grow in number. No one knows what that truly means, but I will share my thoughts on this in the next post. </span></p><p><span style="font-family: georgia;">While all this evidence is anecdotal, we also took some time to run some more formal studies and more controlled tests. Much of that is still being written or shopped around to various journals, but two bits of evidence jumped out at me from a survey conducted by Dr. Moore.</span></p><p><span style="font-family: georgia;">First, she found that our students, who had worked with AI all year, perceived it as likely to be 20% more useful to the Army than the rest of the student body did (and 31% more useful than the faculty did). Second, she found that 74% of Futures Seminar students walked away from the experience thinking that the benefits of developing AI outweigh the risks, with only 26% unsure. 
General population students were much more risk-averse: only 8% were convinced the benefits outweigh the risks, a whopping 55% were unsure, and 37% said the risks outweigh the benefits.</span></p><p><span style="font-family: georgia;">This last finding highlights something of which I am now virtually certain: The only real way to learn about generative AI is to use it. No amount of lectures, discussions, PowerPoint slides, or what have you will replace just sitting down at a computer and using these tools. What you will find is that your own view will become much more informed, much more quickly, and in much greater detail than any other approach you might take to understand this new technology.</span></p><p><span style="font-family: georgia;">Gaining this understanding is critical. Generative AI is currently moving at a lightning pace. While there is already some talk that the current approach will reach a point of diminishing returns in the future due to data quality, data availability, and cost of training, I don't think we will reach this point anytime soon. Widely applicable, low-cost AI solutions are no longer theoretical. Strategic decisionmakers have to start integrating their impact into their plans now.</span></p>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com2tag:blogger.com,1999:blog-2569772432953120875.post-23956661755924363392021-10-20T09:40:00.000-04:002021-10-20T09:40:05.958-04:00Is It OK To Sell Eggs To Gophers?<p><span style="font-family: georgia;">Apparently not...</span></p><p><span style="font-family: georgia;">...At least according to a recently launched experiment in ethical artificial intelligence (AI). Put together by a number of researchers at the <a href="https://arxiv.org/abs/2110.07574">Allen Institute for AI</a>, <a href="https://delphi.allenai.org/"><b>Ask Delphi</b></a> lets you submit a plain English question and get a straight answer. 
</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwJA83IBmFvm937TlXY0NRc6wZyxbBJN6x5ZzUNFQup5HPf1jtUPfucTs_nMNds-mlp9PFjIUsZSxQGTd6Jh11kfvNdoB007DXq7hyfVl_mNWQdYarKOxPq4DKuBW1kulRs5XYL9QLkN0/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em; text-align: left;"><img alt="" data-original-height="628" data-original-width="1050" height="239" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwJA83IBmFvm937TlXY0NRc6wZyxbBJN6x5ZzUNFQup5HPf1jtUPfucTs_nMNds-mlp9PFjIUsZSxQGTd6Jh11kfvNdoB007DXq7hyfVl_mNWQdYarKOxPq4DKuBW1kulRs5XYL9QLkN0/w400-h239/Screenshot_20211020-070637.png" width="400" /></a></div><br /><p></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><span style="font-family: georgia;">It does pretty well with straightforward questions such as "Should I rob a bank?" </span></p><p></p><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5x8cR3ulLxd2IIArf_8iYMEblnL6hdU55UML2xSd0IKPJFEqcSLcoFUr4S2ZkBK-OEeH9Q1_12JcF-9TkrX72dCrsCvhk6DhbHJfBfGdm07uC-k8V1RSf33rcHb_t1sCRznL4ATygBfI/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="" data-original-height="185" data-original-width="758" height="98" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh5x8cR3ulLxd2IIArf_8iYMEblnL6hdU55UML2xSd0IKPJFEqcSLcoFUr4S2ZkBK-OEeH9Q1_12JcF-9TkrX72dCrsCvhk6DhbHJfBfGdm07uC-k8V1RSf33rcHb_t1sCRznL4ATygBfI/w400-h98/Screenshot+2021-10-20+083742.png" width="400" /></a></div><br /><br /><br /><p></p><p><br /></p><p><br /></p><span style="font-family: georgia;">It also appears to have some sense of self-awareness: </span><p></p><p></p><div class="separator" style="clear: both; text-align: center;"><a 
href="https://www.techradar.com/news/ai-chatbot-justifies-sacrificing-colonists-to-create-a-biological-weaponif-it-creates-jobs" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="" data-original-height="279" data-original-width="546" height="205" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQiCiOGWweSf2VgzEhp24RquACAdfiOyz0GVe1hv6z8sxWP8LjjyzKxBNtl_zzsSroka1XybXm7_YRYAxaVnxPrwBcvHVMYv14pshAKcjKPYrLdeLLvsOCeGkMwgpQLUrSRI6wKQq1kBQ/w400-h205/Screenshot+2021-10-20+083803.png" width="400" /></a></div><br /><br /><p></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><span style="font-family: georgia;"><br /></span></p><p><span style="font-family: georgia;">It has surprisingly clear answers for at least some <a href="https://en.wikipedia.org/wiki/Catch-22_(logic)">paradoxes</a>:</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhopZkd883cDgIicoTwkS8ngbVT1Ekx2qxZX5a5Cu0pK9GsEHloaCEDGVJH72FQoiRBt1vUpXAARov2I1LpxbVR-DRw0krsVFJbdUtD8fDpTD9w6Vx1hmjsbRC-Pd0dqHXyuhkr3I2vMEk/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="" data-original-height="215" data-original-width="750" height="115" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhopZkd883cDgIicoTwkS8ngbVT1Ekx2qxZX5a5Cu0pK9GsEHloaCEDGVJH72FQoiRBt1vUpXAARov2I1LpxbVR-DRw0krsVFJbdUtD8fDpTD9w6Vx1hmjsbRC-Pd0dqHXyuhkr3I2vMEk/w400-h115/Screenshot+2021-10-20+090913.png" width="400" /></a></div><br /><br /><p></p><p><br /></p><p><br /></p><p><br /></p><p><span style="font-family: georgia;">And for <a href="https://www.dailybuddhism.com/archives/670">historically profound questions of philosophy</a>:</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a 
href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW6cZ9IJQkypgqjMRwWd32WLeOJvOrJ1R6WpioQcn_BC0fd7V4Vv4ZSxBZSnzs206rIYFfMtPoelz2jFJ4xiM4RJAthX4cp7NGb_klurrSpk5_aprGLCBcp2ZZrUeK-6Plf56SId1gmls/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="" data-original-height="233" data-original-width="746" height="125" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjW6cZ9IJQkypgqjMRwWd32WLeOJvOrJ1R6WpioQcn_BC0fd7V4Vv4ZSxBZSnzs206rIYFfMtPoelz2jFJ4xiM4RJAthX4cp7NGb_klurrSpk5_aprGLCBcp2ZZrUeK-6Plf56SId1gmls/w400-h125/Screenshot+2021-10-20+091509.png" width="400" /></a></div><br /><br /><p></p><p><br /></p><p><br /></p><p><br /></p><p><span style="font-family: georgia;">And these aren't the only ways it is clearly not yet perfect:</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF_LrGsywJPB6GLGzdnRj4cch12ohsFQJCoLB8D8BS1dL1VVFM_o6AEQaqAcmcHmjcty39GyhtG-dPSLpOJemyCmj3iov3QpwjWzrK0RRGAujmv12fd-c34wCJNKIYVPga3vuFKisMPLg/" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img alt="" data-original-height="297" data-original-width="611" height="195" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF_LrGsywJPB6GLGzdnRj4cch12ohsFQJCoLB8D8BS1dL1VVFM_o6AEQaqAcmcHmjcty39GyhtG-dPSLpOJemyCmj3iov3QpwjWzrK0RRGAujmv12fd-c34wCJNKIYVPga3vuFKisMPLg/w400-h195/Screenshot+2021-10-20+083925.png" width="400" /></a></div><br /><br /><p></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><br /></p><p><span style="font-family: georgia;">None of its imperfections are particularly important at this point, though. It is still a fascinating experiment in AI and ethics. As the authors themselves say, it "is intended to study the promises and limitations of machine ethics and norms through the lens of descriptive ethics. 
Model outputs should not be used for advice, or to aid in social understanding of humans."</span></p><p><span style="font-family: georgia;">I highly recommend it to anyone interested in the future of AI. </span></p><p><span style="font-family: georgia;">For me, it also highlights a couple of issues for AI more generally. First, the results are obviously interesting, but it would be even more interesting if the chatbot could explain its answers in equally straightforward English. This is likely a technical bridge too far right now, but <a href="https://en.wikipedia.org/wiki/Explainable_artificial_intelligence">explainable AI</a> is, in my opinion, not only important but essential to instilling confidence in human users as the stakes associated with AI go up. </span></p><p><span style="font-family: georgia;">The second issue is how AI will deal with nonsense. How will it separate nonsense from questions that simply require deeper thought, like <a href="https://en.wikipedia.org/wiki/Koan">koans</a>? There still seems to be a long way to go, but this experiment is certainly a fascinating waypoint on the journey.</span></p><p></p>Kristan J. 
Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-19034443292594920302021-05-25T10:08:00.002-04:002021-05-25T10:51:41.032-04:00What If "Innovator" Was A Job Title?<p style="text-align: left;"><span style="font-family: georgia;"><i></i></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: georgia;"><i><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC-Hqrf_T26hemDuXCtNT3KdVv6_SxkhyrF6zRP44O2CeI8VQa4IObEZVUFfPpjoFQ2M2agOr5EYsJv8tlKcPpdKucuW4_Bk7kw7fglp9h3g55j7LJItPC6pLdS2WwXX_pmbXaIfq9FII/s515/coat+of+arms.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="488" data-original-width="515" height="379" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC-Hqrf_T26hemDuXCtNT3KdVv6_SxkhyrF6zRP44O2CeI8VQa4IObEZVUFfPpjoFQ2M2agOr5EYsJv8tlKcPpdKucuW4_Bk7kw7fglp9h3g55j7LJItPC6pLdS2WwXX_pmbXaIfq9FII/w400-h379/coat+of+arms.png" width="400" /></a></i></span></div><span style="font-family: georgia;"><i>I have been thinking a lot about innovation recently. It occurred to me that the US Army has<br />a number of official specialties. We have Strategists and Simulators and Marketers, for example. Why not, I thought, make Innovator an Army specialization? </i></span><p></p><p style="text-align: left;"><i style="font-family: georgia;">I tried to imagine what that might look like. I know my understanding of Army manpower regulations and systems is weak, but bear with me here. This is an idea not a plan. Besides, what I really want to focus on is not the details, but how the experience might feel to an individual soldier. So, this is one of their stories...</i></p><p style="text-align: left;"><span style="font-family: georgia;">I
made it! The paperwork just became final. Beginning next month, I
am--officially--a 99A, US Army Innovator.</span></p><p style="text-align: left;"><span style="font-family: georgia;">The
road to this point wasn’t easy. I graduated college with a degree
in costume design and a ton of student debt. After my plans to work
on Broadway fell through (Who am I kidding? They never even got off
the ground), I had to do something. The </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.rand.org/pubs/research_reports/RR2252.html"><span style="color: #1155cc;">Army
looked like my best option</span></a></u></span><span style="font-family: georgia;">.</span></p><p style="text-align: left;"><span style="font-family: georgia;">For
the last two years, I have been a 68C, a "practical nursing specialist", working out of a
field hospital at Ft. Polk. My plan had always been to make sergeant
and then put in my OCS packet. Things changed for me after a Joint
Readiness Training Center rotation.</span></p><p style="text-align: left;"><span style="font-family: georgia;">Patients
kept coming to us with poorly applied field dressings. They were
either too tight and restricted blood flow or too loose and fell off.
As I thought about it, it occurred to me that there might be a
combination of fabrics that, if sewn together correctly, would be
easy to apply, form a tight seal to the skin, and still be easy to
change or remove.</span></p><p style="text-align: left;"><span style="font-family: georgia;">As
soon as I got back to the barracks, I hit the local fabric store,
pulled out my sewing machine, and made a </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://medium.com/@protoio/why-prototyping-is-an-entrepreneurs-secret-weapon-ab5e5d3b0302"><span style="color: #1155cc;">prototype</span></a></u></span><span style="font-family: georgia;">.
It took a few tries (and lots of advice and recommendations from the
doctors and nurses in the unit) but eventually I got it to work. I
never thought I would be able to use both </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://en.wikipedia.org/wiki/The_Medici_Effect"><span style="color: #1155cc;">my
nursing skills and my costume design skills in one job</span></a></u></span><span style="font-family: georgia;">
but here I was, doing it!</span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: georgia;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8AWhpYz9V2ZEWyqrnGV3e8xR2GTZG-DbHhFKTAbboG4bBC7uLpGscziNkVGztrF6Qj9h-qHiJ93YWQ2_jpM1qy9tFFGe0xObnZdu08k3qoiqH4__UWZEioJGPqt3ohvIUfxKWS_XNdcg/s260/b4_1225.gif" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="161" data-original-width="260" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8AWhpYz9V2ZEWyqrnGV3e8xR2GTZG-DbHhFKTAbboG4bBC7uLpGscziNkVGztrF6Qj9h-qHiJ93YWQ2_jpM1qy9tFFGe0xObnZdu08k3qoiqH4__UWZEioJGPqt3ohvIUfxKWS_XNdcg/w400-h248/b4_1225.gif" width="400" /></a></span></div><span style="font-family: georgia;"><br /></span><p></p><p style="text-align: left;"><span style="font-family: georgia;">I
wasn’t sure what I was going to do with my new kind of field
dressing until one of the RNs made me demonstrate it for the hospital
commander. He watched without saying a word. He finally asked a few questions to make sure he knew how it worked, and then
things got quiet.</span></p><p style="text-align: left;"><span style="font-family: georgia;">Finally,
my RN spoke up, “I think we could really use something like this,
Sir.” He stood up straight and said, “I agree.” Then he
looked at me. “I’m going to hate to lose you, Specialist,” he
said, “but I think you need to put in for an MOS reclassification.”</span></p><p style="text-align: left;"><span style="font-family: georgia;">Until
the hospital commander told me about it, I had never even heard of
99A. There were some direct appointments, of course, but those were
coming out of places like MIT and Silicon Valley. For normal soldiers
like me, getting into the Innovation Corps was more like going into
Civil Affairs or Special Forces. You had to have some time in
service but, more importantly, you had to have a good idea.</span></p><p style="text-align: left;"><span style="font-family: georgia;">At
first, it was easy. I simply submitted my idea </span><span style="font-family: georgia;">to a local Innovation Corps recruiter. I included </span><span style="font-family: georgia;">some
pictures and a short video that I shot on my cell phone of my
prototype in action. The recruiter told
me that the Army used the same </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://dealroom.net/faq/what-is-deal-flow-and-why-is-it-important"><span style="color: #1155cc;">“deal
flow”</span></a></u></span><span style="font-family: georgia;">
system used by venture capitalists. I’m not sure what that all
entails but, in the end, it meant that my idea was one of the </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://grayscale.vc/how-a-vc-funnel-works-6f1202d0ac9"><span style="color: #1155cc;">50%
that moved on</span></a></u></span><span style="font-family: georgia;">
to the next level.</span></p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQOnzea1jWZnS6DaTnF0Z8O3ESibzDZPM9GSwtlTu2T17QXHrkC1sn5GCwwqVTqzCRMGeCBZyq2s1RZzV16D7LRjD_y5jv2ocaj40ViE-cIxrVoSqJNMSuK789U69PkVSBF9P_CilkHvI/s947/Screenshot%252B2020-11-16%252Bat%252B12.57.33-947w.png" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="530" data-original-width="947" height="358" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjQOnzea1jWZnS6DaTnF0Z8O3ESibzDZPM9GSwtlTu2T17QXHrkC1sn5GCwwqVTqzCRMGeCBZyq2s1RZzV16D7LRjD_y5jv2ocaj40ViE-cIxrVoSqJNMSuK789U69PkVSBF9P_CilkHvI/w640-h358/Screenshot%252B2020-11-16%252Bat%252B12.57.33-947w.png" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><i>For more info on deal flows see, <a href="https://www.zapflow.com/blog/basics-of-deal-flow">Basics of Deal Flow</a>.</i></td></tr></tbody></table><span style="font-family: georgia;"><br /></span><p></p><p style="text-align: left;"><span style="font-family: georgia;">My
next step was a lot more difficult. You can think of it as the Q
course for Army innovators. I went TDY for a month to the Army’s
Innovation Accelerator in Austin, Texas. Like all </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.forbes.com/sites/alejandrocremades/2019/01/10/how-startup-accelerators-work/#37bbb74a44cd"><span style="color: #1155cc;">business
accelerators</span></a></u></span><span style="font-family: georgia;">,
the goal was to give me time, space, mentorship and (a little) money
to flesh out my idea. I worked with marketing experts and graphic
designers to come up with a good name and logo. I worked with
experts in the manufacturing of medical equipment to help refine the
prototype. I even had a video team come in and make a great 2-minute
video showcasing the product. It was exciting to see all of the
other ideas and to have a chance to talk about them with the enlisted
soldiers, officers, and even some college students and PhDs--all
trying to bring their ideas to life.</span></p><p style="text-align: left;"><span style="font-family: georgia;">The
Army </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.investopedia.com/terms/c/crowdsourcing.asp"><span style="color: #1155cc;">crowdsourced</span></a></u></span><span style="font-family: georgia;">
the decision about which projects got to move on from the
accelerator. That meant that each of us put together a “pitch
page,” kind of like what you would see on </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.kickstarter.com/"><span style="color: #1155cc;">Kickstarter</span></a></u></span><span style="font-family: georgia;">
or </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.indiegogo.com/"><span style="color: #1155cc;">IndieGoGo</span></a></u></span><span style="font-family: georgia;">.
Units all across the Army had a fixed number of tokens they could
spend on innovative projects each quarter. Each of us needed to get
a set number of tokens or we would not be allowed to move on. In the
end, out of the hundreds of applications and the dozens of people at
the accelerator, I was </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://innovationcollaborators.com/wp-content/uploads/2017/09/Jumpstart-Small.pdf"><span style="color: #1155cc;">one
of the 10 chosen</span></a></u></span><span style="font-family: georgia;">
to move forward, one of 10 who gets to call themselves a US Army
Innovator.</span></p><p style="text-align: left;"><span style="font-family: georgia;">That’s
where I am today. My next step is a PCS move to a </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.entrepreneur.com/article/52802"><span style="color: #1155cc;">business
incubator</span></a></u></span><span style="font-family: georgia;">.
I could stay here in Austin with the Army’s business incubator,
but the Army has deals with incubators all over the country. I am
hoping to get a slot in one of the better </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://inbia.org/"><span style="color: #1155cc;">medtech
incubators</span></a></u></span><span style="font-family: georgia;">
in Boston or Buffalo. It will be a two-year tour (with the
possibility of extension), which should give me plenty of time to
bring my idea to market, with the Army as my first customer.</span></p><p style="text-align: left;"><span style="font-family: georgia;">For
me, the best part is that I am now getting Innovation Pay. It is a
lot </span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.goarmy.com/benefits/money/special-pay-for-special-duties-and-skills.html"><span style="color: #1155cc;">like
foreign language proficiency pay or dive pay</span></a></u></span><span style="font-family: georgia;">.
I’m not getting rich but it sure is better than what I got as a
specialist. More importantly, there are ten tiers, and each time you
move up, you get a pretty substantial raise. This means that once
you become an Innovator, you are going to want to stay an Innovator.</span></p><p style="text-align: left;"><span style="font-family: georgia;">The
other great part about this system is that you can move up as fast as
you are able. There are no time-in-service requirements. If I am
successful in the business incubator, for example, I could be a CEO
(Innovator Tier 6) in just a couple of years. Running my own company
at 28? Yes, thank you!</span></p>
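The tier-and-raise mechanics described above are speculative, so any numbers would be too. Still, the compounding incentive is easy to sketch. In the minimal Python example below, the $300 monthly base and the 15% raise per tier are invented purely for illustration; nothing official is implied:

```python
# Hypothetical sketch of a ten-tier Innovation Pay scale.
# The $300/month base and the 15% raise per tier are invented
# for illustration only -- no real pay table is implied.

def innovation_pay(tier, base=300.0, raise_per_tier=0.15):
    """Monthly Innovation Pay for a given tier (1-10)."""
    if not 1 <= tier <= 10:
        raise ValueError("tiers run from 1 to 10")
    # Each tier compounds the previous tier's pay by the raise rate.
    return round(base * (1 + raise_per_tier) ** (tier - 1), 2)

if __name__ == "__main__":
    for t in range(1, 11):
        print(f"Tier {t:2d}: ${innovation_pay(t):,.2f}/month")
```

Under these made-up figures, Tier 6 pays roughly twice Tier 1, which is exactly the kind of spread that would make Innovators want to stay Innovators.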
<p></p><p style="text-align: left;"><span style="font-family: georgia;">And
if I fail? I know there are still bugs to work out with my idea. I
have to get the cost of production down, and there are lots of
competitors in the medical market. Failure could happen. While I
won’t be happy if it does, the truth is that, by some estimates,
</span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://www.forbes.com/sites/neilpatel/2015/01/16/90-of-startups-will-fail-heres-what-you-need-to-know-about-the-10/#4d8b79136679"><span style="color: #1155cc;">90%
of all start-ups fail</span></a></u></span><span style="font-family: georgia;">.
The Army has thought about this, of course, and gives Innovators
three options if their projects fail. </span></p><p style="text-align: left;"><span style="font-family: georgia;">First, I could go back to
nursing. I would need some refresher training but my promotion
possibilities wouldn’t take a hit. The Army put my nursing career
on pause while I was in the Innovation Corps. </span></p><p style="text-align: left;"><span style="font-family: georgia;">The second option is
that I come up with a new idea or re-work my old one. The Innovation
Corps has developed a culture of “</span><span style="color: #1155cc; font-family: georgia;"><u><a href="https://hbr.org/2011/04/failing-by-design"><span style="color: #1155cc;">intelligent
failure</span></a></u></span><span style="font-family: georgia;">,”
which is just a fancy way of saying “learn from your mistakes.”
In an environment where 90% of your efforts are going to fail, it is
stupid to also throw away all of the learning that happened along the
way. Besides, the Army also knows that </span><span style="color: #1155cc; font-family: georgia;"><u><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.195.3530&rep=rep1&type=pdf"><span style="color: #1155cc;">persistence
is a key attribute of successful entrepreneurs</span></a></u></span><span style="font-family: georgia;">.
The Army wants to keep Innovators who can get up, brush themselves
off, and get back in the saddle. </span></p><p style="text-align: left;"><span style="font-family: georgia;">Finally, I might be able to go back
to the accelerator as an instructor or take a staff position in
Futures Command or one of the other Army organizations deeply
involved in innovation.</span></p><p style="text-align: left;"><span style="font-family: georgia;">I’ve
had a chance to talk to a lot of soldiers on my journey: enlisted,
NCOs, and officers. The Innovation Corps is pretty new and,
while many have heard about it, almost none of them really understand
what it takes to become an Innovator. That doesn’t seem to matter
though. Almost all of them, and particularly the old-timers, always
say the same thing: “The Army has been talking about innovation my
whole career. I am glad they finally decided to do something about
it.”</span></p><p style="text-align: left;"><span style="font-family: georgia;">For
me? I’m just proud to be part of it. Proud to help my fellow
soldiers, proud to help the country, and proud to be a US Army
Innovator.</span></p><p></p>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com1tag:blogger.com,1999:blog-2569772432953120875.post-1731862747624264312021-05-10T10:00:00.001-04:002021-05-10T10:00:00.255-04:00The Future Is Like A Butler<div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLE0YXtO0iRLNVhvmxRY44_qLfdiabZfrcsojqVdzIrE6146_9B3zRNq24eyBFs_M2EZfE4v3npqR3P1MxJa3BC-pMPaxH3rGXgPG6QzIvY2D8GhNW7PgHXOs5VxwK-dg3osJl7f4wRPs/s404/Butler_1922.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="404" data-original-width="289" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLE0YXtO0iRLNVhvmxRY44_qLfdiabZfrcsojqVdzIrE6146_9B3zRNq24eyBFs_M2EZfE4v3npqR3P1MxJa3BC-pMPaxH3rGXgPG6QzIvY2D8GhNW7PgHXOs5VxwK-dg3osJl7f4wRPs/s320/Butler_1922.jpg" /></a></div>Imagine
someone gave you a butler. Completely paid for. No termination date
on the contract. What would you do?</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><br /></span><span style="font-family: georgia; font-size: 12pt;">At
first, you’d probably do nothing. You’ve never had a butler.
Outside of movies, you’ve probably never seen a butler. You might
even feel a little nervous having this person in the room with you,
always there, always ready to help. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;">Once you got over your
nervousness, you might ask the butler to do something simple, like
iron your shirts or make you some coffee. “Hey,” you might think
after a while, “This is pretty nice! I always have ironed shirts,
and my coffee is always the way I like it!” </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;">Next, you’d ask your
butler to do other things, more complicated things. Pretty soon, you
might not be able to imagine your life without a butler.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><br /></span><span style="font-family: georgia; font-size: 12pt;">The
parable of the butler isn’t mine, of course. It is a rough
paraphrase of a story told by Michael Crichton in his 1983 book,
</span><a href="https://en.wikipedia.org/wiki/Electronic_Life" style="font-family: georgia;"><span style="color: #1155cc;"><span style="font-size: 12pt;"><i><u>Electronic
Life</u></i></span></span></a><span style="font-family: georgia; font-size: 12pt;">.
Crichton, more famous today for blockbusters like </span><span style="font-family: georgia; font-size: 12pt;"><i>Jurassic
Park</i></span><span style="font-family: georgia; font-size: 12pt;">,
</span><span style="font-family: georgia; font-size: 12pt;"><i>The
Andromeda Strain</i></span><span style="font-family: georgia; font-size: 12pt;">,
and </span><span style="font-family: georgia; font-size: 12pt;"><i>Westworld</i></span><span style="font-family: georgia; font-size: 12pt;">,
was writing about computers, specifically personal computers, back
then. Crichton correctly predicted that personal computers would
become ubiquitous, and the main goal of </span><span style="font-family: georgia; font-size: 12pt;"><i>Electronic
Life</i></span><span style="font-family: georgia; font-size: 12pt;">
was to help people become more comfortable with them. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia; font-size: 12pt;">The story of
the butler was a launching point for his broader argument that
personal computers were only going to get more useful with time, and
that now was the time to start adopting the technology. It worked,
too. Shortly after I read his book, I bought my first computer, a
</span><a href="https://en.wikipedia.org/wiki/Commodore_64" style="font-family: georgia;"><span style="color: #1155cc;"><span style="font-size: 12pt;"><u>Commodore
64</u></span></span></a><span style="font-family: georgia; font-size: 12pt;">.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">Today’s
Army faces much the same problem. The difference, of course, is that
the future presents today’s military with a much broader set of
options than it did in 1983. Today, it feels like the Army has been
given not one but hundreds of butlers. Quantum computing, artificial
intelligence, synthetic biology, 3D printing, robotics, nanotech, and
many more fields are arguably poised to rapidly and completely change
both the nature and character of warfare.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqXwJgyvBhEMSAnxyaB2Oh_uESMKkGN0xH-vtPI6djEvMUw7mLoEwWFIu0HoXFymZK_oDUr0CNz2-uaF3Q8zMLWnbYsssv-BFimF9BBFt4xfiEdKyD-PV7pFYg1xHdRWiWxDSn3HLXwKw/s640/640px-Diffusion_of_ideas.svg.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="480" data-original-width="640" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqXwJgyvBhEMSAnxyaB2Oh_uESMKkGN0xH-vtPI6djEvMUw7mLoEwWFIu0HoXFymZK_oDUr0CNz2-uaF3Q8zMLWnbYsssv-BFimF9BBFt4xfiEdKyD-PV7pFYg1xHdRWiWxDSn3HLXwKw/w400-h300/640px-Diffusion_of_ideas.svg.png" width="400" /></a></div><br />Despite
the deluge of options, the question remains the same: “What do I do
with this?”</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">The
answer begins with <a href="https://en.wikipedia.org/wiki/Diffusion_of_innovations"><span style="color: #1155cc;"><u>Diffusion
of Innovations theory</u></span></a>.
In his now-classic book of the same name, Everett Rogers first
defined the theory and the five types of adopters. <i>Innovators</i>, who
aggressively seek the “next big thing”, are the first to take up
a new product or process. <i>Early adopters</i> are the second group. Not
quite as adventurous as the innovators, the early adopters are still
primarily interested in acquiring new technology. <i>Early majority</i>
and <i>late majority adopters</i> sit on either side of the midpoint of a
bell-shaped adoption curve and represent the bulk of all possible
adopters. Finally come the <i>laggards</i>, who tend to adopt a new
innovation late or not at all.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2AePgW_hR4XQrDN7fRH7sowgdnLLqLUHt6lCuANunNUz1d0ZmXjKgU9OqOpLp9AVv-nPj16gbudxt91Z3-exL_RlBXZgBKV5REN48MF29g84xfaRalVPcScZWlp8ChfEMXaQ1Ai9M9b0/s640/48122863-1559228437977851.png" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="459" data-original-width="640" height="288" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2AePgW_hR4XQrDN7fRH7sowgdnLLqLUHt6lCuANunNUz1d0ZmXjKgU9OqOpLp9AVv-nPj16gbudxt91Z3-exL_RlBXZgBKV5REN48MF29g84xfaRalVPcScZWlp8ChfEMXaQ1Ai9M9b0/w400-h288/48122863-1559228437977851.png" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-family: georgia; font-size: x-small;"><i><span style="background-color: white; text-align: left;">(Source: </span><a href="https://www.blackrock.com/corporate/literature/whitepaper/bii-interpreting-innovation-us-version.pdf" rel="nofollow" style="background-color: white; box-sizing: border-box; color: #2867db; text-align: left; text-decoration-line: none;">BlackRock White Paper</a><span style="background-color: white; text-align: left;">)</span></i></span></td></tr></tbody></table><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">For
example, the uptake of smartphones (among many other innovations)
followed this pattern. In 2005, early in the smartphone's diffusion,
only about 2% of the population (the Innovators) owned one.
Three years later, market penetration had reached only 11%, but, from
2009 to 2014, the smartphone experienced double-digit growth each year
such that, by 2016, some<a href="https://www.comscore.com/Insights/Blog/US-Smartphone-Penetration-Surpassed-80-Percent-in-2016"><span style="color: #1155cc;"><u>
81% of all mobile phones were smartphones</u></span></a>.
This <a href="https://innospective.net/why-s-curves-are-probably-the-most-important-concept-in-entrepreneurship/"><span style="color: #1155cc;"><u>S
curve of growth</u></span></a>
is another aspect predicted by Diffusion of Innovations theory.<br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">Not
all innovations succeed, however. In fact, every industry is
littered with products that failed to achieve critical mass in terms
of adoption. While there are many reasons a venture might fail,
management consultant Geoffrey Moore, in his influential book,
<a href="https://en.wikipedia.org/wiki/Crossing_the_Chasm"><span style="color: #1155cc;"><i><u>Crossing
the Chasm</u></i></span></a>,
states that the most difficult leap is between the early adopters and
the early majority. Early adopters tend to be enthusiastic and eager
to try the next big thing. The early majority is more pragmatic and
is looking for a solution to a problem. This difference in
perspective accounts for much of the chasm.<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left;"><tbody><tr><td style="text-align: center;"><span style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><a href="https://innolution.com/blog/agile-adoption-across-the-enterprise-still-in-the-chasm"><img border="0" data-original-height="1240" data-original-width="1754" height="453" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6Zc5bI6vFdnRf5d-Ty5T1j2slg55zmfvsjId_SebtVXfrhrCF_inngvLRWuOQn8qQ3zceHH7ZsNqsh7egtaRIfwSVvsEXxQzn_GLBvOQTEVINwv5wMbRGgcrUAJVFgK7VJq_9bsPf8Q8/w640-h453/Crossing_The_Chasm_Annotated.png" width="640" /></a></span></td></tr><tr><td class="tr-caption" style="text-align: center;"><span style="font-size: x-small;"><i>Source: <a href="https://innolution.com/blog/agile-adoption-across-the-enterprise-still-in-the-chasm"> Agile Adoption Across the Enterprise – Still in the Chasm</a></i></span></td></tr></tbody></table><br /><div class="separator" style="clear: both; text-align: center;"><br /></div></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;">The
Army is aggressively addressing the innovation and early adoption
problem by developing sophisticated plans and tasking specific units
and organizations to implement them. The need to innovate is, for
example, at the heart of several recent policy
announcements, including the <a href="https://www.army.mil/e2/downloads/rv7/the_army_people_strategy_2019_10_11_signed_final.pdf"><span style="color: #1155cc;"><u>2019
Army People Strategy</u></span></a>
and the <a href="https://www.army.mil/e2/downloads/rv7/2019_army_modernization_strategy_final.pdf"><span style="color: #1155cc;"><u>2019
Army Modernization Strategy</u></span></a>.
Beyond planning, the Army is already far along in doing some of the
hard work of innovating. Indeed, organizations and projects as small
as TRADOC’s <a href="https://madsciblog.tradoc.army.mil/"><span style="color: #1155cc;"><u>Mad
Scientists</u></span></a>
and as large as the Army Futures Command <a href="https://www.defensenews.com/land/2019/05/17/us-armys-jumping-to-next-level-in-virtual-training-world/"><span style="color: #1155cc;"><u>Synthetic
Training Environment</u></span></a>
are examples that show that Army senior leaders understand the need
to innovate and are acting now to put early adoption plans into
motion.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">But
what about the rest of the Army? The part of the Army that isn’t
directly involved in innovation? The part that is not routinely
exposed to the next big thing? That hasn’t, to get back to the
original point, ever had a butler?</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">Again,
Diffusion of Innovations theory provides a useful guide. Rogers
talks about the five stages of the adoption process: <i>knowledge
(awareness), persuasion, decision, implementation, and confirmation</i>. For the rest
of the Army, awareness, and, to a lesser extent, persuasion, should
be the current goal. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;">While this may seem simple, in a world of hundreds of
butlers, it is anything but. With so many technologies poised to
influence the Army of the future, it becomes extremely difficult to
focus. Likewise, merely knowing the name of a technology or having
some vague understanding of what it is and what it does is not going
to be enough. No one in the Army would claim that you could learn to
fire a rifle effectively merely by watching YouTube videos, and the
same holds true for technologies like autonomous drones, 3D printing,
and robots.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">The
only way to engender true understanding of both the strengths and
weaknesses of an innovation is to provide a hands-on experience.
Cost alone should not be a significant impediment to exposing the
bulk of the Army to the technologies of the future. Autonomous
drones are now available for <a href="https://www.skydio.com/"><span style="color: #1155cc;"><u>under
$1000</u></span></a>,
entry level 3D printers can be had for as little as <a href="https://www.cnet.com/news/the-best-3d-printers-for-2019-for-beginners-and-budget-creators/"><span style="color: #1155cc;"><u>$200-$700</u></span></a>,
virtual reality headsets are available for <a href="https://www.pcgamer.com/best-vr-headset/"><span style="color: #1155cc;"><u>$300-$1,000</u></span></a>
and build your own robot kits are available for <a href="https://www.amazon.com/slp/lego-robotics-kits-for-kids/t6gd927facq38vq"><span style="color: #1155cc;"><u>a
couple of hundred bucks</u></span></a>. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;">None of these products are as sophisticated as the kinds of products
the Army is considering, of course, but putting simpler versions of
these technologies in the hands of soldiers today would likely
significantly improve the Army’s odds of being able to cross
Moore’s chasm between visionary thinking and pragmatic application
in the future.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">How
and where should the Army implement this effort to familiarize the
force with the future? Fortunately, the Army has a good place, a
good concept, and some prototypes already in place--at the library.
The Army library system contains <a href="https://higherlogicdownload.s3.amazonaws.com/SLA/ec95f0d5-acca-4a3d-b401-b10c0d4644f1/UploadedImages/Army-Libraries-December-2017.pdf"><span style="color: #1155cc;"><u>over
170 libraries</u></span></a>
worldwide. While many people continue to think of libraries as
silent spaces full of dusty books, the modern library has been
re-imagined as a place not only for knowledge acquisition but also <a href="https://www.theatlantic.com/magazine/archive/2016/03/the-library-card/426888/"><span style="color: #1155cc;"><u>as
tech centers for communities</u></span></a>.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">Nowhere
is this more clear than in the <a href="https://www.urbanlibraries.org/member-resources/makerspaces-in-libraries"><span style="color: #1155cc;"><u>“makerspaces”</u></span></a>
that are <a href="https://www.theatlantic.com/magazine/archive/2016/03/the-library-card/426888/"><span style="color: #1155cc;"><u>increasingly
woven</u></span></a>
into the fabric of modern libraries. Typically offering access to
equipment that, while relatively inexpensive, is outside the budget
of most households, or to technology that is best first experienced
in a hands-on, <a href="https://tomprof.stanford.edu/posting/418"><span style="color: #1155cc;"><u>peer
learning</u></span></a>
environment, makerspaces allow users to try out new technologies and
processes at their own pace and according to their own
interest. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;">3D printers, laser cutters, video and podcasting
equipment are often combined in these makerspaces with more
sophisticated traditional equipment such as high-end, programmable
sewing machines. Most times, however, the makerspace has been
tailored by the local librarians to meet the needs of the population
that the library serves. Indeed, the Army already has at least three examples of makerspaces in its library system, the <a href="https://www.army.mil/article/189187/makerspace_opens_doors_for_children"><span style="color: #1155cc;"><u>Barr
Memorial Library at Fort Knox</u></span></a>, the <a href="https://www.elpasotimes.com/story/news/military/ft-bliss/2017/01/05/bliss-library-has-makerspace-crafts-projects/95723714/"><span style="color: #1155cc;"><u>Mickelsen
Community Library at Fort Bliss</u></span></a> and <a href="https://www.facebook.com/USAWC/posts/check-out-how-one-part-of-the-barracks-using-their-technical-know-how-to-help-an/10162673030163682/">The Forge at the US Army War College.</a></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">Imagine
being able to go to the post library and check out an autonomous
drone for the weekend, or to sit down and 3D print relief maps of
the terrain you were going to cover on your next hike. Understanding
the basics of these new technologies will not only make the future
force more comfortable with them but also allow soldiers to think
more robustly about how to employ these technologies to the Army’s
advantage.</span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"></span><span style="font-family: georgia;"><br /></span><span style="font-family: georgia;">While
the cost of such a venture would be reasonable, acquiring the funding
for any effort on the scale of the whole Army cannot be taken for
granted. More challenging, perhaps, would be the process of
repurposing the space, training staff, and rolling out the
initiative. </span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;"><br /></span></div><div style="margin-bottom: 0in; orphans: 2; widows: 2;"><span style="font-family: georgia;">But what is the alternative? To the extent that the
Army, <a href="https://www.army.mil/e2/downloads/rv7/the_army_people_strategy_2019_10_11_signed_final.pdf"><span style="color: #1155cc;"><u>as
the 2019 People Strategy outlines</u></span></a>,
needs people at all levels “who add value and increase productivity
through creative thinking and innovation,” it seems imperative that
the Army also have a whole-of-Army approach to innovation. To fail
to do so risks falling into Moore’s chasm, where the best-laid
plans of the visionaries and early adopters fall victim to unprepared
pragmatists that will always make up the bulk of the Army.</span><span style="font-family: georgia;"><br /></span></div>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-12949824647361602422020-12-09T11:31:00.000-05:002020-12-09T11:31:54.974-05:00The BPRT Heuristic: Or How To Think About Tech Trends<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5gQ9BBeGPyUfAyC3v-mWyxwZqd1i4fR2WiI5eOAwwMDV3h7KT1VciStZ_XpnGFpzk3uKczYQx0hqYSQsiNxB59yS3zkCpymk-7bq78dtI10GEqnIvliDj1nxO5182ElKPizJlb6eKXHQ/" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img alt="" data-original-height="408" data-original-width="406" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5gQ9BBeGPyUfAyC3v-mWyxwZqd1i4fR2WiI5eOAwwMDV3h7KT1VciStZ_XpnGFpzk3uKczYQx0hqYSQsiNxB59yS3zkCpymk-7bq78dtI10GEqnIvliDj1nxO5182ElKPizJlb6eKXHQ/w341-h342/BPRT+model.png" width="341" /></a></div><span style="background-color: white;"><span style="font-family: georgia;">A number of years ago, one of my teams was working on a series of technology trend projects. As we looked deeply at each of the trends, we noticed that there was a pattern in the factors that seemed to be influencing the direction a particular tech trend would take. We gave that pattern a name: the BPRT Heuristic. </span></span><div><span style="background-color: white;"><span style="font-family: georgia;"><br /></span></span></div><div><span style="background-color: white;"><span style="font-family: georgia;">Tech trends are always interesting to examine, so I wanted to share this insight to help you get started thinking about any developing or emerging techs you may be following. 
</span></span><p></p><div style="background-color: white; border: 0px; font-stretch: inherit; font-variant-east-asian: inherit; font-variant-numeric: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;"><i><span style="font-family: georgia;">Caveat: We called it a heuristic for a reason. It isn't a law or even a model of tech trend analysis. It is just a rule of thumb--not always true but true enough to be helpful.</span></i></div><div style="background-color: white; border: 0px; font-stretch: inherit; font-variant-east-asian: inherit; font-variant-numeric: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;"><ul style="text-align: left;"><li><span style="font-family: georgia;"><i>B=the Business Case for the tech.</i> This is how someone can make money off the tech. <a data-auth="NotApplicable" href="https://www.ncses.nsf.gov/pubs/nsb20201/u-s-r-d-performance-and-funding" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.ncses.nsf.gov/pubs/nsb20201/u-s-r-d-performance-and-funding">Most R and D is funded by companies these days</a> (this was not always the case). These companies are much more likely to fund techs that can contribute to a revenue stream. This doesn't mean that a tech without an obvious business case can't get developed and funded, it just makes it harder.</span></li><li><span style="font-family: georgia;"><i>P=Political/Cultural/Social issues with a tech.</i> A tech might be really cool and have an excellent business case, but because it crosses some political or social line, it either goes nowhere or accelerates much more quickly than it might normally. Three examples: </span></li><ul><li><span style="font-family: georgia;">We were looking at 3G adoption in a country early in the 2000's. 
There were lots of good reasons to suspect that it was going to happen, until we learned that the President's brother owned the 2G network already in existence in the country. He was able to use his family connections to keep competition out of the country. </span></li><li><span style="font-family: georgia;">A social factor that delayed adoption of a tech is the story of <a data-auth="NotApplicable" href="https://en.wikipedia.org/wiki/Google_Glass" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://en.wikipedia.org/wiki/Google_Glass">Google Glass</a> in 2013. Privacy concerns driven by the possibility of videos taken without consent led to users being called "Glassholes." Coupled with other performance issues, this led to the discontinuation of the original product (though it lives on in Google's attempts to enter the augmented reality market). </span></li><li><span style="font-family: georgia;">Likewise, these social or cultural issues can positively impact tech trends as well. For example, we have all had to become experts at virtual communication almost overnight due to the COVID crisis--whether we wanted to or not.</span></li></ul><li><span style="font-family: georgia;"><i>R=Regulatory/Legal issues with the tech.</i> The best example I can think of here is electromagnetic <a data-auth="NotApplicable" href="https://en.wikipedia.org/wiki/Spectrum_management" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://en.wikipedia.org/wiki/Spectrum_management">spectrum management</a>. Certain parts of the electromagnetic spectrum have been allocated to certain uses. 
If your tech can only work in a part of the spectrum owned by someone else, you're out of luck. Some of this "regulation" is not government-sponsored, either. The <a data-auth="NotApplicable" href="https://www.ieee.org/about/ieee-history.html" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.ieee.org/about/ieee-history.html">Institute of Electrical and Electronics Engineers</a> establishes common standards for most devices in the world. For example, your wifi router can connect to any wifi-enabled devices because they all use the <a data-auth="NotApplicable" href="https://homenetworkadmin.com/wireless-b-vs-g-vs-n-vs-ac-difference/" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://homenetworkadmin.com/wireless-b-vs-g-vs-n-vs-ac-difference/">IEEE's 802.11 standard for wifi</a>. 
Other regulations come from the <a data-auth="NotApplicable" href="https://www.fcc.gov/" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.fcc.gov/">Federal Communications Commission</a> and the <a data-auth="NotApplicable" href="https://www.itu.int/en/Pages/default.aspx" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.itu.int/en/Pages/default.aspx">International Telecommunications Union</a>.</span></li><li><span style="font-family: georgia;"><i>T=The tech itself. </i> This is where most people spend most of their time when they study tech trends. It IS important to understand the strengths and weaknesses of a particular technology, but as discussed above, it might not be as important as other environmental factors in the eventual adoption (or non-adoption...) of a tech. That said, there are a couple of good sources of info that can allow you to quickly triangulate on the strengths and weaknesses of a particular tech:</span></li><ul><li><span style="font-family: georgia;">Wikipedia. Articles are typically written from a neutral point of view and often contain numerous links to other, more authoritative sources. It is not a bad place to start your research on a tech. 
</span></li><li><span style="font-family: georgia;">Another good place is Gartner, particularly the <a data-auth="NotApplicable" href="https://www.gartner.com/en/research/methodologies/gartner-hype-cycle" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.gartner.com/en/research/methodologies/gartner-hype-cycle">Gartner Hype Cycle</a>. I'll let you read the article at the link but "Gartner Hype Cycle 'insert name of tech here'" is almost always a useful search string (<a data-auth="NotApplicable" href="https://www.gartner.com/smarterwithgartner/2-megatrends-dominate-the-gartner-hype-cycle-for-artificial-intelligence-2020/" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.gartner.com/smarterwithgartner/2-megatrends-dominate-the-gartner-hype-cycle-for-artificial-intelligence-2020/">Here's what you get for AI for example...</a>). </span></li><li><span style="font-family: georgia;">Likewise, you should keep your eye out for articles about "grand challenges" in a particular tech (<a data-auth="NotApplicable" href="https://singularityhub.com/2018/02/06/the-10-grand-challenges-facing-robotics-in-the-next-decade/" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://singularityhub.com/2018/02/06/the-10-grand-challenges-facing-robotics-in-the-next-decade/">Here is one about grand challenges in robotics as an example</a>). 
Grand Challenges outline the 5-15 big things the community of interest surrounding the tech has to figure out to take the next steps forward. </span></li><li><span style="font-family: georgia;">Also keep an eye out for "roadmaps." These can be either informal or formal (<a data-auth="NotApplicable" href="https://www.nasa.gov/sites/default/files/atoms/files/2015_nasa_technology_roadmaps_ta_4_robotics_and_autonomous_systems_final.pdf" rel="noopener noreferrer" style="border: 0px; font-stretch: inherit; font-style: inherit; font-variant: inherit; font-weight: inherit; line-height: inherit; margin: 0px; padding: 0px; vertical-align: baseline;" target="_blank" title="https://www.nasa.gov/sites/default/files/atoms/files/2015_nasa_technology_roadmaps_ta_4_robotics_and_autonomous_systems_final.pdf">like this one from NASA on Robotics and autonomous systems</a>). The roadmaps and the lists of grand challenges should have some overlap, but they are often presented in slightly different ways.</span></li></ul></ul><span style="font-family: georgia;">Obviously, the BPRT Heuristic is not the answer to all your tech trend questions. In providing a quick, holistic approach to tech trend analysis, it does, however, allow you to avoid many of the problems associated with too much hype. </span></div></div>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-76679482385045370402020-09-21T10:37:00.000-04:002020-09-21T10:37:55.421-04:00Cyber Teachers! 
Here's A Cool Resource You Should Know About...<p></p><div class="separator" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><a href="https://csl.armywarcollege.edu/USACSL/Publications/Strategic_Cyberspace_Operations_Guide.pdf" target="_blank"><img border="0" data-original-height="695" data-original-width="569" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrdh5-WKiM2YQ9aPiSLSlw-pVUh093ySND4ANCdbQjzqTnUnI668WNejVt9ZjlY126Ij9sMCfyqALh_RTwISNyiX4uXzTDsn_Jj3Q49PhZ-WCBdUDL95GtgqUNa7MgOtNfLqWOStFtvp4/s320/cyber+cover.PNG" /></a></div><span style="font-family: georgia;">A couple of my colleagues in the cyber department here at the <a href="https://csl.armywarcollege.edu/" target="_blank">Center for Strategic Leadership</a> at the US Army War College have put together a very handy resource for anyone working or teaching cyber or cyber-related issues: <a href="https://csl.armywarcollege.edu/USACSL/Publications/Strategic_Cyberspace_Operations_Guide.pdf" target="_blank">The Strategic Cyberspace Operations Guide.</a><br /></span><p></p><p><span style="font-family: georgia;">Nothing in the guide should be particularly new to experienced cyber instructors. It is still extraordinarily useful as it puts everything together in one package. As the authors said themselves, "It combines U.S. Government Unclassified and Releasable to the Public documents into a single guide." </span></p><p><span style="font-family: georgia;">The 164 page document contains six chapters:</span></p><p></p><ul style="text-align: left;"><li><span style="font-family: georgia;">Chapter 1 provides an overview of cyberspace operations, operational design
methodology, and joint planning and execution. </span></li><li><span style="font-family: georgia;">Chapter 2 includes a review of operational design doctrine and applies these principles
to the cyberspace domain. </span></li><li><span style="font-family: georgia;">Chapter 3 reviews the joint planning process and identifies cyberspace operations
planning concerns. </span></li><li><span style="font-family: georgia;">Chapter 4 describes cyberspace operations during the execution of joint operations. </span></li><li><span style="font-family: georgia;">Chapter 5 provides an overview of cyberspace operations in the homeland. </span></li><li><span style="font-family: georgia;">Chapter 6 includes a case study on the Russian – Georgian conflict in 2008 with a focus on cyberspace operations.</span></li></ul><div><span style="font-family: georgia;">I found the entire document to be very well edited and presented. It was about as easy a read as this sort of thing can be. Most importantly, it did the really hard work of getting it all into a single package. Recommended!</span></div><div><span style="font-family: georgia;"><br /></span></div><div><span style="font-family: georgia;"><i>(Reader's Note: As always, the views expressed in this blog are my own random musings and do not represent any official positions.)</i></span></div><p></p>Kristan J. 
Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-3264935358143408142020-05-26T09:06:00.000-04:002020-09-21T09:53:40.466-04:00Book Review: Burn-In, A Glimpse Into The Future Of Man-Machine Teaming<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmmSIzQdLPb04SSwXopTtt0DIYVK1Fjt2411e8prmtifBIzFPYEw5z95KM27KQEru4IgJgAk7y_LhxErHbIWS2pIXRSFIUqiqXrnSH7Jb2Y-hL07Xo1_y7P3-Xs_jH2DvPbkIjUHckvSo/s1600/BURN-IN.Singer.Cole.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1600" data-original-width="1060" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmmSIzQdLPb04SSwXopTtt0DIYVK1Fjt2411e8prmtifBIzFPYEw5z95KM27KQEru4IgJgAk7y_LhxErHbIWS2pIXRSFIUqiqXrnSH7Jb2Y-hL07Xo1_y7P3-Xs_jH2DvPbkIjUHckvSo/s400/BURN-IN.Singer.Cole.jpg" width="263" /></a><span style="font-family: Georgia, Times New Roman, serif;"><i>(Note: A colleague of mine, Kelly Ivanoff, came to me a few weeks ago with a review--a really well-written review--for the new thriller by Singer and Cole called <a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a>. I don't have a lot of guest bloggers, but I knew that SAM's audience would be interested in the book, and I told Kelly I would be happy to publish the review. Over the next couple of weeks, Kelly got me an advance copy of the book, and I have been reading it myself (I knew 12 years of blogging would have to be good for something, someday...). </i></span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><i><br /></i></span>
<span style="font-family: Georgia, Times New Roman, serif;"><i>So, who is Kelly Ivanoff and what qualifies him to comment on the future of AI, machine learning and robots? Check this bio out:</i></span><br />
<blockquote class="tr_bq">
<span style="font-family: Georgia, Times New Roman, serif;"><i>Colonel Kelly Ivanoff presently serves at the United States Army War College. His previous assignment was as the Executive Officer to the Director, Army Capabilities Integration Center (ARCIC), the predecessor of today’s Army Futures Command. He’s a veteran of three combat deployments and has four years of experience specifically working future force-related efforts including concept development and force design.</i></span></blockquote>
<span style="font-family: Georgia, Times New Roman, serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><i>Boom. Mic drop. Let's get to the review...Oh, and none of this is the official position of the Department of Defense or the Army. It's all just Kelly, me, and our opinions. Also, </i></span><i>I'll add my two cents on the book after you're done reading what Kelly has to say.</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">By Kelly Ivanoff</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">The United States Army sees great potential in artificial intelligence and robotics to significantly impact outcomes in future combat operations. Army General John “Mike” Murray was recently quoted in <a href="https://breakingdefense.com/2020/03/no-timeout-in-future-wars-army-gen-murray-exclusive/">Breaking Defense</a>, “If you’re talking about future ground combat, you’re not talking tens of thousands of sensors…We’ve got that many in Afghanistan, right now. You’re talking hundreds of thousands if not millions of sensors.” Murray later wondered, “How do you make sense of all that data for human soldiers and commanders?” His answer: machine learning and artificial intelligence.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Best-selling authors P.W. Singer and August Cole must have the same convictions as senior Army leaders. Their new book, <i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution/dp/1328637239/ref=tmm_hrd_swatch_0?_encoding=UTF8&qid=&sr=">Burn-In</a></i> is a riveting work of fiction, set approximately ten to fifteen years in the future, with real world, present-day implications concerning the great potential of robotics, artificial intelligence, and man-machine teaming. They offer prophetic examples of how the military might harness and exploit the potential of these evolving technologies to improve situational understanding, “make sense of all that data,” and make better decisions. Importantly, they vividly describe scenarios that stimulate imagination and allow consideration of challenges similar to those prioritized by General Murray and his team at Army Futures Command.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a></i> presents the story of FBI agent Laura Keegan, a former United States Marine Corps robot handler, who is tasked to team with a robot partner to test the limits of man-machine teaming; in other words, to conduct a "burn-in." Beginning with a series of controlled experiments and exercises, Keegan attempts to better understand the advanced robot she’s been provided: a TAMS (tactical autonomous mobility system). The tests are designed to explore the robot’s physical agility and its ability to learn and, as a result, improve its own capability. The tests also challenge Agent Keegan to expand her imagination for the employment of robots and build her trust in artificial intelligence and machine autonomous operations. The tests are halted due to a series of what seem to be unrelated disasters that inflict great damage and kill thousands of people in the national capital region. It quickly becomes apparent that the disasters were no accident. In response, Keegan and TAMS embark on a thrilling, action-packed race to identify, locate, and stop a revenge-motivated murderer who caused the destruction. Through this mentally and environmentally stressful period, Agent Keegan overcomes her biases and comes to embrace man-machine teaming and the use of artificial intelligence in problem solving and decision making. Ultimately, through their portrayal of this fictional story, Singer and Cole reveal numerous real-world opportunities and challenges surely inherent in our near future. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a></i> is much more than just a riveting story. Singer and Cole creatively advance important concepts about the use of robotics and artificial intelligence in defense and security-related professions. Much can be learned from their work. <i>Burn-In</i> brilliantly describes example scenarios pertaining to three of the four “initial thrusts” of the Army’s newly established <a href="https://armyfuturescommand.com/aitf/">Artificial Intelligence Task Force</a>; those three being Intelligence Support, Automated Threat Recognition, and Predictive Maintenance (the fourth being Human Resources / Talent Management). The authors also provide examples related to all of the additional <a href="https://armyfuturescommand.com/wp-content/uploads/2019/11/AITF-Scope-Areas.pdf">Areas of Interest</a> identified in a recent call for whitepapers issued by the Army Artificial Intelligence Task Force. <i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a></i> is important for the vividly described problem-centered scenarios and the conceptual solutions offered. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a></i> is an exceptional read, and it should be a centerpiece in the library of aspiring senior military leaders, defense officials, and those involved in military modernization efforts. Its value lies in its description of the world as it will be. Just as the scientist and author Isaac Asimov once argued, “It is change, continuing change, inevitable change, that is the dominant factor in society today. No sensible decision can be made any longer without taking into account not only the world as it is, but the world as it will be”. For this reason, military leaders and those engaged in the development of military technologies and operational doctrines should read this book. It will stimulate ideas about the future operational environment and offer conceptual solutions to the inherent challenges. Beyond the aforementioned professional reasons, read <i><a href="https://www.amazon.com/Burn-Novel-Real-Robotic-Revolution-ebook/dp/B07T4KLT74">Burn-In</a></i> for the sheer enjoyment of a well-told story. It will not disappoint. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>My two cents: I like the book, too! It reminds me of some of the early work by Tom Clancy or Ralph Peters (my favorites!), and I suspect it will have that same kind of effect on military and government professionals who read it. </i></span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-39136883129915273612020-03-05T10:00:00.000-05:002020-03-05T10:00:00.532-05:00The Coronavirus Chart That Scares Me The Most<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKpml_kmrLZ8RYZGfASKKSEf-6TcyG7ZD43WCxsPPGAJDMRwN5aKDLErly11bWx3kr14NGljthDRubdqIPSJd8RCdyUa6bTLbe5LI0iUyT10B-W44avW6trFh5hVPBplOuxTI3HawtB9o/s1600/Annotation+2020-03-04+132129.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="338" data-original-width="478" height="451" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKpml_kmrLZ8RYZGfASKKSEf-6TcyG7ZD43WCxsPPGAJDMRwN5aKDLErly11bWx3kr14NGljthDRubdqIPSJd8RCdyUa6bTLbe5LI0iUyT10B-W44avW6trFh5hVPBplOuxTI3HawtB9o/s640/Annotation+2020-03-04+132129.png" width="640" /></a></div>
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">There are lots of sites that track the coronavirus, COVID-19. One of my favorites is <a href="https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6">the one put together by Johns Hopkins</a>. There is lots of data there, but the chart that scares me the most is buried in the bottom right corner of the site. The default view shows the actual number of cases reported from mainland China, from the rest of the world, and then, more hopefully, the number of people who have fully recovered. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It's a good chart but not the one that frightens me. You have to click the little tab that says "logarithmic" to get to the one that makes my hair a little more grey. If you then turn off the "Mainland China" button and the "Total Recovered" button, you get the chart that sends me running for Purell and a face mask. You can see what it looks like at the top of the page.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It shows the number of cases worldwide outside of China. What makes it so frightening is that it is a logarithmic scale. That means that the Y-axis doesn't increase by equal steps. Instead, each step up the Y-axis represents a ten-fold increase in whatever you are measuring. In other words, you aren't counting 1, 2, 3. You are counting 10, 100, 1000.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">If you mouse over the yellow dots you can see the dates certain milestones were hit. For example, the world hit 100 (10 X 10) cases (plus a few) outside of China on January 29, 2020. See the picture below:</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFnh5jyJg_p2uc13K0S9YlqkyFRnZV9tB5n8W1f6IpFbbkqiLl6m5nk2OU0X8WJJMvpCk2D44D1Fmz41TnueFwhJmlb1KtDR0uk0oJpVwG5cO09L4ivXEe5iBARsQvHQ8zHH8a0UDn2lg/s1600/Annotation+2020-03-04+131943.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="345" data-original-width="482" height="458" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFnh5jyJg_p2uc13K0S9YlqkyFRnZV9tB5n8W1f6IpFbbkqiLl6m5nk2OU0X8WJJMvpCk2D44D1Fmz41TnueFwhJmlb1KtDR0uk0oJpVwG5cO09L4ivXEe5iBARsQvHQ8zHH8a0UDn2lg/s640/Annotation+2020-03-04+131943.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<span style="font-family: "georgia" , "times new roman" , serif;"> About 19 days later, we hit 1000 (10 X 10 X 10) cases (See below):</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKF4qep_fjftqF8DaFYnAu2oVNDyR2tzSRW7giswJXvzh2vAJ8o-_TdRPS1MeUBaOwOWPIpC_oQrvbqceclUNV-cpsscOl21JFPwjqwGbYYK19WA1zDseaYlBjTSI_AbCsaCkHSUQQCAc/s1600/Annotation+2020-03-04+132023.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="337" data-original-width="474" height="454" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKF4qep_fjftqF8DaFYnAu2oVNDyR2tzSRW7giswJXvzh2vAJ8o-_TdRPS1MeUBaOwOWPIpC_oQrvbqceclUNV-cpsscOl21JFPwjqwGbYYK19WA1zDseaYlBjTSI_AbCsaCkHSUQQCAc/s640/Annotation+2020-03-04+132023.png" width="640" /></a></div>
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">Then, only 13 days after that, we hit 10,000 cases (10 X 10 X 10 X 10):</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9vRbL3CBmeAc_rj9AwWZMF1xpPDnJAi05HhuGz5XUFsQc5MS8wTY66nI-iDYxz7h3puDIYYMh4Hi8CdgsvdvpPpf4ZmKB3iaP5p1C-yK22_H0mNHeneXMGV7LnJR2e_qqx0mjrbxLVk0/s1600/Annotation+2020-03-04+132058.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="339" data-original-width="476" height="454" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9vRbL3CBmeAc_rj9AwWZMF1xpPDnJAi05HhuGz5XUFsQc5MS8wTY66nI-iDYxz7h3puDIYYMh4Hi8CdgsvdvpPpf4ZmKB3iaP5p1C-yK22_H0mNHeneXMGV7LnJR2e_qqx0mjrbxLVk0/s640/Annotation+2020-03-04+132058.png" width="640" /></a></div>
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">Unchecked, this implies that there will likely be 100,000 cases outside of China by about March 17, 2020 and--here's the shocker--a million cases by the end of the month. You can do the math after that.</span><br />
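The back-of-the-envelope math above can be sketched in a few lines of Python. The milestone dates are the ones read off the screenshots; the roughly 16-day interval per ten-fold jump is just the average of the observed 19- and 13-day gaps, not a model of the outbreak:

```python
from datetime import date, timedelta

# Milestones read off the Johns Hopkins chart (see screenshots above)
milestones = [
    (date(2020, 1, 29), 100),     # ~100 cases outside mainland China
    (date(2020, 2, 17), 1_000),   # ~19 days later
    (date(2020, 3, 1), 10_000),   # ~13 days after that
]

# Average number of days per ten-fold increase observed so far
gaps = [(later[0] - earlier[0]).days
        for earlier, later in zip(milestones, milestones[1:])]
avg_gap = sum(gaps) / len(gaps)   # (19 + 13) / 2 = 16.0 days

# Naive extrapolation, assuming growth continues unchecked
last_date, last_cases = milestones[-1]
for step in (1, 2):
    projected = last_date + timedelta(days=round(avg_gap * step))
    print(f"{last_cases * 10**step:>9,} cases by ~{projected}")
```

Run as written, this naive extrapolation lands on roughly 100,000 cases by March 17 and a million around the start of April.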
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Unchecked. That's the operative word in the last sentence. China got to about 80,000 cases before they managed to turn the corner. To get there meant taking extreme measures (like closing down a city larger than New York).</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It's hard for me to imagine it getting that bad, that quickly, but that's what scares me--the math don't lie.</span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com1tag:blogger.com,1999:blog-2569772432953120875.post-11553381960615588202020-01-02T10:42:00.000-05:002020-01-02T10:42:05.569-05:00How To Think About The Future: A Graphic Prologue<span style="font-family: Georgia, Times New Roman, serif;"><i>(Note: I have been writing bits and pieces of "How To Think About the Future" for some time now and publishing those bits and pieces here for early comments and feedback. As I have been talking to people about it, it has become clear that there is a fundamental question that needs to be answered first: </i>Why<i> learn to think about the future?</i></span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><i><br /></i></span>
<span style="font-family: Georgia, Times New Roman, serif;"><i>Most people don't really understand that thinking about the future is a skill that can be learned--and can be improved upon with practice. More importantly, if you are making strategic decisions, decisions about things that are well outside your experience, or decisions under extreme uncertainty, being skilled at thinking about the future can significantly improve the quality of those decisions. Finally, being able to think effectively about the future allows you to better communicate your thoughts to others. You don't come across as someone who "is just guessing." </i></span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><i><br /></i></span>
<span style="font-family: Georgia, Times New Roman, serif;"><i>I wanted to make this case visually (mostly just to try something new). Randall Munroe (<a href="https://xkcd.com/1732/">XKCD</a>) and Jessica Hagy (<a href="https://thisisindexed.com/author/jessicahagy/page/97/">Indexed</a>) both do it much better of course, but a tip of the hat to them for inspiring the style below. It is a very long post, but it is a quick read; just keep scrolling!</i></span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><i><br /></i></span>
<span style="font-family: Georgia, Times New Roman, serif;"><i>As always, thanks for reading! I am very interested in your thoughts on this...)</i></span><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyWHVCbxhwVjIY-qbc2F6Sf_rIh_8ejYqUV7y_DdZSVc4T5Mj9zbuANSwXbTe6bJnBcE7BheYNrHqdzGQAFPOeeoYADX_iZNJKw-G97_aCkMNsVQ6BGFYPUq7X-slseid7ABxAY3khlSA/s1600/Slide1.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyWHVCbxhwVjIY-qbc2F6Sf_rIh_8ejYqUV7y_DdZSVc4T5Mj9zbuANSwXbTe6bJnBcE7BheYNrHqdzGQAFPOeeoYADX_iZNJKw-G97_aCkMNsVQ6BGFYPUq7X-slseid7ABxAY3khlSA/s640/Slide1.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXhMqYEzy2FDoojLHj4HPkznLZ8V9vJahCluPr2AlPhqsNWrO1oTN2hH5V3q2bAoJapLdrJG6nWvmHnBIrfYZkM90Kyk1FYRn2rH34o9axqoTeZ8P9vGu_4BL_MLJdFWfRd1oOIcYJtJU/s1600/Slide2.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXhMqYEzy2FDoojLHj4HPkznLZ8V9vJahCluPr2AlPhqsNWrO1oTN2hH5V3q2bAoJapLdrJG6nWvmHnBIrfYZkM90Kyk1FYRn2rH34o9axqoTeZ8P9vGu_4BL_MLJdFWfRd1oOIcYJtJU/s640/Slide2.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEcNSqZR_B5tFAp4B1aNJgSVPPQ5nXetXvpetkWFF_BhHwb8uxWWp29qaQ2AqYhREy3Mit6LLnqBRHxhiaIIO0mH7tU-FAN1z5LhNPKn76bqNrncbSiFL3_HyRba7JkDwfEyu48lI-558/s1600/Slide3.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjEcNSqZR_B5tFAp4B1aNJgSVPPQ5nXetXvpetkWFF_BhHwb8uxWWp29qaQ2AqYhREy3Mit6LLnqBRHxhiaIIO0mH7tU-FAN1z5LhNPKn76bqNrncbSiFL3_HyRba7JkDwfEyu48lI-558/s640/Slide3.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht-lheKqPmAefxJbphT29bxmKs2dTdNgIIXJjfbN6RItxe4D5bqd8xGs1dfyGe_BuPPWzxli2c5TbKIPBFV2lucyCNaIrvXFD_Sa-DrV5uYBnUnVtgyqIM66TMHjmV40IwJfnaCccXaek/s1600/Slide4.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEht-lheKqPmAefxJbphT29bxmKs2dTdNgIIXJjfbN6RItxe4D5bqd8xGs1dfyGe_BuPPWzxli2c5TbKIPBFV2lucyCNaIrvXFD_Sa-DrV5uYBnUnVtgyqIM66TMHjmV40IwJfnaCccXaek/s640/Slide4.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFXAJgXI3etCUDUwlIQ-8Rp7qwx80ctrHg4Hj1bWbmKAeSC1kugejfE_bbGDoANJch7RrwPOdHyn44UEpY21jJhjYIrDitrdmTH7qBTxIGQaXEB29BpzClc8nfQ2hxU3w1E7qusPJKGKI/s1600/Slide5.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFXAJgXI3etCUDUwlIQ-8Rp7qwx80ctrHg4Hj1bWbmKAeSC1kugejfE_bbGDoANJch7RrwPOdHyn44UEpY21jJhjYIrDitrdmTH7qBTxIGQaXEB29BpzClc8nfQ2hxU3w1E7qusPJKGKI/s640/Slide5.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoLTMTBkENVD84_in2clD4UU-gMNeskp3qYjS331CYxF8v-brc2Sh6IYQ5GdJUdPgFvjwjqnsteZlxcahNCMemDwv7DzATmF4RcOSvtgevgNBcfZ7o68Tqw29FtME2Djm82HZDZMDdOQw/s1600/Slide6.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjoLTMTBkENVD84_in2clD4UU-gMNeskp3qYjS331CYxF8v-brc2Sh6IYQ5GdJUdPgFvjwjqnsteZlxcahNCMemDwv7DzATmF4RcOSvtgevgNBcfZ7o68Tqw29FtME2Djm82HZDZMDdOQw/s640/Slide6.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXSn_UsbDq0w0tRe-KSzaDcaNCyjSVHq3ggjxJpF78minR2XPZULMUT7z8-uFbntL6Q7BbzJ_IVm3shMLS8Oa6Q-QB_BIQaCdzWI1GNsKmeNkp2bEDLe-MQpvSSpZoDFIAjPmdf_-mVBM/s1600/Slide7.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXSn_UsbDq0w0tRe-KSzaDcaNCyjSVHq3ggjxJpF78minR2XPZULMUT7z8-uFbntL6Q7BbzJ_IVm3shMLS8Oa6Q-QB_BIQaCdzWI1GNsKmeNkp2bEDLe-MQpvSSpZoDFIAjPmdf_-mVBM/s640/Slide7.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBXVbclTjucwiHYJ0TswxwF5Suxc_dSw27m3x5rWrSZOxHogAQ0GPDy0YOSK4xJK3_F70WU2lu1F76jjIUjpYRPx97zj0qNhJQnlCAGZm9B_fpJToZoZidU_ahoii5gClpfii8mYCALHs/s1600/Slide8.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBXVbclTjucwiHYJ0TswxwF5Suxc_dSw27m3x5rWrSZOxHogAQ0GPDy0YOSK4xJK3_F70WU2lu1F76jjIUjpYRPx97zj0qNhJQnlCAGZm9B_fpJToZoZidU_ahoii5gClpfii8mYCALHs/s640/Slide8.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhle1g5pGqGwOklf-rlD3nIePlW2uZyWmrycVMMdNXh9U-cniASxH-2nfX1HVJOKqbbQ0L5_paNtzajRv8AED4o79CT1O7nmlug_jKW_VQk4c1VuZ5018hT0rxXZ37eAcapupBT7umFyPY/s1600/Slide9.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhle1g5pGqGwOklf-rlD3nIePlW2uZyWmrycVMMdNXh9U-cniASxH-2nfX1HVJOKqbbQ0L5_paNtzajRv8AED4o79CT1O7nmlug_jKW_VQk4c1VuZ5018hT0rxXZ37eAcapupBT7umFyPY/s640/Slide9.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRovpovSpMZAKyw8ExRMoeByM-5Gs3cu_DY0gzHa1fzJgr5U2KyNVfYZAbEB8lvzszvS6ZM-Q-Q4Jtz9IVNq-Zo5vzmqPo5hDfUu6tioQMaeC3V630wYR9IRl3GkUvblIr2AVDJ5GfOsQ/s1600/Slide10.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgRovpovSpMZAKyw8ExRMoeByM-5Gs3cu_DY0gzHa1fzJgr5U2KyNVfYZAbEB8lvzszvS6ZM-Q-Q4Jtz9IVNq-Zo5vzmqPo5hDfUu6tioQMaeC3V630wYR9IRl3GkUvblIr2AVDJ5GfOsQ/s640/Slide10.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsDkeNDdRrjNdNu34pZCvbJutVLg5BhN_IcP3yx-7LyEgpo2LZ4rzwGuq8B6y2Mex48DGpUTu_wemHYdrM3IRgcGxnDbtUb2HYKUSbY67TdBGyrshDiUoVqDI3K5mGRFAkrSHVN3eEC8o/s1600/Slide11.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgsDkeNDdRrjNdNu34pZCvbJutVLg5BhN_IcP3yx-7LyEgpo2LZ4rzwGuq8B6y2Mex48DGpUTu_wemHYdrM3IRgcGxnDbtUb2HYKUSbY67TdBGyrshDiUoVqDI3K5mGRFAkrSHVN3eEC8o/s640/Slide11.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_NAuWiXLjHKEKL9zPrYL7o7_KZ6Am8Vx80EZIEp5JWInPvg23F3WBe5dRIID5_BWQA76z_8qBSFtIMhKMHzRM_9oMRCKV8h-cHFPgB1Z2CPGe-ZJmN2KhWV25UPIpTT5gJCzi0Mi_-hA/s1600/Slide12.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_NAuWiXLjHKEKL9zPrYL7o7_KZ6Am8Vx80EZIEp5JWInPvg23F3WBe5dRIID5_BWQA76z_8qBSFtIMhKMHzRM_9oMRCKV8h-cHFPgB1Z2CPGe-ZJmN2KhWV25UPIpTT5gJCzi0Mi_-hA/s640/Slide12.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxx2ZjCXSR9_dXDKlQOja2-GBvRRTrqOUcw2zObl-OE74HqZ-KonDEuMpvh2Um0rtMi36DSFvnC2RTKYhXefmXmO2VI3RZnz3YhW_nNuyKtMeKhZsTXaifPrgNsON_RbVmDnssTssvF9w/s1600/Slide13.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxx2ZjCXSR9_dXDKlQOja2-GBvRRTrqOUcw2zObl-OE74HqZ-KonDEuMpvh2Um0rtMi36DSFvnC2RTKYhXefmXmO2VI3RZnz3YhW_nNuyKtMeKhZsTXaifPrgNsON_RbVmDnssTssvF9w/s640/Slide13.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZdJNgJUWXKPnECFZ-ZFZfLznZbDIG6rDWlr95SplmjD1sy2SyHzuqYd5tlAct8Mw28J7Z3Y3CoNhNtomZMB8Q2VK7YI0pQnKUi4rFMP6FR_Mp63d5rVpq2CCjC90WHhJf3C0thHqmIUE/s1600/Slide14.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZdJNgJUWXKPnECFZ-ZFZfLznZbDIG6rDWlr95SplmjD1sy2SyHzuqYd5tlAct8Mw28J7Z3Y3CoNhNtomZMB8Q2VK7YI0pQnKUi4rFMP6FR_Mp63d5rVpq2CCjC90WHhJf3C0thHqmIUE/s640/Slide14.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1Bq5a2nWdeQowBgos0tphGVfvVJeRBGC_YZJuxaT5PD1dNg07yyuxzUY0ELmRi2j7aMdnDZtSdxt8bAv0Tb5Xr-hNbX7w6rTJJCtYbHyBo7kCH1GnUJCWETyCJIsgcjnzFOXWFQEpK5k/s1600/Slide15.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1Bq5a2nWdeQowBgos0tphGVfvVJeRBGC_YZJuxaT5PD1dNg07yyuxzUY0ELmRi2j7aMdnDZtSdxt8bAv0Tb5Xr-hNbX7w6rTJJCtYbHyBo7kCH1GnUJCWETyCJIsgcjnzFOXWFQEpK5k/s640/Slide15.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi14SmVBZcUJ8lNQbiw2FdALnvhIWCI89zhux7jlYPJrLSu9Jy0_hRRQA0xYvUIAbAU015hIMJHMnuXwGPxQSE_PAo-C_IiMsOBLOTxEGpS31JaqM-E9miDLN2rf5FvzfvKHCKrXF0pb1w/s1600/Slide16.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi14SmVBZcUJ8lNQbiw2FdALnvhIWCI89zhux7jlYPJrLSu9Jy0_hRRQA0xYvUIAbAU015hIMJHMnuXwGPxQSE_PAo-C_IiMsOBLOTxEGpS31JaqM-E9miDLN2rf5FvzfvKHCKrXF0pb1w/s640/Slide16.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEZgc4_JCQ2dWZvc50RlMRNUK6OItgvgid3cPbyvv6JnvZsLHuWbpqaH5udhaL9YkEV9muE9JCLCm213amVnHDx65H-VCHWpj16ADobpm7Q8DI2-v8WZtywW_aax0vru2TRyoqQxUhY0/s1600/Slide17.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvEZgc4_JCQ2dWZvc50RlMRNUK6OItgvgid3cPbyvv6JnvZsLHuWbpqaH5udhaL9YkEV9muE9JCLCm213amVnHDx65H-VCHWpj16ADobpm7Q8DI2-v8WZtywW_aax0vru2TRyoqQxUhY0/s640/Slide17.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGVo1hvq0RZIFKZjHKzPGW4Yw_ZKt13ixVALWhWrdiO8f04QYm901Ch7ua1QTa0It207022r8vGxteEZAr2eiPUsXMlSMMeI1QJjsTT97Ub1nzUUwFg3H6DTO3Bb-nFzA4XHPoG9inK8I/s1600/Slide18.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGVo1hvq0RZIFKZjHKzPGW4Yw_ZKt13ixVALWhWrdiO8f04QYm901Ch7ua1QTa0It207022r8vGxteEZAr2eiPUsXMlSMMeI1QJjsTT97Ub1nzUUwFg3H6DTO3Bb-nFzA4XHPoG9inK8I/s640/Slide18.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiweVC0cpEV0lVGtj7U1H4VKdJ4ZcPn84f2UmlDZPqa1ujF9zSiDlxguaWiMqxN0Vwiu-kD1Byfw0pMoSlHpS5HpXP_PDRJZDfz8f9VSIL6O-zlHYcEL4SkT2_a00HZQ_YINUHEqhIPxfk/s1600/Slide19.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiweVC0cpEV0lVGtj7U1H4VKdJ4ZcPn84f2UmlDZPqa1ujF9zSiDlxguaWiMqxN0Vwiu-kD1Byfw0pMoSlHpS5HpXP_PDRJZDfz8f9VSIL6O-zlHYcEL4SkT2_a00HZQ_YINUHEqhIPxfk/s640/Slide19.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmllwn78NVWQQFk8YWPGg6qQ-tf3suntmvxZ_bLPanXuJRO2XpCDzfSNOZLmicg2MR9q5huUbB_omgztiAINrL9UeMzqNyG7YVMIRgOCANibLShqrsFGT0FBfAuTZoioCrcYaZgrKFi4k/s1600/Slide20.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmllwn78NVWQQFk8YWPGg6qQ-tf3suntmvxZ_bLPanXuJRO2XpCDzfSNOZLmicg2MR9q5huUbB_omgztiAINrL9UeMzqNyG7YVMIRgOCANibLShqrsFGT0FBfAuTZoioCrcYaZgrKFi4k/s640/Slide20.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEic8NrBVKXzLWu_X6QQlAkUJfy8DvHNZcx2yInjjznTvhkLronu5h4_P3z45aatabGrw39VRUYLuGfe1SQUVTU4_qDAzVCFw3nQwjqdIJa2lcic84PYBZ7wPyHzorXG-OpHqJrsUmC3z-w/s1600/Slide21.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEic8NrBVKXzLWu_X6QQlAkUJfy8DvHNZcx2yInjjznTvhkLronu5h4_P3z45aatabGrw39VRUYLuGfe1SQUVTU4_qDAzVCFw3nQwjqdIJa2lcic84PYBZ7wPyHzorXG-OpHqJrsUmC3z-w/s640/Slide21.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh567Lq4G2PwzIJtZufPSeMBTCUfPYiwdLCxhkCSbDIbNsI6aeyBZ6csElcwA6Ul1GWRNzQ0MLm2p7mZozAq47gj5-9jwldFkZcycfkmF6kDFBsbVQHrIDnmlmjG0fjbDY-y1jQbKlOlH0/s1600/Slide22.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh567Lq4G2PwzIJtZufPSeMBTCUfPYiwdLCxhkCSbDIbNsI6aeyBZ6csElcwA6Ul1GWRNzQ0MLm2p7mZozAq47gj5-9jwldFkZcycfkmF6kDFBsbVQHrIDnmlmjG0fjbDY-y1jQbKlOlH0/s640/Slide22.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWUdkL_CgEa-hTZXsGgSgDCjEW66ZrP6QhbZKPzpQGsYjMfkGl3f8SvhHNW-hpk7E9nEiCWoa3BfdjwuenmyUFt4xb23BAEW_aHD8p_CLP-CZQd3XTkHIv3iLsYAyH6kL2EGcUrQBy9ns/s1600/Slide23.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWUdkL_CgEa-hTZXsGgSgDCjEW66ZrP6QhbZKPzpQGsYjMfkGl3f8SvhHNW-hpk7E9nEiCWoa3BfdjwuenmyUFt4xb23BAEW_aHD8p_CLP-CZQd3XTkHIv3iLsYAyH6kL2EGcUrQBy9ns/s640/Slide23.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtJuxIAZB5te1q7ZvTwRuo81U8XnGYlUJCQWyIRIzDEQimrvCdSLEkEHJMiD8BNWu25OacteFt60qt06I8X_mQZQ15y3tbOn0yyA_KmBPcOH9UkpdWxPbQ8i0SXhmkDjDACfn5TIM4TdU/s1600/Slide24.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtJuxIAZB5te1q7ZvTwRuo81U8XnGYlUJCQWyIRIzDEQimrvCdSLEkEHJMiD8BNWu25OacteFt60qt06I8X_mQZQ15y3tbOn0yyA_KmBPcOH9UkpdWxPbQ8i0SXhmkDjDACfn5TIM4TdU/s640/Slide24.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0vtUi8pBLM68Jq0f3_i6LBzvxLLZqaecK5cjDH3qS9BC88JZTFNR0pYUSBr8193_TQFjo6N_-0Q4AvdTogh493b0xENw3SruNy0q4HYlTu-CW2eRLDy5lAnsp3cdT6aWk7KmNc3hXAys/s1600/Slide25.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0vtUi8pBLM68Jq0f3_i6LBzvxLLZqaecK5cjDH3qS9BC88JZTFNR0pYUSBr8193_TQFjo6N_-0Q4AvdTogh493b0xENw3SruNy0q4HYlTu-CW2eRLDy5lAnsp3cdT6aWk7KmNc3hXAys/s640/Slide25.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcQnkbgBJlinL0XUyOyliZvVLkkGeEf_mf9sjKaf4_Gb3M2qP-BujpnU8Whz2CtIzK7HYs1a0M5PRD5az0Lrp9Q6Yr3g51VKEfRaIUL4CX5AZyZpF3teibM8VB2QWVopiwFttJ2DLvCLw/s1600/Slide26.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcQnkbgBJlinL0XUyOyliZvVLkkGeEf_mf9sjKaf4_Gb3M2qP-BujpnU8Whz2CtIzK7HYs1a0M5PRD5az0Lrp9Q6Yr3g51VKEfRaIUL4CX5AZyZpF3teibM8VB2QWVopiwFttJ2DLvCLw/s640/Slide26.JPG" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://sourcesandmethods.blogspot.com/2019/08/how-to-think-about-future-part-3-why.html"><img border="0" data-original-height="720" data-original-width="960" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5EGUKgj7SKplqaPaMUDITg5Ggke9uTEkAzNpDn1pw2WF8TupRVzaUnNYqz-Uma8JtFtKJsTQekASSquAODVCLxRluA5rso1z4A5bVrlDXAYAH-Nqw4W4qaaERLF7Mn4JgiiJskJOE8o0/s640/Slide27.JPG" width="640" /></a></div>
<br />
<br />Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com4tag:blogger.com,1999:blog-2569772432953120875.post-18074089108514629272019-11-18T10:00:00.000-05:002019-11-19T12:00:14.081-05:00Chapter 2: In Which The Brilliant Hypothesis Is Confounded By Damnable Data<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><span style="margin-left: auto; margin-right: auto;"><a href="https://books.google.co.uk/books?id=v6AwAAAAYAAJ&dq=editions:o0ZG0cjtFIAC&pg=PR1&hl=en#v=onepage&q&f=false"><img border="0" data-original-height="379" data-original-width="581" height="416" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAogrfOucuEXzz9fjGc50y5xEHsq79PqxX2v8c6nmcDJI1MqSaGUTLSvBio-_y4VO54dP_wU9J3Nal0_6YpPP1dwJV2A9kkLXwVhpAe9C4BwoGBYlKy46zr4IcsRtqmaPhRBq86nCohis/s640/Capture.PNG" width="640" /></a></span></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><a href="https://books.google.co.uk/books?id=v6AwAAAAYAAJ&dq=editions:o0ZG0cjtFIAC&pg=PR1&hl=en#v=onepage&q&f=false">"Stop it, Barsdale! You're introducing confounds into my experiment!"</a></i></td></tr>
</tbody></table>
<span style="font-family: "georgia" , "times new roman" , serif;">A little over a month ago, I <a href="https://sourcesandmethods.blogspot.com/2019/10/analytic-confidence-and-new-england.html">wrote a post</a> that asked if the <i>form</i> of an estimative statement mattered in terms of communicating its <i>content</i> with regard to analytic confidence. Specifically, I asked people to determine which of the following was "more clear" in response to the question, "Do you think the Patriots will win this week?":</span><br />
<ul>
<li><span style="font-family: "georgia" , "times new roman" , serif;">"It's a low confidence estimate, but the Patriots are very likely to win this week."</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">"The Patriots are very likely to win this week. This is a low confidence estimate, however."</span></li>
</ul>
<span style="font-family: "georgia" , "times new roman" , serif;">I posted this as an <a href="https://docs.google.com/forms/d/e/1FAIpQLSdxcT7HZLG-G_nt9F7muoJh_evO0qxCHHFq3I8uAirCPcitBQ/viewform">informal survey</a> and 72 people kindly took the time to take it. Here are the results:</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg59UAkAY2JqgcfK2TBD7wIzwQvGMEe31AJ21OvA4-VJjlcfBP0CZnZ3CJm1Cwp854taxBhgZc2RzxVgkSSTE8DPcxIDmMdaE3jJGmS6xWRKQbWmS49LtM7UzQb7b_46Yr9b-x8iEvHEX4/s1600/table.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="333" data-original-width="782" height="272" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg59UAkAY2JqgcfK2TBD7wIzwQvGMEe31AJ21OvA4-VJjlcfBP0CZnZ3CJm1Cwp854taxBhgZc2RzxVgkSSTE8DPcxIDmMdaE3jJGmS6xWRKQbWmS49LtM7UzQb7b_46Yr9b-x8iEvHEX4/s640/table.PNG" width="640" /></a></div>
<br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">At first glance, the results appear to be less than robust. The difference measured here is unlikely to be statistically significant. Even if it is, the effect size does not appear to be that large. The one thing that seems clear is that there is no clear preference.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i>
<i><a href="https://youtu.be/cphNpqKpKc4">Or is there?</a></i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i>
Just like every PhD candidate who ever got disappointing results from an experiment, I have spent the last several weeks trying to rationalize the results away--to find some damn lipstick and get it on this pig!</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">I think I finally found something which soothes my aching ego a bit. The fundamental assumption of these kinds of survey questions is that, in theory, both answers are equally likely. Indeed, this sort of <a href="https://en.wikipedia.org/wiki/A/B_testing">A/B testing</a> is done precisely because the asker does not know which one the client/customer/etc. will prefer.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">This assumption might not hold in this case. Statements of analytic confidence are, in my experience, rare in any kind of estimative work (although they have become a bit more common in recent years). When they are included, however, they are almost always included at the end of the estimate. Indeed, one of those who took the survey (and preferred the first statement above) commented that putting the statement of analytic confidence at the end, "is actually how it would be presented in most IC agencies, but whipsaws the reader."</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">How might the comfort of this familiarity change the results? On the one hand, I have no knowledge of who took my survey (though most of my readers seem to be at least acquainted in passing with intelligence and estimates). On the other hand, there is some pretty good evidence (and some common sense thinking) that documents the power of the <a href="https://en.wikipedia.org/wiki/Familiarity_heuristic">familiarity heuristic</a>, or our preference for the familiar over the unfamiliar. In experiments, the kind of thing that can throw your results off is known as a <a href="https://en.wikipedia.org/wiki/Confounding">confound</a>.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">More important than familiarity with where the statement of analytic confidence traditionally goes in an estimate, however, might be another rule of estimative writing and another confound: BLUF.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Bottom Line Up Front (or BLUF) style writing is a staple of virtually every course on estimative or analytic writing. "Answer the question and answer it in the first sentence" is something that is drummed into most analysts' heads from birth (or shortly thereafter). Indeed, the single most common type of comment from those who preferred the version with the statement of analytic confidence at the end was, as this one survey taker said, "You asked about the Patriots winning - the...response mentions the Patriots - the topic - within the first few words."</span><br />
<blockquote class="tr_bq">
<i><span style="font-family: "georgia" , "times new roman" , serif;">Note: <a href="https://www.inquirer.com/opinion/trump-impeachment-inquiry-transcripts-ukraine-ellipsis-20191113.html">Ellipses seem important these days</a> and the ones in the sentence above mark where I took out the word "first." I randomized the two statements in the survey so that they did not always come up in the same order. Thus, this particular responder saw the second statement above (the one with the statement of analytic confidence at the end) first.</span></i></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">If the base rate of the two answers is not 50-50 but rather 40-60 (or worse, in favor of the more familiar, more BLUFy answer), then these results could easily become statistically significant. It would be like winning a football game you were expected to lose by 35 points!</span><br />
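The arithmetic behind that football analogy can be sketched with a one-sided exact binomial test. The split below is hypothetical (the actual counts are in the table above); the point is only that the same data that looks unremarkable against a 50-50 base rate can look quite surprising against a 40-60 one:

```python
from math import comb

def binom_pvalue(k: int, n: int, p: float) -> float:
    """One-sided P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical split of the 72 survey responses: say 40 preferred the
# confidence-first wording and 32 preferred the traditional BLUF-style wording.
k, n = 40, 72

# Against an even 50-50 base rate, 40 of 72 is well within chance.
p_even = binom_pvalue(k, n, 0.50)

# Against a 40-60 base rate tilted toward the familiar BLUF-style answer,
# the same 40 of 72 becomes much harder to dismiss as noise.
p_shifted = binom_pvalue(k, n, 0.40)

print(f"P(>=40 of 72 | p=0.5) = {p_even:.3f}")
print(f"P(>=40 of 72 | p=0.4) = {p_shifted:.4f}")
```

Under these made-up numbers, the first p-value stays comfortably above the conventional 0.05 threshold while the second drops well below it, which is the whole of the "winning a game you were expected to lose" argument.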
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Thus, like all good dissertations, the only real conclusion I have come to is that the "topic needs more study."</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Joking aside, it <i>is</i> an important topic. As you likely know, it is not enough to just make an estimate. It is also important to include a statement of analytic confidence. To do anything less in formal estimates is to be intellectually dishonest to whoever is making real decisions based on your analysis. I don't think that anyone would disagree that form can have a significant impact on how the content is received. The real questions are how does form impact content and to what degree? Getting at those questions in the all important area of formal estimative writing is truly something well-worth additional study.</span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com2tag:blogger.com,1999:blog-2569772432953120875.post-75973770233596430472019-10-01T08:31:00.000-04:002019-10-01T08:31:35.190-04:00Analytic Confidence And The New England Patriots: A Hypothesis<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIzdcWTiNDKuTTNJBh5_ePHQVEKBFxOpRrklcgAQ2VnqJJvBvu87VTISn3FaFah4s6EO1QeN8HkqDdseuFr0zxXkSwtmdGvgZwQQzsg1UUQT-u990nSVDGpBTjVa_OoNxxzCFmuV-NqYs/s1600/having+a+thought.PNG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="361" data-original-width="633" height="364" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIzdcWTiNDKuTTNJBh5_ePHQVEKBFxOpRrklcgAQ2VnqJJvBvu87VTISn3FaFah4s6EO1QeN8HkqDdseuFr0zxXkSwtmdGvgZwQQzsg1UUQT-u990nSVDGpBTjVa_OoNxxzCFmuV-NqYs/s640/having+a+thought.PNG" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"Don't try to stop me! I'm having a thought!" (<a href="https://archive.org/details/punch68a69lemouoft/page/n51">Image Source</a>)</i></td></tr>
</tbody></table>
<span style="font-family: "georgia" , "times new roman" , serif;">I was driving to work this morning, thinking about analytic confidence (as one does), and I had a thought. An </span><i style="font-family: Georgia, "Times New Roman", serif;">interesting </i><span style="font-family: "georgia" , "times new roman" , serif;">thought. Before I tell you what it was, you need to take the one-question survey at the link below to see if my thought has any merit (I will post the results as a follow-up to this post):</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<a href="https://forms.gle/2Sa974BYFbZV8DsF8"><span style="font-family: "georgia" , "times new roman" , serif;">Which statement seems more clear to you?</span></a><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Did you take the survey? If not, go back and take it!</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">And now? </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">OK! Thanks!</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">People are often confused by the difference between an estimate and confidence in that estimate. This confusion is driven, in very large part, by the way the terms are often (mis)used in formal analytic writing. It is not uncommon to see someone talk about their confidence when they are really making an estimate or, less commonly, to use estimative language to convey confidence. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">The two concepts, however, are very different. The estimate communicates what you think is likely (or unlikely) to happen in the future. Confidence speaks to the likelihood that something is mucking up the process used to establish that estimate. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">This is where the New England Patriots come in. For example, I think it is very likely that the New England Patriots will win their next game (Note: I am using the term "very likely" here <a href="https://www.dni.gov/files/documents/ICD/ICD%20203%20Analytic%20Standards.pdf">the same way the DNI does</a>). I watch football but am by no means an expert. I don't even know who the Patriots are playing next week. I just know that they are usually a good team, and that they usually win a lot of games. So, while I still think it is very likely that the Patriots will win, my confidence in that estimate is low. The process I used for deriving that estimate was so weak, I won't be surprised to find out that they have a bye next week.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">On the other hand, it is easy to imagine a forecaster who is steeped in football lore. This hypothetical forecaster has an excellent track record derived in large part from a highly structured and efficient process for determining the odds of a victory. This forecaster might say exactly the same thing I did--the Patriots are very likely to win their next game--but, because of a superior <i>process</i>, this forecaster has high confidence in their estimate.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It is important to convey both--the estimate itself and analytic confidence--when communicating the results of analysis to a decisionmaker. To do anything less runs the risk of the decisionmaker misinterpreting the findings or assuming things about the process that are not true. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It is also important to note that the "analytic confidence" mentioned here differs significantly from the far more commonly discussed notion of psychological confidence. Psychological confidence is a statement about how one "feels" and can often be caused by cognitive bias or environmental factors. There is no reliable relationship between forecasting accuracy and psychological confidence. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Analytic confidence, on the other hand, is based on legitimate reasons why the analysis is more likely to be correct. For example, analysis derived from facts presented by reliable sources is more likely to be correct than analysis derived from sketchy or disreputable sources. In fact, there are a number of legitimate reasons for more rather than less analytic confidence (you can <a href="https://sourcesandmethods.blogspot.com/2019/02/how-to-write-mindnumbingly-dogmatic-but.html">read about them here</a>).</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">It is, of course, possible for analytic and psychological notions of confidence to be consistent, at least in the context of an individual forecast. I, for example, "feel" that I have no reason to be confident in my estimate about the Patriots. I also know, as I go down the list of elements responsible for legitimate analytic confidence, that very few are present. In this case, low applies to both my psychological and analytic variants of confidence.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">That is not normal. <a href="https://en.wikipedia.org/wiki/Overconfidence_effect">Overconfidence bias</a> is typically the cause of feelings of confidence outpacing a more rational assessment of the quality of the analytic process. Underconfidence, on the other hand, is typically caused by over-thinking a problem and is <a href="https://www.pnas.org/content/111/30/10984">more common among experts than you might think</a>.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Now to my thought. Finally.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">One of the big problems with analytic confidence is communicating it to decisionmakers in an intuitive way. Part of this problem occurs, no doubt, because of the different meanings the word "confidence" can have. Most people, when they hear the word "confidence" used in casual conversation, assume you mean the psychological kind. Adding the word "analytic" in front of "confidence" doesn't seem to help much, as most people don't really have a notion of what analytic confidence is or how it differs from the more commonly used, psychological type of confidence (They don't want to know, either. They have enough to remember).</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">The classic solution has been to ignore analytic confidence completely. This is wrong for all the reasons discussed above. Occasionally, however, analysts elect to include a statement of analytic confidence, typically at the end of the analysis. Part of this is due to the "Bottom Line Up Front (BLUF)" style of writing that is common to analysis. The logic here is that the most important thing is the estimate. That becomes the bottom line and, therefore, the first thing mentioned in the paper or briefing.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">What if we flip that on its head? What if we go, at least in casual conversation, with the analytic confidence first? </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Hence my two formulations:</span><br />
<ul>
<li><span style="font-family: "georgia" , "times new roman" , serif;">"It's a low confidence estimate, but the Patriots are very likely to win this week."</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">"The Patriots are very likely to win this week. This is a low confidence estimate, however."</span></li>
</ul>
<span style="font-family: "georgia" , "times new roman" , serif;">These two statements say exactly the same thing in terms of <i>content</i>. However, I think the <i>form</i> of the first statement better communicates what the analyst actually intends. In other words, I think the first statement establishes a slightly different context. Furthermore, I think this context will likely help the listener interpret my use of the word "confidence" correctly. That is, the first statement is better than the second at suggesting that I am using confidence as a way to highlight the process I used to derive the estimate and not just how I feel about it. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Another reason the second statement is inferior is that it sounds confusing to the casual listener. It is theoretically better (the bottom line is definitely up front), but unless you are steeped in the arcana of analytic writing, it cannot be easily interpreted and could lead to confusion.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">That's the reason for the quick poll. I just wanted to see what you thought--to see, in the words of Gertrude Stein, if there was any there there. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Thanks and I will post what I found (and my inevitably shocked reaction to it) in a later post.</span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-40912158040516626692019-09-09T10:25:00.000-04:002019-09-09T10:25:40.520-04:00What Is A "Gray Rhino" And How Do I Tackle One? (+ That Time I Died For 7 Seconds)<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioMETfgRcY1YJTy0LTf86V7MroPex7api7keYljK9G-jlxB2obbwO-ZSQ-brTnxBMZKr5LmhimLBIARBIZigpC7JxbI-uFeCNEokXnwKwwTssHx6zMyfkrBnp8mu8TcrMjlln_1_zhAIo/s1600/1141px-One_horned_Rhino.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="720" data-original-width="1141" height="251" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioMETfgRcY1YJTy0LTf86V7MroPex7api7keYljK9G-jlxB2obbwO-ZSQ-brTnxBMZKr5LmhimLBIARBIZigpC7JxbI-uFeCNEokXnwKwwTssHx6zMyfkrBnp8mu8TcrMjlln_1_zhAIo/s400/1141px-One_horned_Rhino.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><b>A perfectly ordinary gray rhino. <br />You still wouldn't want to be surprised by it</b>.<br /><span style="font-size: xx-small;">By Krish Dulal - Own work, CC BY-SA 3.0, <br />https://commons.wikimedia.org/w/index.php?curid=12888627</span></i></td></tr>
</tbody></table>
<span style="font-family: "georgia" , "times new roman" , serif;">I am taking a break today from my series on <a href="https://sourcesandmethods.blogspot.com/2019/08/how-to-think-about-future-part-3-why.html">How To Think About The Future</a> to talk about a new term I just heard: The Gray Rhino.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">A Gray Rhino is basically the opposite of a <a href="https://en.wikipedia.org/wiki/Black_swan_theory">Black Swan</a>. It is a high-impact, high-probability event that not enough people are paying attention to. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">A good example of this may be the recent advances in the biological sciences. When I began <a href="https://sourcesandmethods.blogspot.com/2019/07/i-made-it.html">my current job</a>, I asked 20 of the best thinkers I know, "What is the most under-hyped, under-rated technology or trend?" I wanted to understand what I might be missing, what I should be examining more carefully.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">I was surprised at the number of people who came back and said, in one form or another, "Biology." Whether it is the prospects (and horrors) of <a href="https://www.smithsonianmag.com/science-nature/four-us-crispr-trials-editing-human-dna-for-new-medical-treatments-180973029/">gene editing</a>, <a href="https://jeccr.biomedcentral.com/articles/10.1186/s13046-019-1266-0">immunotherapies</a>, <span style="color: #2d2d2d;"><a href="https://www.smithsonianmag.com/science-nature/the-whispering-trees-180968084/">mycorrhizal networks</a>, <a href="https://www.newscientist.com/article/mg24332420-900-have-we-found-the-true-cause-of-diabetes-stroke-and-alzheimers/">bacterial manipulation</a>, our <a href="https://www.newsweek.com/2019/09/20/anxiety-medication-drugs-science-exercise-brain-1457006.html">understanding of the brain</a>, or our <a href="https://physicsworld.com/a/brain-waves-detected-in-mini-brains-grown-from-stem-cells/">ability to create whole new brains from scratch</a> (!), </span><span style="color: #2d2d2d;">advances in the biological sciences do seem poised to revolutionize our lives, yet they do not seem to get as much attention as other trends like artificial intelligence. </span></span><span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">This is a Gray Rhino: something that is almost certain to happen, that will have a massive impact when it does, but that is not getting the attention it deserves.</span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">Not everything is either a Black Swan or a Gray Rhino, however. A good example may be Hurricane Dorian, which recently leveled the Bahamas before causing all sorts of havoc up the east coast of the US. The forecasting models did a good job of estimating where the hurricane would go and when it would get there. Likewise, the sheer size of the thing communicated just how devastating it was going to be. While there are always people who cannot afford to leave the path of a hurricane (or have nowhere to go) or those foolish few who choose to ride it out for the hell of it, most people gave the storm the attention it deserved and did what they could to take appropriate precautions.</span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">As I think about the problem of how to deal with true Gray Rhinos, though, it seems to me that this is not primarily a problem of collection or analysis. Researchers have enough info in these situations, and they understand it well enough, at least, to raise the issue(s).</span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">It appears to me to be, instead, a problem of production or, more accurately, communication. Specifically, I think it is related to the <a href="https://www.sicotests.com/psyarticle.asp?id=136">Confidence Heuristic</a>. A heuristic is a fancy word for a rule of thumb but a rule of thumb with a slight difference. A rule of thumb is often learned (see the video below for an example). </span><br />
<br />
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/7LUUk6wVNrY" width="560"></iframe>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">A heuristic, on the other hand, has developed over evolutionary time scales and is hardwired into the architecture of the brain. The <a href="https://www.le.ac.uk/pc/bdp5/Cari's%20Thesis.pdf">Confidence Heuristic</a> says that, all other things being equal, we tend to accept the logic/reasoning/forecasts of other people who are confident in their logic/reasoning/forecasts. We are biologically predisposed to believe those who are confident in their own beliefs. What is more important is that <a href="https://www.overcomingbias.com/2009/08/the-confidence-heuristic.html">studies have shown that this is not necessarily a bad rule</a>. People who are genuinely confident are often right. </span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">For example, I remember the afternoon I died for seven seconds (It was less dramatic than it sounds...). Fortunately, I was in one of the best possible places to die for a brief period of time--a hospital. I had suffered several dizzy spells the day before, been admitted for observation, and been hooked up to a portable EKG. When my heart stopped due to <a href="https://www.mayoclinic.org/diseases-conditions/sick-sinus-syndrome/symptoms-causes/syc-20377554">sick sinus syndrome</a>, the docs were able to see exactly what had happened. Shortly after I came around, a cardiac surgeon (whom I had never met) walked in with the readout, showed it to me, and said, "This buys you a pacemaker."</span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">As they wheeled me to the OR, I remember asking the doctor, "How many of these have you done?" She said, with absolute confidence, "Hundreds," and then she looked me dead in the eye and told me, "This is a piece of cake."</span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">Her confidence in her skills was infectious. I believed her, and because I did, I went into surgery with no worries and came out of it successfully. She was correct to be confident as well. She had, in fact, done hundreds of these surgeries, and for the last five years, this little piece of biotech (with its eight-year battery!) has kept me alive without any real issues. </span><br />
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;">Politicians, TV hucksters, and other con artists, on the other hand, may not know about the Confidence Heuristic but they sure know how to use it! Speaking confidently and in absolute rather than nuanced terms is the hallmark of almost every political speech and all of the hours of editorial commentary masquerading as news shows. Nuance is used to cast doubt on the other side's position while confidence is required to promote your own position. </span><br />
<blockquote class="tr_bq">
<span style="color: #2d2d2d; font-family: "georgia" , "times new roman" , serif;"><i>(Note: This, coupled with <a href="https://en.wikipedia.org/wiki/Confirmation_bias">Confirmation Bias</a> and the <a href="https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect">Dunning-Kruger Effect</a>, explains much of the internet.)</i></span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">In other words, Gray Rhinos likely exist because of the way Gray Rhino communities of interest choose to talk about them. Measured tones, nuanced forecasts, and managed expectations are the language of science and (much of) academia. Hyperbole, bold predictions, and showmanship generate the buzz, however. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">What to do if you find yourself working on a Gray Rhino problem? Hiring a frontman to hype your rhino is likely excessive and can get you into real trouble (see <a href="https://www.businessinsider.com/the-history-of-silicon-valley-unicorn-theranos-and-ceo-elizabeth-holmes-2018-5">Theranos</a> and <a href="https://techcrunch.com/2019/09/08/would-we-miss-the-media-lab-if-it-were-gone/">MIT Media Lab</a> for a few cautionary tales). That said, developing a relationship with the press, being able to explain your research in layman's terms, and celebrating the genuine "wins" in your field as they come along all seem to make sense.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Finally, if you do decide to go the frontman route (and remember, I don't recommend it), at least get a guy like this:</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/jdWO9ozir1Y" width="560"></iframe>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com3tag:blogger.com,1999:blog-2569772432953120875.post-7694869531898750362019-08-26T08:17:00.000-04:002019-08-26T08:17:58.318-04:00How To Think About The Future (Part 3--Why Are Questions About Things Outside Your Control So Difficult?)<span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i>I am writing a series of posts about how to think about the future. In case you missed the first two parts, you can find them here:</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i><br /></i></span><span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i><a href="https://sourcesandmethods.blogspot.com/2019/07/how-to-think-about-future-part-1.html" style="color: #b4445c;">Part 1--Questions About Questions</a></i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><a href="https://sourcesandmethods.blogspot.com/2019/08/how-to-think-about-future-part-2-what.html">Part 2--What Do You Control</a></i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i></span><span style="font-size: 16px;"></span><span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i>These posts represent my own views and do not represent the official policy or positions of the <a href="https://www.armywarcollege.edu/" style="color: #b4445c;">US Army or the War College</a>, where I currently work.</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i><br /></i></span>
<span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i>*******************</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif; font-size: 16px;"><i><br /></i></span>
<span style="font-size: 16px;"><span style="font-family: "georgia" , "times new roman" , serif;">Former CIA Director Mike Hayden <a href="https://www.usnews.com/news/articles/2008/05/16/cia-chief-claims-progress-with-intelligence-reforms">likes to tell this story</a>:</span></span><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;">"Some months ago, I met with a small group of investment bankers and one of them asked me, 'On a scale of 1 to 10, how good is our intelligence today?'" recalled Hayden. "I said the first thing to understand is that anything above 7 isn't on our scale. If we're at 8, 9, or 10, we're not in the realm of intelligence—<i>no one is asking us the questions that can yield such confidence</i>. We only get the hard sliders on the corner of the plate. Our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we're at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments." <i>(Italics mine)</i></span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">I think it is important to note that the main reason Director Hayden cited for the Agency's "batting average" was not politics or funding or even a hostile operating environment. No. The #1 reason was the difficulty of the questions. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Understanding why some questions are more difficult than others is incredibly important. Difficult questions typically demand more resources--and have more consequences. What makes it particularly interesting is that we all have an innate sense of when a question is difficult and when it is not, but we don't really understand why. I have written about this elsewhere (<a href="https://www.tandfonline.com/doi/full/10.1080/08850607.2013.780562">here</a> and <a href="https://sourcesandmethods.blogspot.com/2012/01/what-makes-easy-question-easy-daggreorg.html">here</a> and <a href="https://sourcesandmethods.blogspot.com/2011/10/rfi-should-intelligence-analysis-be.html">here</a>, for example), and may have become a bit like the man in the "What makes soup, soup?" video below...</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/Y1HVTNxwt7w" width="560"></iframe>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;">No one, however, to my knowledge, has solved the problem of reliably categorizing questions by difficulty.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span><span style="font-family: "georgia" , "times new roman" , serif;">I have a hypothesis, however.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">I think that the AI guys might have taken a big step towards cracking the code. When <a href="https://sourcesandmethods.blogspot.com/2011/10/intelligence-is-all-about-partially.html">I first heard</a> about how AI researchers categorize AI tasks by difficulty, I thought there might be some useful thinking there. That was way back in 2011, though. As I went looking for updates for this series of posts, I got really excited. There has been a ton of good work done in this area (no surprise there), and I think that Russell and Norvig in their book,<a href="https://faculty.psau.edu.sa/filedownload/doc-7-pdf-a154ffbcec538a4161a406abf62f5b76-original.pdf"> <i>Artificial Intelligence: A Modern Approach</i></a>, may have gotten even closer to what is, essentially, a working definition of question difficulty.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span><span style="font-family: "georgia" , "times new roman" , serif;">Let me be clear here. The AI community did not set out to figure out why some questions are more difficult than others. They were looking to categorize AI tasks by difficulty. My sense, however, is that, in so doing, they have inadvertently shone a light on the more general question of question difficulty. Here is the list of eight criteria they use to categorize task environments (the interpretation of their thinking in terms of questions is mine):</span><br />
<ul>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Fully observable vs. partially observable</i> -- Questions about things that are hidden (or partially hidden) are more difficult than questions about things that are not.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Single agent vs. multi-agent</i> -- Questions about things involving multiple people or organizations are more difficult than questions about a single person or organization.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Competitive vs. cooperative</i> -- If someone is trying to stop you from getting an answer or is going to take the time to try to lead you to the wrong answer, it is a more difficult question. Questions about enemies are inherently harder to answer than questions about allies.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Deterministic vs. stochastic</i> -- Is it a question about something with fairly well-defined rules (like many engineering questions) or is it a question with a large degree of uncertainty in it (like questions about the feelings of a particular audience)? How much randomness is in the environment?</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Episodic vs. sequential</i> -- Questions about things that happen over time are more difficult than questions about things that happen once.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Static vs. dynamic</i> -- It is easier to answer questions about places where nothing moves than it is to answer questions about places where everything is moving.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Discrete vs. continuous</i> -- Spaces that have boundaries, even notional or technical ones, make for easier questions than unbounded, "open world," spaces.</span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;"><i>Known vs. unknown</i> -- Questions where you don't know how anything works are much more difficult than questions where you have a pretty good sense of how things work. </span></li>
</ul>
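One way to make the eight criteria above concrete is to treat each one as a binary flag and score a question by how many of the "hard" poles apply to it. The short sketch below is my own illustration, not anything from Russell and Norvig's book; every name in it is hypothetical, and a real assessment would obviously weigh the criteria rather than just count them.

```python
# Illustrative "question difficulty" scorecard. Each entry names the harder
# pole of one of the eight task-environment criteria discussed above.
HARD_POLES = [
    "partially_observable",  # the subject matter is hidden or partially hidden
    "multi_agent",           # multiple people or organizations are involved
    "competitive",           # an adversary may hide or plant information
    "stochastic",            # a large degree of randomness or uncertainty
    "sequential",            # unfolds over time rather than happening once
    "dynamic",               # the environment changes while you analyze it
    "continuous",            # an unbounded, "open world" space
    "unknown",               # you don't know how the pieces work
]

def difficulty_score(question_traits):
    """Count how many of the eight 'hard' poles apply to a question.

    question_traits: a set of strings drawn from HARD_POLES.
    Returns a (score, max_score) tuple.
    """
    score = sum(1 for pole in HARD_POLES if pole in question_traits)
    return score, len(HARD_POLES)

# Example: "Will our main competitor launch a rival product next year?"
traits = {"partially_observable", "multi_agent", "competitive",
          "stochastic", "sequential", "dynamic"}
print(difficulty_score(traits))  # (6, 8) -- six of the eight hard poles apply
```

Even this toy version makes the article's first point visible: a typical question about the future outside your control lights up most of the flags, while a question about your own resources lights up few of them.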
<div>
<span style="font-family: "georgia" , "times new roman" , serif;">Why is this important to questions about the future? Two reasons. First, it is worth noting that most questions about the future, particularly those about things that are outside our control, fall at the harder rather than easier end of each of these criteria. Second, understanding the specific reasons why these questions are hard also gives clues as to how to make them easier to answer. </span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;">There is one more important reason why questions can be difficult. It doesn't come from AI research. It comes from the person (or organization) asking the question. All too often, people either don't ask the "real" question they want answered or are incredibly unclear in the way they phrase their questions. If you want some solutions to these problems, I suggest you look <a href="https://sourcesandmethods.blogspot.com/2018/10/6-things-to-think-about-while_22.html">here</a>, <a href="https://sourcesandmethods.blogspot.com/2018/07/3-things-you-must-know-before-you.html">here</a> and <a href="https://sourcesandmethods.blogspot.com/2018/07/farengar-secret-fire-has-quest-for-you.html">here</a>. </span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;">I was a big kid who grew up in a small town. I only played Little League ball one year, but I had a .700 batting average. Even when I was in my best physical condition as an adult, however, I doubt that I could have hit a foul tip off a major league pitcher. Hayden is right. Meaningful questions about things outside your control are Major League questions, hard sliders on the corner of the plate. Understanding that, and understanding what makes these questions so challenging, is a necessary precondition to taking the next step--answering them.</span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Next: How Should We Think About Answers? </i> </span></div>
Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-51466131793859851582019-08-16T07:54:00.000-04:002019-08-16T07:54:10.558-04:00How To Think About The Future (Part 2 - What Do You Control?)<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><span style="margin-left: auto; margin-right: auto;"><a href="https://mm.tt/1302065182?t=FtQWj502LV"><img border="0" data-original-height="574" data-original-width="1046" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg61V7eX5gSrgLNBOM1ldTX-hbgC4vCUSuvECfrsb6UNWn7SYG2B1Wl1wmeKPowcr7f9yd-jE2xdRkRGoCqkdwxe7AjM3lC9hPsIpv9lkzRHlOoLXKjGiRjmMoS30Mt-OJb70xmVgEfAbU/s1600/How+to+think+about+the+Future+COntrol.PNG" /></a></span></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><a href="https://mm.tt/1302065182?t=FtQWj502LV">Click on the image above to see the full mindmap.</a></i></td></tr>
</tbody></table>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>I am writing a series of posts about how to think about the future. In case you missed Part 1, you can find it here:</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i><a href="https://sourcesandmethods.blogspot.com/2019/07/how-to-think-about-future-part-1.html">How To Think About The Future (Part 1 -- Questions About Questions)</a></i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>These posts represent my own views and do not represent the official policy or positions of the <a href="https://www.armywarcollege.edu/">US Army or the War College</a>, where I currently work.</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i><br /></i></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>****************</i></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">The great Stoic philosopher Epictetus <a href="http://classics.mit.edu/Epictetus/epicench.html">wrote</a>, </span><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;">"Work, therefore to be able to say to every harsh appearance, 'You are but an appearance, and not absolutely the thing you appear to be.' And then examine it by those rules which you have, and first, and chiefly, by this: <i>whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you</i>." (Italics mine)</span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">There are good reasons to focus on questions about things you control. Things you control, you can understand; at the very least, the data required to understand them is much easier to get. Things you control, you can also change (or change more easily). Finally, you only get credit for the things you do with the things you control. Few people get credit for just watching. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Whole disciplines have been built around improving what you do with what you control. <a href="https://en.wikipedia.org/wiki/Master_of_Business_Administration">MBA</a> and <a href="https://en.wikipedia.org/wiki/Operations_research">Operations Research</a> programs are both good examples of fields of study that focus mostly on improving decisions about how you use the resources under your control. Indeed, focusing on the things you control is at the center of <a href="https://www.effectuation.org/?page_id=207">effectual reasoning</a>, an exciting new take on entrepreneurship and innovation (for example, the entire crowdfunding/startup <a href="https://www.mercyhurst.edu/news/mercyhurst-turns-over-quickstarter-idea-fund-erie">Quickstarter Project</a> was built on effectuation principles, which are the reason it was as successful as it was).</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">On the other hand, another great thinker from the ancient world once wrote,</span><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;">"If you know the enemy and know yourself, you need not fear the result of a hundred battles." Sun Tzu, </span><a href="http://classics.mit.edu/Tzu/artwar.html" style="font-family: Georgia, "Times New Roman", serif;">The Art Of War</a></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">Sun Tzu went on to outline the exact impact of not thinking about things you don't control: </span><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;">"If you know yourself but not the enemy, for every victory gained you will also suffer a defeat." </span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">Things outside of your control are much more squishy than things under your control. The data is often incomplete, and what is there is often unclear. It is pretty normal for the info to be, <a href="https://www.clausewitz.com/readings/OnWar1873/BK1ch06.html">as Clausewitz would say</a>, "of doubtful character," and it is rarely structured in nice neat rows with data points helpfully organized with labelled columns. Finally, in an adversarial environment at least, you have to assume that at least some of the info you do have is deceptive--that it has been put there intentionally by your enemy or competitor to put you off the track.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">People frequently run from questions about things that are outside of their control. The nature of the info available can often make these kinds of questions seem unresolvable, as if no amount of thinking could lead to any greater clarity.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">This is a mistake. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Inevitably, in order to move forward with the things you do control, you have to come to some conclusions about the things you do not control. A country's military looks very different if it expects the enemy to attack by sea vs. by land. A company's marketing plan looks very different if it thinks its competitor will be first to market with a new type of product or if it will not. Your negotiating strategy with a potential buyer of your house depends very much on whether you think the market in your area is hot or not.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">The US military has a saying: "Intelligence leads operations." This is a shorthand way of driving home the point that your understanding of your environment, of what is happening around you, of the things outside of your control, determines what you do with the things under your control. Whether you do this analysis in a structured, formal way or just go with your gut instinct, you always come to conclusions about your environment, about the things outside your control, before you act. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Since you are going to do it anyway, wouldn't it be nice if there were some skills and tools you could learn to do it better?</i> It turns out that there are. The last 20-30 years have seen an explosion in research about how to better understand the future for those things outside of our control.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">More importantly, learning these skills and tools can probably help you understand things under your control better as well. Things under your control often come with the same kinds of squishy data normally associated with things outside your control. The opposite is much less likely to be true. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Much of the rest of this series will focus on these tools and thinking skills, but first, we need to dig more deeply into the nature of the questions we ask about things outside our control and precisely why those questions are so difficult to answer.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>(Next: Why Are Questions About Things Outside Your Control So Difficult?)</i></span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com4tag:blogger.com,1999:blog-2569772432953120875.post-16031438286955208022019-07-30T08:59:00.000-04:002019-07-30T08:59:10.903-04:00How To Think About The Future (Part 1 -- Questions About Questions)<span style="font-family: "georgia" , "times new roman" , serif;">We don't think about the future; we worry about it.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<span style="font-family: "georgia" , "times new roman" , serif;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXtDHbbKMy7AFBEagsLp6PbWIXK1a9CmcyTcyM71f6ymc66IfDNMGckswVfYw4YLjJOKxWygwvyWo8Oggg89kn4DhTdtuTE9UsCwr-9wK3Z2KoSU1_X4J2HDlyuO9sZSgAHQxNFAnamhQ/s1600/How_To_Think_About_The_Future.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="744" data-original-width="1272" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXtDHbbKMy7AFBEagsLp6PbWIXK1a9CmcyTcyM71f6ymc66IfDNMGckswVfYw4YLjJOKxWygwvyWo8Oggg89kn4DhTdtuTE9UsCwr-9wK3Z2KoSU1_X4J2HDlyuO9sZSgAHQxNFAnamhQ/s400/How_To_Think_About_The_Future.png" width="400" /></a></span></div>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Whether it's killer robots or social media or zero-day exploits, we love to rub our preferred, future-infused worry stone between our thumb and finger until it is either a thing of shining beauty or the death of us all (and sometimes both). </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">This is not a useful approach.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Worry is the antithesis of thinking. Worry is all about jumping to the first and usually the worst possible conclusion. It induces stress. It narrows your focus. It shuts down the very faculties you need to think through a problem. Worry starts with answers; thinking begins with questions.</span><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">What Are Your Questions?</span></b><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;"><i>“A prudent question is one-half of wisdom.”</i> – <a href="https://en.wikipedia.org/wiki/Francis_Bacon">Francis Bacon</a></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>"The art of proposing a question must be held of higher value than solving it.”</i> – <a href="https://en.wikipedia.org/wiki/Georg_Cantor">Georg Cantor</a></span><span style="font-family: "georgia" , "times new roman" , serif;"></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>“If you do not know how to ask the right question, you discover nothing.”</i> – <a href="https://en.wikipedia.org/wiki/W._Edwards_Deming">W. Edwards Deming</a></span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">Given the importance of questions (and <a href="https://hbr.org/2018/05/the-surprising-power-of-questions">of asking the "right" ones</a>), you would think that there would be more literature on the subject. In fact, the question of questions is, in my experience, one of the great understudied areas. A few years ago, <a href="https://www.tandfonline.com/doi/full/10.1080/08850607.2013.780562">Brian Manning and I took a stab at it</a> and only managed to uncover how little we really know about how to think about, create, and evaluate questions.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">For purposes of thinking about the future, however, I start with two broad categories to consider: Speculative questions and meaningful questions. </span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1xdUzuIjvuLOhpIoCUO2RGoWg9RHRAFSrnYS2krnnjtcT35LBW8TefYCg2OYTepeTVoQ2iV1nRhY6sumd9cbiXrZVm8hpAhTfMVTkQ2qsrKUHFLQwbN4oX-qrHHwmM22FFK91FP_jHB8/s1600/TAF+questions.PNG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="484" data-original-width="1021" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1xdUzuIjvuLOhpIoCUO2RGoWg9RHRAFSrnYS2krnnjtcT35LBW8TefYCg2OYTepeTVoQ2iV1nRhY6sumd9cbiXrZVm8hpAhTfMVTkQ2qsrKUHFLQwbN4oX-qrHHwmM22FFK91FP_jHB8/s640/TAF+questions.PNG" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<span style="font-family: "georgia" , "times new roman" , serif;">There is nothing wrong with a speculative question. Wondering about the nature of things, musing on the interconnectedness of life, and even just staring off into space for a bit are time-honored ways to come up with new ideas and new answers. We <i>should</i> question our assumptions, utilize methods like the <a href="https://www.cdc.gov/healthyyouth/evaluation/pdf/brief7.pdf">Nominal Group Technique</a> to leverage the wisdom of our collective consciousness, and explore all of the other <a href="https://en.wikipedia.org/wiki/Divergent_thinking">divergent thinking</a> tools in our mental toolkits. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Speculation does not come without risks, however. For example, how many terrorist groups would like to strike inside the US? Let's say 10. How are they planning to do it? Bombs, guns, drones, viruses, nukes? Let's say we can come up with 10 ways they can attack. Where will they strike? One of the ten largest cities in the US? Do the math--you already have 1000 possible combinations of who, what, and where.</span><br />
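The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. This is a purely illustrative toy: the group, method, and city labels are made-up placeholders standing in for the hypothetical counts in the thought experiment, not real threat data.

```python
from itertools import product

# Purely illustrative placeholders -- the 10/10/10 counts come from the
# thought experiment above, not from any real threat assessment.
groups = [f"group_{i}" for i in range(10)]    # who: 10 hypothetical groups
methods = [f"method_{i}" for i in range(10)]  # what: 10 possible ways to attack
cities = [f"city_{i}" for i in range(10)]     # where: the 10 largest US cities

# Every (who, what, where) combination is a distinct scenario to consider.
scenarios = list(product(groups, methods, cities))
print(len(scenarios))  # 1000
```

Each additional question (when? funded how?) multiplies the scenario count again, which is why unaided speculation collapses so quickly onto a single vivid possibility.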
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">How do we start to narrow this down? Without some additional thinking strategies, we likely give in to cognitive biases like <a href="https://www.pon.harvard.edu/daily/the-power-of-vivid-data/">vividness</a> and <a href="https://psychology.iresearchnet.com/social-psychology/decision-making/recency-effect/">recency</a> to narrow our focus. Other aspects of the way our minds work--like <a href="https://www.semanticscholar.org/paper/The-magical-number-seven-plus-or-minus-two%3A-some-on-Miller/4023ae0ba18eed43a97e8b8c9c8fcc9a671b7aa3">working memory limitations</a>--also get in the way. Pretty soon, our minds, which like to be fast and certain even when they should be neither, have turned our 1 in 1000 possibility into a nice, shiny, new worry stone for us to fret over (and, of course, share on Facebook).</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">Meaningful questions are questions that are important to you--important to your plans, to your (or your organization's) success or failure. Note that there are two criteria here. First, meaningful questions are important. Second, they are yours. The answers to meaningful questions, almost by definition, have consequences. The answers to these questions tend to compel decisions or, at least, further study.</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">It is entirely possible, however, to spend a lot of time on questions that are both of dubious relevance to you and not particularly important. The Brits have a lovely word for this, <i><a href="https://medium.com/@kishfy/the-appeal-of-easy-problems-aka-bikeshedding-97a647a9cc14">bikeshedding</a>. </i>It captures our willingness to argue for hours about what color to paint the bikeshed while ignoring much harder and more consequential questions. Bikeshedding, in short, allows us to distract ourselves from our speculations and our worries and feel like we are still getting something done.</span><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_gcgms4DXkyfN4ghgIc-4gjy9QsHox6ipqo8ceAjmGtC-4ZJHXjX8U0o2RD_jHJV9ATKJnh6ThyQoa4PTcsGISMXQtNVWqgUDTZKO7Oe2sKXcG6B9xW_-h0HK8MNY2Lr7XY_P1cIKG1o/s1600/TAF+bikeshed.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="1168" height="218" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_gcgms4DXkyfN4ghgIc-4gjy9QsHox6ipqo8ceAjmGtC-4ZJHXjX8U0o2RD_jHJV9ATKJnh6ThyQoa4PTcsGISMXQtNVWqgUDTZKO7Oe2sKXcG6B9xW_-h0HK8MNY2Lr7XY_P1cIKG1o/s640/TAF+bikeshed.PNG" width="640" /></a></div>
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Next: What do you control?</i></span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-68040355502812717742019-07-25T10:00:00.000-04:002019-07-25T10:09:27.769-04:00Why The Next "Age of Intelligence" Scares The Bejesus Out Of Me<span style="font-size: small;"><span style="font-family: "georgia" , "times new roman" , serif;"><span id="docs-internal-guid-b32d9792-7fff-8719-d507-f8943603274c" style="background-color: transparent; color: black; font-style: normal; font-variant: normal; font-weight: 400; text-decoration: none; vertical-align: baseline; white-space: pre-wrap;">A little over a month ago, I wrote a post titled <a href="https://sourcesandmethods.blogspot.com/2019/06/how-to-teach-2500-years-of-intelligence.html">How To Teach 2500 Years Of Intelligence History In About An Hour</a>. The goal of that post was to explain how I taught the history of intelligence to new students. Included in that article was the picture below:</span></span></span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJhOin5d9WRRiU6bFMvx3ccxJ7Nn1cunEhO08prnuX8ytz6mYZ3ILrtC45PyI89GyBciOQppy7tCmN7I9zagW5Tfu9qPpHJyq-h8HN-9AuJpiguxk6Ux2nbgsKS2l4vh5zrHAOS7zsJlY/s1600/ages+of+intel+and+length.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="378" data-original-width="719" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJhOin5d9WRRiU6bFMvx3ccxJ7Nn1cunEhO08prnuX8ytz6mYZ3ILrtC45PyI89GyBciOQppy7tCmN7I9zagW5Tfu9qPpHJyq-h8HN-9AuJpiguxk6Ux2nbgsKS2l4vh5zrHAOS7zsJlY/s640/ages+of+intel+and+length.PNG" width="640" /></a></div>
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-size: small;">I am not going to cover all the details of the "Ages of Intelligence" approach again (you can see those at <a href="https://sourcesandmethods.blogspot.com/2019/06/how-to-teach-2500-years-of-intelligence.html">this link</a>), but the basic idea is that there are four pretty clear ages. In addition, I made the case that, driven by ever-changing technology as well as corresponding societal changes, the length of these ages is getting logarithmically shorter. <br />
<br />
Almost as an afterthought, I noted that the trend line formed by these ever shortening ages was approaching the X-intercept. In other words, the time between "ages" was approaching zero. In fact, I noted (glibly and mostly for effect) that we could well be in a new "Age of Intelligence" right now and not know it.<br />
<br />
When I publish a piece like the one mentioned above, I usually feel good about it for about ten minutes. After that, I start to think about all the stuff I could have said or where to go next with the topic. In this case, the next step was obvious--a little speculative thinking about what comes, well, <i>now</i>. What I saw was not pretty (and, to be frank, a little frightening).<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-size: small;"><br /></span></span></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;">Looking out 10 years, I see five hypotheses (The base rate, therefore, for each is 20%). I will indicate what I think are the arguments for and against each hypothesis, and then, how I would adjust the probability from the base rate. </span></span><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">The Age of Anarchy</span></b><br />
<span style="font-family: "georgia" , "times new roman" , serif;">No one knows what is going on, and no one knows what to do about it. Technology just keeps changing and improving at an ever-increasing pace, and no one person or even one organization (no matter how large) can keep up with it. Strategic intelligence is worthless, and even tactical intelligence has only limited utility.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments for:</i> This is certainly what life feels like right now for many people. <a href="https://youtu.be/bDlhXubyAtk">Dylan Moran's rant</a> probably captures this hypothesis far better than I could:</span><br />
<br />
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/bDlhXubyAtk" width="560"></iframe>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments against:</i> This is a form of the same argument that has been made against every technological advance since the Ancient Greeks (Socrates, for example, was against <i>writing</i> because it <a href="http://neamathisi.com/literacies/chapter-1-literacies-on-a-human-scale/socrates-on-the-forgetfulness-that-comes-with-writing">"will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing..."</a> Replace "writing" with "books" or "computers" or "cell phones" and you have another variation on this <a href="https://en.wikipedia.org/wiki/Luddite">Luddite</a> theme). In short, every age has had to adjust to the risks and rewards new technologies bring. The next age of intelligence is unlikely to be new in this respect.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Probability: 17%</i></span><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">Age of Irrelevance</span></b><br />
<span style="font-family: "georgia" , "times new roman" , serif;">Artificial intelligence (AI) takes over the world. The algorithms get so good at understanding and predicting that we increasingly turn over both our intelligence production and our decisionmaking to the computers. In this hypothesis, there is still a need to <a href="https://www.brainyquote.com/quotes/sun_tzu_440736">know the enemy</a>; there is just no longer a need for us to do all those <a href="https://www.brainyquote.com/quotes/sun_tzu_129855">tedious calculations in our tents</a>. The collection of intelligence information and the conduct of intelligence analysis become an entirely automated process.</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments for:</i> Even a cursory look at the <a href="https://en.wikipedia.org/wiki/Progress_in_artificial_intelligence">Progress in Artificial Intelligence</a> article in Wikipedia suggests two things. First, an increasing number of complex activities where humans used to be the best in the world are falling victim to AI's steady march. Second, humans almost always underestimate just how quickly machines will catch up to them. Efforts by the growing number of <a href="https://www.theatlantic.com/international/archive/2018/02/china-surveillance/552203/">surveillance states</a> will only serve to increase the pace as they move their populations in the direction of the <a href="http://www.growthconsulting.frost.com/web/images.nsf/0/3D6C419B4830EF0286257C55005D54CD/$File/SCIP14V6I1_IndustryInsight_Kristan.htm">biases inherent in the programming or the data.</a> </span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments against:</i> AI may be the future, but not now and certainly not in the next ten years. Four polls of researchers done in 2012-13 indicated that there was only a 50% chance of a <a href="https://en.wikipedia.org/wiki/Technological_singularity">technological singularity</a>--where a general AI is as smart as a human--by 2040-2050. <a href="https://www.gartner.com/smarterwithgartner/5-trends-emerge-in-gartner-hype-cycle-for-emerging-technologies-2018/">The technology gurus at Gartner also estimated</a> in 2018 that general artificial intelligence is just now beginning to climb the "hype cycle" of emerging technologies and is likely more than 10 years away. The odds that this hypothesis becomes reality go up after ten years, however.</span><br />
<br />
<i><span style="font-family: "georgia" , "times new roman" , serif;">Probability: 7%</span></i><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">Age of Oligarchy</span></b><br />
<span style="font-family: "georgia" , "times new roman" , serif;">Zuckerberg, Gates, Nadella, Li, Bezos, Musk, Ma--their names are already household words. Regular Joes and Janes (like you and me) get run over, while these savvy technogeeks rule the world. If you ain't part of this new <a href="https://en.wikipedia.org/wiki/Illuminati">Illuminati</a>, you ain't $h!t. Much like the <a href="https://sourcesandmethods.blogspot.com/2019/06/how-to-teach-2500-years-of-intelligence.html">Age of Concentration</a>, intelligence efforts will increasingly focus on these oligarchs and their businesses while traditional state and power issues take a back seat (see <a href="https://en.wikipedia.org/wiki/Snow_Crash">Snow Crash</a>).</span><br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></span></b>
<span style="font-family: "georgia" , "times new roman" , serif;"><i><span style="font-family: "georgia" , "times new roman" , serif;">Arguments for: </span></i></span><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><a href="http://gs.statcounter.com/search-engine-market-share">92% of all searches go through Google</a>,</span></span><b><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"> </span></span></b><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><a href="https://www.bizjournals.com/bizwomen/news/latest-news/2019/02/amazon-expected-to-grab-47-percent-of-online-sales.html?page=all">47% of all online sales go through Amazon</a>, <a href="https://netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222018-07%22%2C%22dateEnd%22%3A%222019-06%22%2C%22segments%22%3A%22-1000%22%7D">88% of all desktop and laptop computers run Windows</a>. These and other companies maintain almost monopoly-like positions within their industries. By definition, the oligarchy already exists.</span></span><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></span>
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments against:</i> Desktops and laptops may run on Windows but <a href="https://en.wikipedia.org/wiki/Linux_adoption">the internet and virtually all supercomputers--that is, the future--run on Linux based systems</a>. Browsers like <a href="https://brave.com/">Brave</a> and extensions like <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/">Privacy Badger</a> will also make it more difficult for these companies to profit from their monopoly positions. In addition, an increasing public awareness of the privacy issues associated with placing so much power in these companies with so little oversight will expand <a href="https://www.wsj.com/articles/justice-department-to-open-broad-new-antitrust-review-of-big-tech-companies-11563914235">calls for scrutiny and regulation of these businesses and their leaders</a>.</span></span><br />
<br />
<i><span style="font-family: "georgia" , "times new roman" , serif;">Probability: 27%</span></i><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">Age of Ubiquity</span></b><br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;">We start to focus on our digital literacy skills. We figure out how to spot liars and fakes and how to reward honest news and reviews. We teach this to our children. We reinforce and support good journalistic ethics and punish those who abandon these standards. We all get smart. We all become--have to become--intelligence analysts.</span></span></span><br />
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments for:</i> Millennials and Gen Z are <a href="https://www2.deloitte.com/global/en/pages/about-deloitte/articles/millennialsurvey.html">skeptical about the motives of big business</a> and are <a href="https://www.techspot.com/news/79082-facebook-rapidly-losing-millennials-us-user-base-down.html">abandoning traditional social media platforms</a> in record numbers. They are already <a href="https://link.springer.com/chapter/10.1057/9781137375940_2">digital natives</a>, unafraid of technology and well aware of its risks and rewards. These generations will either beat the system or disrupt it with new technologies.</span></span></span><br />
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments against:</i> Human nature. Hundreds of books and articles have been written in the last decade on how powerful the biases and heuristics hardwired into our brains actually are. We are programmed <a href="https://en.wikipedia.org/wiki/List_of_cognitive_biases">to seek the easy way out, to value convenience over truth, and to deceive ourselves</a>. Those who do happen to figure out how to beat the system or disrupt it are likely to hold onto that info for their own economic gain, not disperse it to the masses.</span></span></span><br />
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><i><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;">Probability: 12% </span></span></i></span><br />
<br />
<b><span style="font-family: "arial" , "helvetica" , sans-serif;">Blindside Hypothesis</span></b><br />
<span style="font-family: "georgia" , "times new roman" , serif;">Something else, radically different than one of approaches above, is going to happen. </span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments for:</i> </span><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;">First, this whole darn article is premised on the idea that the "Ages of Intelligence" approach is legit and not just a clever pedagogical trick.</span> Furthermore, while there are lots of good, thoughtful sources regarding the future, many of them</span><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-family: "georgia" , "times new roman" , serif;">, as you can see above, </span>contradict one another. Beyond that:</span><br />
</span></span></span><br />
<ul><span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-size: small;">
<li><span style="font-family: "georgia" , "times new roman" , serif;">This is a complex problem, and I generated this
analysis on my own with little consultation with other experts. </span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">Complex problems have "predictive horizons"--places beyond which we cannot see--where we are essentially saying, "There is a 50% chance of x happening, plus or minus 50%." </span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">I have been thinking about this on and off for a few weeks but have hardly put in the massive quantities of time I would need to make these kinds of broad assessments with any confidence. </span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">The lightweight pro v. con form of my discussion adds only a </span><span style="font-family: "georgia" , "times new roman" , serif;"><a href="https://www.google.com/search?client=firefox-b-1-d&channel=cus&q=soupcon"><span data-dobid="hdw">soupçon</span></a> of structure to my thinking. </span></li>
<li><span style="font-family: "georgia" , "times new roman" , serif;">Finally, humans have a terrible track record of predicting disruption and I am decidedly human. </span></li>
</span></span></ul>
<span style="font-family: "georgia" , "times new roman" , serif;"><span style="font-size: small;">
<span style="font-family: "georgia" , "times new roman" , serif;">Bottom line: The odds are good that I am missing something.</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Arguments against:</i> What? What am I missing? What reasonable hypothesis about the future, broadly defined, doesn't fall into one of the categories above? (Hint: Leave your answer in the comments!)</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;"><i>Probability: 37% </i></span><br />
<br />
<span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><span style="font-family: "arial" , "helvetica" , sans-serif;"><b><span style="font-family: "georgia" , "times new roman" , serif;">Why This Scares Me</span></b></span></span></span></span><br />
<span style="font-family: "georgia" , "times new roman" , serif;">Other than the rather small probability that we all wake up one morning and become the critical information collectors and analysts this most recent age seems to demand of us, there aren't any good outcomes. I don't really want chaos, computers, or a handful of profit-motivated individuals to control my digital and, as a result, non-digital life. I also fully realize that, in some sense, this is not a new revelation. Other writers, far more eloquent and informed than I, have been making some variation of this argument for years. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">This time, however, it is more personal. Intelligence leads operations. Understanding the world outside your organization's control drives how you use the resources under your control. My new employer is the US Army, and the US Army will look very different ten years from now depending on which of these hypotheses becomes fact. </span></span></span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-74764101863742440542019-07-22T10:00:00.000-04:002019-07-22T10:00:02.730-04:00I Made It!<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpCnCc5oXMm7JymnUOZGULb5-nOpCvcBctQ5ml9_Yx9knHRusvaHfGJnPdK70IrVUY89n39Db-9dixqLPatfGknUo1oeaiZaYsVUSt8wdexuzl5mfBNF97tCacc2djtrjHL1PAQzwYyG4/s1600/USAWC+HQ.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1043" data-original-width="1600" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjpCnCc5oXMm7JymnUOZGULb5-nOpCvcBctQ5ml9_Yx9knHRusvaHfGJnPdK70IrVUY89n39Db-9dixqLPatfGknUo1oeaiZaYsVUSt8wdexuzl5mfBNF97tCacc2djtrjHL1PAQzwYyG4/s400/USAWC+HQ.jpg" width="400" /></a></div>
<div>
<span style="font-family: Georgia, "Times New Roman", serif;">I started my new job as Professor of Strategic Futures at the <a href="https://www.armywarcollege.edu/">US Army War College</a>
last week. So far, it has been a fairly predictable, if seemingly
unending, series of orientations, mandatory trainings, and security
briefings. I don't mind. To paraphrase <a href="https://biblehub.com/niv/matthew/11.htm">Matthew</a>, "What did I go into the Army to see? A man running without a PT belt?"</span><br />
<br />
<span style="font-family: Georgia, "Times New Roman", serif;">What
I have been impressed with is the extraordinary depth of knowledge and
genuine collegiality of the faculty. It is an interesting
feeling to be constantly surrounded by world class experts in virtually
any domain.</span><br />
<br />
<span style="font-family: Georgia, "Times New Roman", serif;">Equally impressive is the emphasis on
innovation and experimentation. I am surrounded by an example of this
right now. I am writing this post on one of a number of open access
commercial network machines in the War College library. In the back of
the room, a professor is leading an after action review of an exercise
built around <a href="https://www.compassgames.com/military-simulations/south-china-sea.html">Compass Games' South China Sea</a>
war game (BTW, if you think it odd that the Army would have students
play a scenario which is largely naval in nature, you are missing my
point about innovation and experimentation). </span><br />
<span style="font-family: Georgia, "Times New Roman", serif;"><br /></span>
<span style="font-family: Georgia, "Times New Roman", serif;">Scattered
throughout the rest of the library are recently acquired, odd-shaped pieces of furniture
designed to create collaborative spaces, quiet spaces, and resting
spaces (among others). Forms soliciting feedback suggest that the
library is working hard to figure out what kind of spaces its patrons
want, and what kind of furniture and equipment would best support those
needs. In the very rear of the building, there is a room undergoing a
massive reconstruction. No telling what is about to go in there, but it
is clear evidence that the institution is not standing still. </span><br />
<span style="font-family: Georgia, "Times New Roman", serif;"><br /></span>
<span style="font-family: Georgia, "Times New Roman", serif;">I will continue to write here on <a href="https://sourcesandmethods.blogspot.com/">Sources and Methods</a>, of course. I also hope to get a few things published on the War College's own online journal, <a href="https://warroom.armywarcollege.edu/">The War Room</a>
(Check it out if you haven't. It's very cool). Other than that, I look
forward to pursuing some of my old lines of research and adding a few
new ones as well.</span><br />
<br />
<span style="font-family: Georgia, "Times New Roman", serif;">For those of you who want to contact
me, you can call me in my office at 717-245-4665, email me at kristan
dot j dot wheaton dot civ at mail dot mil or, as always, email me at
kris dot wheaton at gmail dot com. You can also message me on <a href="https://www.linkedin.com/in/kwheaton">LinkedIn</a>.</span><br />
</div>
Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com3tag:blogger.com,1999:blog-2569772432953120875.post-11176829516550979002019-06-24T09:22:00.000-04:002019-06-24T09:22:54.806-04:00EPIC 2014: The Best/Worst Forecast Ever Made?<span style="font-family: "georgia" , "times new roman" , serif;">The eight minute film, <a href="https://youtu.be/eUHBPuHS-7s">EPIC 2014</a>, made a huge impact on me when it was released in 2004. If you have seen it before, it's worth watching it again. If you haven't, let me set it up for you before you click the play button below. </span><br />
<div>
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span></div>
<div>
<span style="font-family: "georgia" , "times new roman" , serif;">Put together by <a href="https://www.robinsloan.com/short-bio/">Robin Sloan</a> and <a href="https://www.revealnews.org/author/matt-thompson">Matt Thompson</a> way back in 2004, EPIC 2014 talked about the media landscape in 2014 <i>as if it had already happened</i>. In other words, they invented a "Museum of Media History", and then pretended, in 2004, to look backward from 2014 as a way of exploring how they thought the media landscape would change from 2004 to 2014. Watch it now; it will all make sense when you do:</span></div>
<div>
<br />
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/eUHBPuHS-7s" width="600"></iframe> <br />
<span style="font-family: "georgia" , "times new roman" , serif;">In some ways, this is the worst set of predictions ever made. Almost none of the point predictions are correct. Google never merged with Amazon, Microsoft did not buy Friendster, <a href="https://www.marketwatch.com/story/new-york-times-subscriber-numbers-are-skyrocketing-in-the-trump-age-2019-02-06">The New York Times did not become a print-only publication for the elderly</a>, and Sony's e-paper is not cheaper than real paper (It <a href="https://www.sony.com/electronics/digital-paper-notepads/dpt-series">costs 700 bucks and gets an average of just 3 stars (on Sony's site!</a>)).</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Sloan and Thompson did foresee Google's suite of online software services but did not really anticipate competition from the likes of Facebook, Twitter, LinkedIn, YouTube or any of a host of other social media services that have come to dominate the last 15 years.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">None of that seemed particularly important to me, however. It felt like just a clever way to get my attention (and it worked!). The important part of the piece was summed up near the end instead. EPIC, Sloan and Thompson's name for the monopolized media landscape they saw by 2014, is: </span><br />
<blockquote class="tr_bq">
<span style="font-family: "georgia" , "times new roman" , serif;">"...at its best and edited for the savviest readers, a summary of the world—deeper, broader and more nuanced than anything ever available before ... but at its worst, and for too many, EPIC is merely a collection of trivia, much of it untrue, all of it narrow, shallow, and sensational. But EPIC is what we wanted, it is what we chose, and its commercial success preempted any discussions of media and democracy or journalistic ethics."</span></blockquote>
<span style="font-family: "georgia" , "times new roman" , serif;">Switch out the word "EPIC" with the word "internet" and that still seems to me to be one of the best long-range forecasts I've ever seen. You could throw that paragraph up on almost any slide describing the state of the media landscape today, and most of the audience would likely agree. The fact that Sloan and Thompson were able to see it coming way back in 2004 deserves mad props.</span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">It also causes me to wonder about the generalizability of the lessons learned from forecasting studies based on resolvable questions. Resolvable questions (like "Will Google and Amazon merge by December 31, 2014?") are fairly easy to study (easier, anyway). Questions which don't resolve to binary, yes/no, answers (like "What will the media landscape look like in 2014?") are much harder to study but also seem to be more important. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">We have learned a lot about forecasting and forecasting ability over the last 15 years by studying how people answer resolvable questions. That's good. We haven't done that before and we should have. </span><br />
<br />
<span style="font-family: "georgia" , "times new roman" , serif;">Sloan and Thompson seemed to be doing something else, however. They weren't just adding up the results of a bunch of resolvable questions to see deeper into the future. There seems to me to be a different process involved. I'm not sure how to define it. I am not even sure how to study it. I do think that, until we can, we should be hesitant to over-apply the results of any study to real-world analysis and analytic processes.</span></div>
Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-62945843240507814952019-06-18T11:36:00.001-04:002019-06-18T11:36:08.568-04:00What Is #COOLINT?<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhq5MJz73PSTNsSJQLozLZGIQVMtTMNGhGkAjxAPDaxIeWvPKoHRBVBMhWsm1Sc1BEZzbBsWsYlHCZSd7KQTAwG0SUkj_IzTMqdwVW-jQlQY6MtS5lhJEANI5dvnQ8NR3_jalo6SRPBsaQ/s1600/apollo+11.PNG" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="339" data-original-width="561" height="241" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhq5MJz73PSTNsSJQLozLZGIQVMtTMNGhGkAjxAPDaxIeWvPKoHRBVBMhWsm1Sc1BEZzbBsWsYlHCZSd7KQTAwG0SUkj_IzTMqdwVW-jQlQY6MtS5lhJEANI5dvnQ8NR3_jalo6SRPBsaQ/s400/apollo+11.PNG" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><a href="https://apolloinrealtime.org/11/">Apollo 11 in Real-Time</a> is the very definition of cool.</i></td></tr>
</tbody></table>
<span style="font-family: Georgia, Times New Roman, serif;">HUMINT, SIGINT, OSINT--the specialized language of intelligence is all ate up with acronyms for the various collection disciplines. Intel wags have (for at least the last 40 years I have been doing this stuff) come up with a variety of clever (?) plays on this formulation. For example: RUMINT = Intelligence founded on rumors alone. DUMBINT = Intelligence too stupid to believe.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">COOLINT is usually reserved for something that is, well, cool but might not be particularly relevant to the question at hand. You want to show COOLINT to other people. You KNOW they will be interested in it. It's the clickbait of the intel world.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">A great example of COOLINT is the <a href="https://apolloinrealtime.org/11/">Apollo 11 In Real-time</a> website (the mobile version is OK but you will want to look at it on your PC or MAC. Trust me). In fact, I used the hashtag "#COOLINT" when I tweeted out this site this morning. The guys who put this amazing site together have mashed up all of the audio and video, all of the commentary, and all of the pictures into a single website that allows you to follow along with the mission from T - 1 minute to splashdown. It doesn't really have anything to do with intelligence, but, for a spacegeek like me, the <a href="https://apolloinrealtime.org/11/">Apollo 11 in Real-time</a> website belongs next to the word "cool" in the dictionary.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">I intend to argue here, however, that there is a more formal definition of COOLINT, one that is actually useful in analytic reporting. To do this, I want to first briefly explore the concepts of "relevant" and "interesting."</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">One of the hallmarks of good intelligence analysis is that it be relevant to the decisionmaker(s) being supported. <a href="https://www.dni.gov/files/documents/ICD/ICD%20203%20Analytic%20Standards.pdf">ICD 203</a> makes this mandatory for all US national security intel analysts but, even without the regulation, relevance has long been the standard in intel tradecraft.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">"Interesting" is a term which gets significantly less attention in intel circles. There is no requirement that good intel be interesting. It is ridiculous to think that good intel should meet the same standards as a good action movie or even a good documentary. That said, if I have two pieces of information that convey the same basic, relevant facts and one is "interesting" and the other is not </span><span style="font-family: Georgia, "Times New Roman", serif;">(for example, 500 words of statistical text vs. one chart)</span><span style="font-family: Georgia, Times New Roman, serif;">, I would be a bit of a fool not to use the interesting one. Intel analysts don't just have a responsibility to perform the analysis; they also have a responsibility to communicate it to the decisionmaker they are supporting. "Interesting" is clearly less important than "relevant" but, in order to communicate the analysis effectively, it is something that has to be considered.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">With all this in mind, it is possible to construct a matrix to help an analyst think about the kinds of information they have available and where it all should go in their analytic reports or briefings:</span><br />
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOclZcKKacfW-nUlsmBHORjhJ-P36CuO2gHagD_CHYWYmeIje_0lWgZe8ZM3agFMKfkeQoA4bNfwL7XbGfb9ZU7Ihtz9QHaT6x3WLYQIiOsPj4-E5SbRLpRt6NRMDYTqg4B22pY_FtYaE/s1600/interesting+vs+relevant.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="484" data-original-width="729" height="424" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOclZcKKacfW-nUlsmBHORjhJ-P36CuO2gHagD_CHYWYmeIje_0lWgZe8ZM3agFMKfkeQoA4bNfwL7XbGfb9ZU7Ihtz9QHaT6x3WLYQIiOsPj4-E5SbRLpRt6NRMDYTqg4B22pY_FtYaE/s640/interesting+vs+relevant.png" width="640" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>"Interesting" vs. "Relevant" in analytic reporting</i></td></tr>
</tbody></table>
<div class="separator" style="clear: both; text-align: center;">
</div>
<span style="font-family: Georgia, Times New Roman, serif;"><i>Interesting and relevant</i> information should always be considered for use in a report or brief. Length or time limits might preclude it, but if it meets both criteria, and particularly if it is a linchpin or a driver of the analysis, this kind of info highly likely belongs in the report.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;"><i>Relevant information which is not particularly interesting</i> might have to go in the report--it may be too relevant not to include. However, there are many ways to get this kind of info in the report or brief. Depending on the info's overall importance to the analysis, it might be possible to include it in a footnote, annex, or backup slide instead of cluttering up the main body of the analysis.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">Information that is <i>interesting but not relevant </i>is COOLINT. It is that neat little historical anecdote that has nothing to do with the problem, or that very cool image that doesn't really explain anything at all. The temptation to get this stuff into the report or brief is great. I have seen analysts twist themselves into knots to try to get a particular piece of COOLINT into a briefing or report. Don't do it. Put it in a footnote or an annex if you have to, and hope the decisionmaker asks you a question where your answer can start with, "As it so happens..."</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">Info which is <i>not interesting and not relevant</i> needs to be left out of the report. I hope this goes without saying.</span><br />
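For readers who like their rules of thumb explicit, the placement logic above can be sketched in a few lines of Python. This is my own illustrative sketch, not a piece of formal tradecraft: the function name, the 0-to-1 scores, and the 0.5 threshold are all invented for the example (the scores are deliberately continuous rather than binary, for reasons discussed in the caveats below).

```python
# Hypothetical sketch of the "Interesting" vs. "Relevant" matrix as a
# placement heuristic. Scores run 0.0-1.0 (scales, not binary flags);
# the 0.5 threshold is illustrative only.

def place_info(relevance: float, interest: float, threshold: float = 0.5) -> str:
    """Suggest where a piece of information belongs in an analytic report."""
    relevant = relevance >= threshold
    interesting = interest >= threshold
    if relevant and interesting:
        return "main body"           # always consider for the report or brief
    if relevant:
        return "footnote or annex"   # too relevant to drop, not engaging enough
    if interesting:
        return "COOLINT: hold back"  # save it for "As it so happens..."
    return "leave out"               # neither interesting nor relevant

print(place_info(0.9, 0.8))  # main body
print(place_info(0.9, 0.2))  # footnote or annex
print(place_info(0.1, 0.9))  # COOLINT: hold back
print(place_info(0.1, 0.1))  # leave out
```

Nothing about the heuristic requires code, of course; the sketch just makes the four quadrants and their consequences unambiguous.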
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">Three caveats to this way of thinking about info. First, I have presented this as if the decision is binary--info is either relevant OR irrelevant, interesting OR uninteresting. That isn't really how it works. It is probably better to think of these terms as if they were on a scale that weighs both criteria. It is possible, in other words, to be "kind of interesting" or "really relevant."</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">The second caveat is that both the terms interesting and relevant should be defined in terms of the decisionmaker and the intelligence requirement. Relevancy, in other words, is relevancy to the question; "interesting", on the other hand, is about communication. What is interesting to one decisionmaker might not be to another.</span><br />
<span style="font-family: Georgia, Times New Roman, serif;"><br /></span>
<span style="font-family: Georgia, Times New Roman, serif;">Finally, if you use this at all, use it as a rule of thumb, not as a law. There are always exceptions to these kinds of models. </span>Kristan J. Wheatonhttp://www.blogger.com/profile/02566135545863154089noreply@blogger.com0tag:blogger.com,1999:blog-2569772432953120875.post-30396211486032838842019-06-10T11:31:00.000-04:002019-06-10T11:31:12.126-04:00How To Teach 2500 Years Of Intelligence History In About An Hour<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin93KuZ_cBqelOfPTVSye-d_uSakzssDEYAHsI1QUOOnPcytj49FKJYMK8D36hGdIgkoQ4ArhGh7tXmBoFXRS2uEHM0xjx6rOHCZVQO2GcAg1Bl5_Yrb_uBP9LJUHJQPDnH2kU8AAZPcE/s1600/Inscribed_bamboo-slips_of_Art_of_War.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1067" data-original-width="1600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEin93KuZ_cBqelOfPTVSye-d_uSakzssDEYAHsI1QUOOnPcytj49FKJYMK8D36hGdIgkoQ4ArhGh7tXmBoFXRS2uEHM0xjx6rOHCZVQO2GcAg1Bl5_Yrb_uBP9LJUHJQPDnH2kU8AAZPcE/s400/Inscribed_bamboo-slips_of_Art_of_War.jpg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><a href="https://en.wikipedia.org/wiki/Sun_Tzu"><i>Original version of the Art of War by Sun-Tzu</i></a></td></tr>
</tbody></table>
<span style="font-family: "georgia" , "times new roman" , serif;">As with most survey courses, Introduction to Intelligence Studies has a ton of information that it needs to cover--all of it an inch deep and a mile wide. One of the most difficult parts of the syllabus to teach, however, is intelligence history.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">Whether you start with the <a href="https://biblehub.com/joshua/2-1.htm">Bible</a> or, as I do, with<a href="https://ebooks.adelaide.edu.au/s/suntzu/art-of-war/chapter13.html"> Chapter 13 of <i>The Art Of War</i></a>, you still have 2500 years of history to cover and typically about an hour long class to do it. Don't get me wrong. I think the history of intelligence ought to be at least a full course in any intelligence studies curriculum. The truth is, though, you just don't have time to do it justice in a typical Intel 101 course.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">I was confronted with this exact problem last year. I had not taught first-year students for years, and when the time came in the syllabus to introduce these students to intel history, I was at a bit of a loss. Some professors gloss over ancient history and start with the National Security Act of 1947. Some compress it even more and focus entirely on post-Cold War intelligence history. Others take a more expansive view and select interesting stories from different periods of time to illustrate the general role of intelligence across history. </span><br />
<span style="font-family: "georgia" , "times new roman" , serif;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif;">All of these approaches are legitimate given the topic and the time constraints. I wanted, however, to try to make the history of intel a bit more manageable for students new to the discipline. I hit on an approach that makes sense to me and seemed to work well with the students. I call it the <i>Four Ages Of Intelligence</i>.</span><br />
<span style="font-family: "georgia" , "times new roman" , serif; text-align: justify; text-indent: 32px;"><br /></span>
<span style="font-family: "georgia" , "times new roman" , serif; text-align: justify; text-indent: 32px;">The first age I call the <i>Age of Concentration</i>. In ancient times, power and knowledge were concentrated in the hands of a relatively small number of people. The king or queen, their generals, and the small number of officers and courtiers who could read or write were typically both the originators and targets of intelligence efforts. These efforts, in turn, were often guided by the most senior people in a government. Sun Tzu noted, "</span><span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">Hence it is that with none in the whole army are more intimate relations to be maintained than with spies." George Washington, as well, was famous not only as a general but also as a <a href="https://www.mountvernon.org/george-washington/the-revolutionary-war/spying-and-espionage/george-washington-spymaster/">spymaster</a>. </span></span><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">The <i>Age of Concentration</i> lasted, in my mind, from earliest times to about the early 1800's. The nature of warfare began to change rapidly after the American and French Revolutions. <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwHs-7cap0JAYjSj_A9BRkcTBLoNJYBU0elNb8v8NEWUYLZbdvtfm4LyU0CtF0IXpK4Qm24opMVtAV3rSvXOQiDsr_LLhixepkyemSal9EzBixq-bGgYOIbDOXDMo5bufO4-gl97tRlJQ/s1600/The_Capture_of_the_Hessians_at_Trenton_December_26_1776.jpeg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="1063" data-original-width="1600" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwHs-7cap0JAYjSj_A9BRkcTBLoNJYBU0elNb8v8NEWUYLZbdvtfm4LyU0CtF0IXpK4Qm24opMVtAV3rSvXOQiDsr_LLhixepkyemSal9EzBixq-bGgYOIbDOXDMo5bufO4-gl97tRlJQ/s400/The_Capture_of_the_Hessians_at_Trenton_December_26_1776.jpeg" width="400" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i><a href="https://en.wikipedia.org/wiki/Military_career_of_George_Washington#Espionage">Washington and the capture of the Hessians at Trenton. </a> </i></td></tr>
</tbody></table>
Large citizen armies and significant technological advances (railroads, telegraphs, photography, balloons!) made the process of running spy rings and collating and analyzing the information they collected too large for any one person or even a small group of people to manage. </span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">Enter the <i>Age of Professionalization</i>. The 1800's saw the rise of the <a href="https://en.wikipedia.org/wiki/Staff_(military)">staff system</a> and the <a href="https://en.wikipedia.org/wiki/Civil_service">modern civil service</a> to help generals and leaders manage all the things these more modern militaries and governments had to do. Of course, there had always been courtiers and others to do the king's business but now there was a need for a large number of professionals to deal with the ever-growing complexities of society. The need for more professionals, in turn, demanded standardized processes that could be taught. </span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">For me, the <i>Age of Professionalization</i> lasted until the end of World War II when the <i>Age of Institutionalization</i> began. Governments, particularly the US Government, began to see the need for permanent and relatively large intelligence organizations as a fundamental part of government. <table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: right; margin-left: 1em; text-align: right;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihDNf1Xz0vwVmUs_m6MweYp4-oh-NlGdqHO1hAZi67qABDkwLyBLTxz56Fi6Z05bbD8oVW80uPUckzoxuQFfqIWG0wFsZ9Ou9MEIzsuygO6Cr_c9C-KfEFlPFIZIVnJCqYOLIAFRTrYDg/s1600/cia+KGB+pic.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="720" data-original-width="960" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihDNf1Xz0vwVmUs_m6MweYp4-oh-NlGdqHO1hAZi67qABDkwLyBLTxz56Fi6Z05bbD8oVW80uPUckzoxuQFfqIWG0wFsZ9Ou9MEIzsuygO6Cr_c9C-KfEFlPFIZIVnJCqYOLIAFRTrYDg/s320/cia+KGB+pic.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;"><i>Logos of the CIA And KGB</i></td></tr>
</tbody></table>
Staffs and budgets grew. Many organizations came (more or less) out of the shadows. CIA, KGB, MI5 (and MI6), ISI, and MSS all became well-known abbreviations for intelligence agencies. The need for intelligence-like collection and analysis of information became obvious in other areas as well. Law enforcement agencies, businesses, and even international organizations started to develop "intelligence units" within their organizational structures. </span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">All of this lasted until about 1994 when, with the advent of the World Wide Web, the <i>Age of Democratization</i> began. Seven years ago (!), I wrote an article called <a href="https://sourcesandmethods.blogspot.com/search?q=5+things+only+spies+used+to+do">"Top Five Things Only Spies Used To Do But Everyone Does Now."</a> It covered a whole range of activities, like using sophisticated ciphers to encrypt data and examining detailed satellite photos, that used to be the purview of spies and spies alone. Since then, the trend has only accelerated. <a href="https://www.justice.gov/storage/report.pdf">Massive internet-based deception operations</a> and the rise of <a href="https://en.wikipedia.org/wiki/Deepfake">deepfake technology</a> are turning us all into spymasters, weighing and sorting information wheat from information chaff. Not only the threats but also the opportunities have grown exponentially. Savvy users now have access to more good information and a greater ability to connect, to learn, and to understand the things that are critical to their success or failure but outside their control than ever before--and they can do this on a personal rather than an institutional level.</span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">There are a couple of additional teaching points worth making here. First is the role of information technology in all of this. As the technology for communicating and coordinating activities has improved, the intelligence task has become more and more complicated. This, in turn, has required the use of more and more people to manage the process, and that has changed how the process is done. Other disciplines have been forced to evolve in the face of technological change. It is no surprise, then, that intelligence is also subject to similar evolutionary pressures.</span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">It is also noteworthy, however, that the various ages of intelligence have tended to become shorter as technological capabilities have grown near-exponentially. In fact, when you map the length of the four ages on a logarithmic scale (see below) and draw a trendline, you get a pretty good fit. It also appears that the current age, the <i>Age of Democratization</i>, might be a bit past its sell-by date. This, of course, raises the question: What age comes next? I'm voting for the <i>Age of Anarchy</i>...and I am only half kidding.</span></span><br />
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;"><br /></span></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO9Rv-EZ2E2dtpORji8gUFzFj69jJ7iFrcG8v9Yl68cr-z3KoUKzCpCt1GtrAM3eMKKu1Q9kpY73UjV4JUWDIHA3I3MaOwjIRFZGgKLR5avekJ5Xp4tLvrQk8UR_BBSQloL2KuQ1Z5ZWU/s1600/ages+of+intel+and+length.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="378" data-original-width="719" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiO9Rv-EZ2E2dtpORji8gUFzFj69jJ7iFrcG8v9Yl68cr-z3KoUKzCpCt1GtrAM3eMKKu1Q9kpY73UjV4JUWDIHA3I3MaOwjIRFZGgKLR5avekJ5Xp4tLvrQk8UR_BBSQloL2KuQ1Z5ZWU/s640/ages+of+intel+and+length.PNG" width="640" /></a></div>
<br />
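<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">For readers who want to try the log-scale trendline exercise themselves, here is a minimal Python sketch. The age lengths below are illustrative assumptions (the chart above uses its own dates); only the rough boundary years 1800, 1945, and 1994 come from the text.</span></span><br />

```python
import math

# Hypothetical age lengths in years -- illustrative assumptions only.
# Only the ~1800, 1945, and 1994 boundaries are taken from the post.
ages = {
    "Pre-professional": 2000,     # assumed: antiquity to ~1800
    "Professionalization": 145,   # ~1800 to 1945
    "Institutionalization": 49,   # 1945 to 1994
    "Democratization": 30,        # 1994 to ~2024 (still running)
}

# Least-squares fit of log10(length) against age index -- a straight
# line here is what "a pretty good fit on a log scale" means.
xs = list(range(len(ages)))
ys = [math.log10(v) for v in ages.values()]
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean

# Extrapolate one step to estimate the length of the *next* age.
next_len = 10 ** (intercept + slope * n)
print(f"slope={slope:.2f}, predicted next-age length ~{next_len:.0f} years")
```

<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">With these assumed numbers the slope is negative and the extrapolated next age is far shorter than the current one, which is the shape of the argument the chart makes: each age runs shorter than the last.</span></span><br />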
<span style="background-color: #fcfff6; text-align: justify; text-indent: 32px;"><span style="font-family: "georgia" , serif;">Is this a perfect way of thinking about the history of intelligence? No, of course not. There are many, many exceptions to the broad patterns I see here. Still, in a survey class, with limited time to cover the topic, I think focusing on the broad patterns that seem to dominate makes some sense. </span></span>