Monday, February 5, 2024

The Battle of Moore's Chasm And Who Will Win The Next War

There is a battle going on right now.  It is being fought by every military in the world.  

Victory in this battle is crucial.  The militaries on the winning side will likely be on the winning side of the next large-scale war.  The losers will likely be forgotten, studied only for the mistakes they made.

This is the Battle of Moore's Chasm.

This battle is taking place everywhere.  There are physical manifestations of it in Ukraine, the Taiwan Strait, and Gaza, but there are equally important conceptual and theoretical manifestations of it in the Pentagon, on Arbatskaya Square in Moscow, and deep inside the August 1 Building in Beijing.

What this battle is about and how to win it are the subjects of this article.

What Is The Battle Of Moore's Chasm?

To understand this battle it is necessary first to travel back to 1962.  It was then that a young professor of rural sociology, Everett Rogers, published what was to become the second most cited book in all the social sciences, Diffusion of Innovations.

While the book contains much that is still relevant today, the part that is important to the current battle is the idea that the "market" for an idea, an innovation, a new concept, or a technology generally follows a bell curve, and that this bell curve can be divided into five major categories of adopters (See chart below):  Innovators, Early Adopters, Early Majority, Late Majority, and Laggards.

Source:  https://en.wikipedia.org/wiki/Diffusion_of_innovations
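Rogers' five categories, incidentally, are just slices of a standard normal curve, cut at one and two standard deviations from the mean adoption time.  A quick Python sketch (standard library only) shows where the familiar percentages come from:

```python
from statistics import NormalDist

# Rogers' adopter categories as slices of a standard normal curve,
# measured in standard deviations from the mean adoption time.
dist = NormalDist()  # mean 0, standard deviation 1

categories = {
    "Innovators":     dist.cdf(-2),                 # beyond -2 sd   (~2.5%)
    "Early Adopters": dist.cdf(-1) - dist.cdf(-2),  # -2 to -1 sd    (~13.5%)
    "Early Majority": dist.cdf(0) - dist.cdf(-1),   # -1 sd to mean  (~34%)
    "Late Majority":  dist.cdf(1) - dist.cdf(0),    # mean to +1 sd  (~34%)
    "Laggards":       1 - dist.cdf(1),              # beyond +1 sd   (~16%)
}

for name, share in categories.items():
    print(f"{name:>14}: {share:6.1%}")
```

The printed shares match the chart above, give or take Rogers' rounding of the innovator slice.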

Fast forward to 1989, when two researchers at the famous consulting firm, Regis McKenna, Inc. (RMI), Warren Schirtzinger and James Lee, hypothesized and then demonstrated that there was a "chasm" between the early adopters and the early majority.  

This chasm existed largely due to the different motivations of the members of these groups.  Innovators and Early Adopters are very much into cool, new things.  They tend to be more enamored with the potential of a new technology or process than they are with the utility or scalability of these products.  Early and Late Majority motivations, on the other hand, typically have more to do with solving particular problems and doing so at the lowest cost and at a scale that is appropriate for their organization.

Another researcher at RMI, Geoffrey Moore, picked up on the idea and, in 1991, published what was to become one of the most influential business books ever, Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers. Now in its third edition, it has sold over a million copies and is considered by Inc. magazine to be one of the top ten marketing books ever written. Think Insights has a good article that lays out the main ideas in detail, but for our purposes, their chart showing the chasm is sufficient:

Think Insights (January 3, 2024) Crossing The Chasm – Technology Adoption Lifecycle. Retrieved from https://thinkinsights.net/strategy/crossing-the-chasm/.

Most importantly, Moore's Chasm has become synonymous with the place where good ideas go to die.  Whether it is a lack of capital, innovator inexperience, or an inability to get traction in the much more lucrative Early and Late Majority markets, failure to bridge the chasm leads, at best, to relegation to a niche market and, at worst, to inevitable decline and bankruptcy. 

While almost all of these ideas and the literature accompanying the chasm have come out of business journals, the concept has a direct and immediate application to issues faced by militaries around the world.  Indeed, Secretary of the Army Christine Wormuth recently said:

“This is a crucial moment for the Army to summon our ingenuity, to innovate and invest in emerging technologies, to test and develop in uncharted areas like artificial intelligence and contested domains like space and cyber, to reshape and transform the force to be more adaptable and flexible.”

Yet, across the globe, the difference between how much militaries want to innovate and how much they are actually innovating seems to be heading in the wrong direction.  As the Boston Consulting Group highlighted in its report last year on the defense innovation readiness gap:

"One of (the report's) most important findings is that the defense innovation readiness gap significantly increased in the year since our first study. Across 10 of the 11 dimensions of readiness assessed, MoDs failed to match their 2021 results, by an average of 8%."

Moreover, there is some evidence to suggest that this chasm exists within the US Department of Defense as well.  A recent report by the Center for Security and Emerging Technology put it this way:

"However, under the DOD’s current organizational structure, defense innovation is disconnected from defense procurement. This division limits innovation offices’ ability to affect technological change across the military and excuses procurement offices from integrating cutting-edge capabilities into major systems and platforms." (Italics mine)

The Battle of Moore's Chasm is real, and right now, no one is winning.

Who Will Fight This Battle?

While there are a number of possible ways to win the battle (the CSET paper, for example, references three), all of these courses of action require the right people to implement them.  Acquisition officers, policy wonks, commanders, and others all do and will have their role to play.  The most important warrior in this battle, however, is the innovation champion.

Developed about the same time as Diffusion of Innovations Theory, the idea of an innovation champion was first put forward by Dr. Donald Schön in the Harvard Business Review article, "Champions for radical new inventions."  Since then, thousands of articles (Google Scholar says about 2140) have been written about the role, traits, and importance of innovation champions in driving modernization and incorporating emerging technologies across a wide variety of fields.  

All of the more modern definitions of innovation champion are similar to the one developed by researchers at the German Graduate School of Management and Law:  "an innovation champion is an individual or a group of individuals who is willing to take risks to enthusiastically promote innovations through the various stages of the development process."


This same paper identified five skills, seven traits, and three different kinds of knowledge that were characteristic of innovation champions based on a systematic literature analysis looking at 85 of the most influential journal articles on the topic (See image to the left).

The approach here is similar to the approach taken by the US Army in teaching leadership.  With leadership, the Army focuses on Attributes (roughly equivalent to Traits in the chart to the left) and Competencies (roughly equivalent to Skills and Knowledge in the chart).  A fundamental premise of Army leadership training is that "most people have leadership potential and can learn to be effective leaders."  The same could be said, perhaps, for innovation champions.

While the approach is similar, there is not a one-to-one correlation between what the Army thinks makes a good leader and what is necessary for an innovation champion (See chart below and to the right).

Source:  ADP 6-22, Army Leadership and the Profession, 2019

In short, while routine Army leadership training likely covers many of the attributes of an innovation champion, it is equally likely that there are several gaps that will need to be filled if the Army is to have the warriors it needs for the ongoing battle.

Specifically, having the minimal technical knowledge necessary to champion particular innovations jumps out as one such requirement.  Many soldiers are so deeply involved in the day-to-day activities of running the Army or fighting in the country's conflicts that they have little time for understanding arcane emerging technologies such as 3D printing, quantum computing, synthetic biology, 6G and 7G telecommunications systems, augmented reality, and others.  Yet decisions, potentially costing billions of dollars, regarding the development, testing, and fielding of these technologies will need to be made regularly and soon if the US Army's technical advantage is to be maintained.

Likewise, would-be innovation champions will need to learn the transformational leadership skills necessary to manage teams of experts from disparate fields.  Most military officers have grown up in an environment similar to Machiavelli's Kingdom of the Turk, which "is governed by one lord, the others are his servants; and, dividing his kingdom into sanjaks, he sends there different administrators, and shifts and changes them as he chooses."  

This hierarchical organization with its emphasis on commanders and their intent suddenly gives way when confronted by interdisciplinary teams of experts and contractors in the diverse technical fields common to innovation activities.  Here the comfortable chain of command often is replaced with something akin to Machiavelli's Kingdom of the Franks, where officers find themselves "placed in the midst of an ancient body of lords, acknowledged by their own subjects, and beloved by them; they have their own prerogatives, nor can the king take these away except at his peril."  Leading innovation activities, in short, requires different skills than leading at the tactical and operational levels.

Where Will These Champions Come From?

Some of these Skills and Knowledge categories also typically require a certain level of experience.  For example, all officers understand their organization to a certain extent, but it takes a relatively senior officer to have a feel for the entire enterprise.  Likewise, officers, as they move from one assignment to another, develop useful networks, but the kind of depth and breadth necessary to lead innovation activities typically requires a deeper Rolodex.  

Officers with the experience, organizational understanding, and networks to do this kind of work are generally at the level of Lieutenant Colonel and Colonel, the O5s and O6s of the Army.  LTC Richard Brown put it bluntly in his essay for AUSA: "Staff colonels are the Army’s innovation center of gravity."

Officers this senior often come with some baggage as well, however.  For example, unless an officer's career has been carefully managed, it is entirely possible that some of the essential Traits of an innovation champion, such as creativity, risk-taking, or optimism, have been suppressed or even beaten out by an unforgiving system.  Fortunately, the right training and environment allow much of this damage to be repaired.  Creativity, for example, "is something you practice...not just a talent you are born with."

All this--filling in technical knowledge and leadership gaps while simultaneously re-energizing officers closer to the end of their careers than to the beginning--is, in military terms, a "heavy lift," a difficult, perhaps impossible, job.  Making it even more challenging is the fact that there is only one realistic opportunity to do it and that is at a senior service college.  In the Army's case, that is the US Army War College.  

The War College, as it turns out, is the critical chokepoint in the Battle of Moore's Chasm.

The 10-month stint at the War College comprises the last in-depth, formal military education most senior officers will receive.  After this, they typically move on to senior staff positions or take command of brigade-sized units.  Relatively few of these graduates will go on to become generals, and most will complete only one or two more assignments before retiring.  If officers don't get this education at the War College, they are unlikely to get this kind of specialized training once they get back to the field.

Fortunately, I think the War College generally understands this, and I am involved in two specific activities that are deliberately designed to address these challenges: the Futures Seminar and the Futures Lab.

The Futures Seminar uses real questions from real senior defense officials to jumpstart a year-long project.  Students typically not only delve deep into the world of technology and more generalized "futures-thinking" but also gain practical skills in managing highly diverse teams of experts as they integrate their thinking in pursuit of the best possible answer to their sponsor's question.

The Futures Lab also seeks to fill the tech knowledge gap but in a more hands-on way, allowing students an opportunity to spend as much or as little time as they want learning the ins-and-outs of technologies such as 3D printing, drones, virtual reality, and robots.  With a wide variety of technologies and expert assistance available, the Lab creates an environment designed to re-awaken creativity, enthusiasm, and risk-taking.

Who Will Win?

Andrew Krepinevich, a military strategist and award-winning author, states in his recent book, The Origins of Victory: How Disruptive Military Innovation Determines the Fates of Great Powers:

"Viewed from a lagging competitor’s perspective, failing to keep pace in exploiting the potential of an emerging military revolution risks operating at a severe disadvantage. Consequently, the common challenge for all major-power militaries in a period of military revolution is to be the first to identify its salient characteristics and exploit its potential. Silver medals are not awarded to those who come in second."

If the side that innovates best--the side that not only employs emerging technologies but also combines them into a system where the whole can be more than the sum of its parts--is the side that wins, then the crucial battle, the first fight, is the Battle of Moore's Chasm, and the US Army will need trained and ready innovation champions to win it.

Note:  The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government. 

Tuesday, December 12, 2023

Forget Artificial Intelligence. What We Need Is Artificial Wisdom

I have been thinking a lot about what it means to be "wise" in the 21st Century.

Wisdom, for many people, is something that you accrue over a lifetime.  "Wisdom is the daughter of experience," insisted Leonardo da Vinci.  Moreover, the sense that experience and wisdom are linked seems universal.  There's an African proverb, for example, of which I am particularly fond that claims, "When an old person dies, a library burns to the ground."  

Not all old people are wise, of course.  Experience sometimes erodes a person, like the steady drip-drip of water on a stone, such that, in the end, there is nothing but a damn fool left.  We have long had sayings about that as well.

Experience, then, probably isn't the only way to become wise and may not even be a necessary pre-condition for wisdom.  How then to define it?

One thing I do know is that people still want wisdom, at least in their leaders.  I know this because I asked my contacts on LinkedIn about it.  100 responses later, virtually everyone said they would rather have a wise leader than an intelligent one.  

These results suggest something else as well:  That people know wisdom when they see it.  In other words, the understanding of what wisdom is or isn't is not something that is taught but rather something that is learned implicitly, by watching and evaluating the actions of ourselves and others.

Nowhere is this more obvious than in the non-technical critiques of artificial intelligence (AI).  The authors of these critiques seem nervous, even frightened, about the elements of humanity that are missing in the flawed but powerful versions of AI that have recently been released upon the world.  The AIs, in their view, seem to lack moral maturity, reflective strategic decision-making, and an ability to manage uncertainty, and no one, least of all these authors, wants AIs without these attributes to be making decisions that might change, well, everything.  This angst seems to be shorthand for a simpler concept, however:  We want these AIs to be not just intelligent, but wise.

For me, then, a good bit of the conversation about AI safety, AI alignment, and "effective altruism" comes down to how to define wisdom.  I'm not a good enough philosopher (or theologian) to have the answer to this but I do have some hypotheses.

First, when I try to visualize a very intelligent person who has only average wisdom, I imagine a person who knows a large number of things.  Their knowledge is encyclopedic but their ability to pull things together is limited.  They lack common sense.  In contrast, when I try to imagine someone who is very wise but of just average intelligence, I imagine someone who knows considerably less but can see the connections between things better and, as a result, can envision second and third order consequences.  The image below visualizes how I see this difference:

This visualization, in turn, suggests where we might find the tools to better define artificial wisdom: in network research, graph theory, and computational social science.
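To make that intuition concrete, here is a toy sketch of my own (an illustration, not anything drawn from the network-science literature): two small graphs, one with many nodes but few connections, and one with fewer nodes but richer connections.  Graph density, the share of possible connections that actually exist, captures the difference:

```python
from itertools import combinations

def density(nodes, edges):
    """Fraction of all possible undirected edges that are actually present."""
    possible = len(list(combinations(nodes, 2)))
    return len(edges) / possible if possible else 0.0

# "Intelligent": many facts (nodes), but few connections between them.
intelligent_nodes = list(range(10))
intelligent_edges = [(0, 1), (2, 3), (4, 5)]

# "Wise": fewer facts, but densely connected.
wise_nodes = list(range(5))
wise_edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4)]

print(f"intelligent: {density(intelligent_nodes, intelligent_edges):.2f}")
print(f"wise:        {density(wise_nodes, wise_edges):.2f}")
```

The "intelligent" graph knows twice as many things but connects almost none of them; the "wise" graph connects most of what it knows, which is where the second and third order consequences live.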

I also think there are some hints lurking in biology, psychology, and neuroscience, specifically in the study of cognitive biases.  Over the last 30 years or so, cognitive biases have come to be seen in many disciplines as "bad things"--predictable human failures in logical reasoning.  Recently, though, some of the literature has started to question this interpretation.  If cognitive biases are so bad, if they keep us from making rational decisions, then why aren't we all dead?  Why haven't evolutionary pressures weeded out the illogical?  

If you accept the premise that cognitive biases evolved in humans because they were useful (even if only on the savannahs of east Africa), then it raises the question, "What did they help us do?"

My favorite attempt at answering this question is the Cognitive Bias Codex (See image below).

By Jm3 - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=51528798

Here the authors grouped all of the known cognitive biases into four major categories sorted by what they helped us do:

  • What to remember
  • What to do when we have too much information
  • What to do when there is not enough meaning
  • What to do when we need to act fast

Interestingly, all of these areas are new areas of research in the AI community (For examples see:  Intentional Forgetting in Artificial Intelligence Systems: Perspectives and Challenges and Intentional Forgetting in Distributed Artificial Intelligence).  

Even the need to act fast, which seems like something at which AI excels, becomes more about wisdom than intelligence when decomposed.  Consider some of the Codex's sub-categories within the need to act fast:

  • We favor simple-looking options and complete information over complex, ambiguous options.
  • To avoid mistakes, we aim to preserve autonomy and group status, and avoid irreversible decisions.
  • To get things done, we tend to complete things we've invested time and energy in.
  • To stay focused, we favor the immediate, relatable thing in front of us.
  • To act, we must be confident we can make an impact and feel what we do is important.

All of these seem to have more to do with wisdom than intelligence.  Furthermore, true wisdom would be most evident in knowing when to apply these rules of thumb and when to engage more deliberative System 2 skills.

As I said, these are just hypotheses, just guesses, based on how I define wisdom.  Despite having thought about it for quite some time, I am virtually certain that I still don't have a good handle on it.

But that is not to say that I don't think there is something there.  Even if only used to help communicate to non-experts the current state of AI (e.g. "Our AIs exhibit some elements of general intelligence but very little wisdom"), it can, perhaps, help describe the state of the art more clearly while also driving research more directly.  

In this regard, it is also worth noting that modern AI dates back to at least the 1950s, and that it has gone through two full-blown AI "winters" where most scientists and funders thought that AI would never go anywhere.  In other words, it has taken many years and been a bit of a roller coaster ride to get to where we are today.  It would seem unrealistic to expect artificial wisdom to follow a different path, but it is, I would argue, a path worth taking.

Note:  The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government. 

Monday, October 30, 2023

The Catch 22 Of Generative AI

A true 3D chart done in the style of
Leonardo da Vinci (Courtesy MidJourney)
I have always wanted to be able to easily build true 3D charts.  Not one of those imitation ones that just insert a drop shadow behind a 2D column and call it "3D," mind you.  I am talking about a true 3D chart with X, Y, and Z axes.  While I am certain that there are proprietary software packages that do this kind of thing for you, I'm cheap, and the free software is either clunky or buggy, and I don't have time for either.

I was excited, then, when I recently watched a video that claimed that ChatGPT could write Python scripts for Blender, the popular open source animation and 3D rendering tool.  I barely know how to use Blender and do not code in Python at all, but am always happy to experiment with ChatGPT.

Armed with very little knowledge and a lot of hope, I opened up ChatGPT and asked it to provide a Python script for Blender that would generate a 3D chart with different colored dots at various points in the 3D space.  I hit enter and was immediately rewarded with what looked like 50 or so lines of code doing precisely what I asked!

I cut and pasted the code into Blender, hit run, and...I got an error message.  So, I copied the error message and pasted it into ChatGPT and asked it to fix the code.  The machine apologized(!) to me for making the mistake and produced new code that it claimed would fix the issue.  

It didn't.

I tried again and again.  Six times I went back to ChatGPT, each time with slightly different error messages from Blender.  Each time, after the "correction," the program failed to run and I received a new error message in return.

Now, I said I didn't know how to code in Python, but that doesn't mean I can't code.  Looking over the error messages, it was obvious to me that the problem was almost certainly something simple, something any Python coder would be able to figure out, correct, and implement.  Such a coder would have saved a vast amount of time as, even when you know what you are doing, 50 lines of code takes a good bit of time to fat-finger.  

In other words, for generative AI to be helpful to me, I would need to know Python, but the reason I went to a generative AI in the first place was because I didn't know Python!  

And therein lies the Catch-22 of generative AI.  

I have seen this same effect in a variety of other situations.  I asked another large language model, Anthropic's Claude, to write a draft of a safety SOP.  It generated a draft very quickly and with surprising accuracy.  There were, however, a number of things that needed to be fixed.  Having written my fair share of safety SOPs back in the day, I was able to quickly make the adjustments.  It saved me a ton of time.  Without understanding what a good safety SOP looked like to begin with, however, the safety SOP created by generative AI risked being, well, unsafe.

At one level, this sounds a lot like some of my previous findings on generative AI, such as "Generative AI is a mind-numbingly fast but incredibly average staff officer" or "Generative AI is better at form than content."  And it is.

At another level, however, it speaks to the need for an education system that both keeps up with advancements in generative AI and maintains pre-generative-AI standards.  The only way, at least for now, to use generative AI safely is to know more than the AI about the AI's outputs--to know enough to spot the errors.  The only way, in turn, to know more than generative AI is to learn it the old-fashioned way--grind through the material on your own until you are comfortable that you understand it.  Ironically, AI may be able to speed up the grind, but the learning is still on you.  

At another, deeper, level, it is more disturbing.  I worry that people will ask generative AI about things that they think they know but don't.  Blender acted as a check on both my ignorance and the AI's errors in the first example.  My own experience with safety SOPs acted as a check on the AI in the second example.  What about areas such as political science, security studies, and military strategy where subjectivity reigns?  What if there aren't any checks on the answers generative AI produces?  Dumb questions will lead to incorrect answers, which will lead to dumber questions and more incorrect answers--a sort of AI-powered Dunning-Kruger death spiral.  

This mirrors, of course, one of the many concerns of AI experts.  I also know that there are many good people working hard to ensure that these kinds of scenarios rarely if ever play themselves out.  That said, I am reminded of an old Mark Twain saying that was a near perfect forecast of the problems with social media:  “A lie can travel halfway around the world while the truth is putting on its shoes.”  Perhaps that should be updated for the modern age:  "An AI energized chain reaction of stupid can destroy the world while the prudent are still slipping on their crocs."  

Not as catchy, I suppose, but equally prescient?

Tuesday, October 3, 2023

Generative AI Is Better At Form Than Content

Prudens Futuri prominently displayed in
the US Army War College's new academic building


All communications have at least two aspects, form and content.  Content is what we want to say.  It could be about politics, or philosophy, or here at the War College, rather predictably, about war.

Form is different.  The medium through which we express ideas impacts how we communicate those ideas.  For instance, the same core concepts about politics, philosophy, or war can be conveyed very differently depending on whether they are presented through a 20-minute PowerPoint presentation or a 2,000-word paper.  The form we choose defines how we communicate the message.

Let me give you an example.  My current favorite generative AI is Anthropic's Claude.  I asked it to explain the meaning of the US Army War College's motto, "Prudens Futuri," to the incoming class of students.  Here is what it came back with:

"The Latin phrase "prudens futuri" translates to "foreseeing the future" or "provident of the future." It refers to the wisdom and foresight required of effective leaders. Prudent leaders do not simply react to current circumstances - they have a vision for the future and proactively position their forces to capitalize on anticipated events. As Sun Tzu advised, foreknowledge enables leaders to seize opportunities and gain advantage. Prudens futuri requires gathering intelligence, conducting analysis, forecasting threats, and planning for contingencies. Leaders who excel at prudens futuri can execute bold, unconventional strategies to defeat enemies who rely on reactionary thinking. Prudens futuri epitomizes the innovative, forward-looking mindset the Army must adopt to maintain strategic dominance."

Not bad.  I also asked it to explain the same thing to a 13 year old:

"'Prudens futuri' is a Latin phrase that means "foreseeing the future." It's about being wise enough to predict what might happen next. Great leaders don't just respond to what's in front of them - they think ahead. Imagine you're a general planning to attack an enemy. A wise general would send scouts to spy first and learn the enemy's weaknesses. Then the general can make a smart battle plan to sneak around and surprise them from behind. The Latin phrase teaches that real leaders don't just react - they use their brains to see the future and stay one step ahead. That's what "prudens futuri," or "foreseeing the future," is all about!"

Also not bad--particularly given the change in audience.  

It is even possible to ask the same of a text-to-image AI like Midjourney, used to create the image below from the same prompt:


In this image, I am particularly taken with the way the AI combined historical references to foresight, such as the Oracle of Delphi, with more futuristic concepts like the (apparent) space elevator in the background.  In fact, I find asking text-to-image generators to imagine some futuristic scene to be a very practical way to fill in gaps in futures thinking.  I inevitably see something that makes sense in the image that was not accounted for in the text (kind of like The Spandrels of San Marco and the Panglossian Paradigm).

My point?  I have yet to have a generative AI create something that did not need tweaking, at the least, on the content side.  I have been really happy, however, with generative AI's ability to master particular forms.  

This is one of the reasons, I think, I have quite recently become a bit uncomfortable with policies that talk about citing a generative AI as if it were a source.  It is, I suppose...but it seems less of a source than Wikipedia, and, while I love Wikipedia and believe it is one of the great wonders of the modern world, I would not cite Wikipedia for anything other than background.  I require my students, for example, to find a reputable source to validate anything that a generative AI might come up with when making an estimate.  And, if you are going to make a student find a reputable source anyway, why would they need the generative AI at all?  The answer, of course, is for the form.  

This may not be true forever.  Generative AI is getting better at a brisk pace.  There may come a day when generative AI is looked upon as an authority, equal to peer-reviewed papers.  Until that time, we should still appreciate its talents for helping to craft the message. For now, generative AI is an unparalleled writing partner, not an independent thinker. By acknowledging its current limits alongside its awesome potential, we grant generative AI its proper place: revolutionizing how we communicate knowledge, while established methods still reign over what we know.

Wednesday, August 16, 2023

Answers For Pennies, Insights For Dollars: Generative AI And The Question Economy

No one seems to know exactly where the boom in Generative AIs (like ChatGPT and Claude) will lead us, but one thing is for certain:  These tools are rapidly driving down the cost of getting a good (or, at least, good enough) answer very quickly.  Moreover, they are likely to continue to do so for quite some time.  

The data is notional
but the trend is unquestionable, I think.

To be honest, this has been a trend since at least the mid-1800s, with the widespread establishment of public libraries in the US and UK.  Since then, improvements in cataloging, the professionalization of the workforce, and technology, among other things, have worked to drive down the cost of getting a good answer (See chart to the right).

The quest for a less expensive but still good answer accelerated, of course, with the introduction of the World Wide Web in the mid-1990's, driving down the cost of answering even tough questions.  While misinformation, disinformation, and the unspeakable horror that social media has become will continue to lead many people astray, savvy users are better able to find consistently good answers to harder and more obscure questions than ever before.  

If the internet accelerated this historical trend of driving down the cost of getting a good answer, the roll-out of generative AI to the public in late 2022 tied a rocket to its backside and pushed it off a cliff.  Hallucinations and bias aside, the simple truth is that generative AI is, more often than not, able to give pretty good answers to an awful lot of questions, and it is free or cheap to use.  

How good is it?  Check out the chart below (Courtesy Visual Capitalist).  GPT-4, OpenAI's best, publicly available, large language model, blows away most standardized tests.  


It is important to note that this chart was made in April 2023 and represents results from GPT-4.  OpenAI is working on GPT-5, and five months in this field is like a dozen years in any other (Truly.  I have been watching tech evolve for 50 years.  Nothing in my lifetime has ever improved as quickly as generative AIs have).  Eventually, the forces driving these improvements will reach a point of diminishing returns and growth will slow down and maybe even flatline, but that is not the trajectory today.

All this raises a question, though: If answers are getting better, cheaper, and more widely available at an accelerating rate, what's left?  In other words, if no one needs to pay for my answers anymore, what can I offer?  How can I make a living?  Where is the value-added?  This is precisely the sort of thinking that led Goldman Sachs to predict the loss of 300 million jobs worldwide due to AI.  

My take on it is a little different.  I think that as the cost of a good answer goes down, the value of a good question goes up.  
In short, the winners in the coming AI wars are going to be the ones who can ask the best questions at the most opportune times.  

There is evidence, in fact, that this is already becoming the case.  Go to Google and look for jobs for "prompt engineers."  This term barely existed a year ago.  Today, it is one of the hottest growing fields in AI.  Prompts are just a fancy name for the questions that we ask of generative AI, and a prompt engineer is someone who knows the right questions to ask to get the best possible answers.  There is even a marketplace for these "good questions" called Promptbase where you can, for a small fee, buy a customizable prompt from someone who has already done the hard work of optimizing the question for you.

Today, becoming a prompt engineer is a combination of on-the-job training and art.  There are some approaches, some magical combinations of words, phrases, and techniques, that can be used to get the damn machines to do what you want.  Beyond that, though, much of what works seems to have been discovered by power users who are just messing around with the various generative AIs available for public use.
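To make the idea of a "customizable prompt" concrete, here is a minimal sketch of the kind of reusable template a prompt engineer might build and sell.  The role, task, and constraint wording below are invented for illustration; they are not a known-good recipe from Promptbase or anywhere else.

```python
# A toy, reusable prompt template.  The placeholders get filled in by the
# buyer; the fixed scaffolding (role, constraints) is the "engineering."
PROMPT_TEMPLATE = (
    "You are an experienced {role}.\n"
    "Task: {task}\n"
    "Audience: {audience}\n"
    "Constraints: answer in at most {word_limit} words, state your "
    "assumptions explicitly, and say 'I don't know' rather than guess."
)

def build_prompt(role: str, task: str, audience: str, word_limit: int = 200) -> str:
    """Fill the template's placeholders to produce a ready-to-paste prompt."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, audience=audience, word_limit=word_limit
    )

prompt = build_prompt(
    role="research librarian",
    task="Suggest three primary sources on the history of public libraries.",
    audience="a non-specialist reader",
)
print(prompt)
```

The value is not in the string formatting, of course; it is in knowing which scaffolding (role, audience, constraints) reliably improves the answers.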

None of this is a bad thing, of course.  The list of discoveries that have come about from people just messing around or mashing two things together that have not been messed with/mashed together before is both long and honorable.  At some point, though, we are going to have to do more than that.  At some point, we are going to have to start teaching people how to ask better questions of AI.

The idea that asking the right question is not only smart but essential is an old one:

“A prudent question is one-half of wisdom.” – Francis Bacon
"The art of proposing a question must be held of higher value than solving it.” – Georg Cantor
“If you do not know how to ask the right question, you discover nothing.” – W. Edwards Deming

And we often think that at least one purpose of education, certainly of higher education, is to teach students how to think critically; how, in essence, to ask better questions.  

But is that really true?  Virtually our whole education system is structured around evaluating the quality of student answers.  We may think that we educate children and adults to ask probing, insightful questions but we grade, promote, and celebrate students for the number of answers they get right.  

What would a test based not on the quality of the answers given but on the quality of the questions asked even look like?  What criteria would you use to evaluate a question?  How would you create a question rubric?  

Let me give you an example.  Imagine you have told a group of students that they are going to pretend that they are about to go into a job interview.  They know, as with most interviews, that once the interview is over, they will get asked, "Do you have any questions for us?"  You task the students to come up with interesting questions to ask the interviewer.

Here is what you get from the students:
  1. What are the biggest challenges that I might face in this position?
  2. What are the next steps in the hiring process?
  3. What’s different about working here than anywhere else you’ve ever worked?
What do you think?  Which question is the most interesting?  Which question gets the highest grade?  If you are like the vast majority of the people I have asked, you say #3.  But why?  Sure, you can come up with reasons after the fact (humans are good at that), but where is the research that indicates why an interesting question is...well, interesting?  It doesn't exist (to my knowledge anyway).  We are left, like Justice Stewart and the definition of pornography, with "I know it when I see it."
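To show what a question rubric might even look like, here is a toy sketch in code.  The three criteria and their weights are pure invention on my part; this is exactly the kind of thing that validated, peer-reviewed research does not yet exist to support.

```python
# Hypothetical rubric: weighted criteria for scoring a question.
# Criteria names and weights are illustrative only, not research-backed.
CRITERIA = {
    "open_ended": 0.4,  # invites explanation rather than a yes/no or list
    "specific": 0.3,    # anchored to a concrete person, place, or situation
    "novel": 0.3,       # not answerable with a quick, generic lookup
}

def score_question(open_ended: bool, specific: bool, novel: bool) -> float:
    """Weighted sum over the (hypothetical) criteria, between 0 and 1."""
    marks = {"open_ended": open_ended, "specific": specific, "novel": novel}
    return sum(weight for name, weight in CRITERIA.items() if marks[name])

# The three interview questions from the example, hand-scored against
# the criteria (my judgment calls, for illustration):
scores = {
    "biggest challenges": score_question(True, True, False),
    "next steps in hiring": score_question(False, True, False),
    "different from anywhere else": score_question(True, True, True),
}
print(scores)
```

Under even this crude, made-up rubric, question #3 comes out on top, which matches most people's intuition.  The hard research problem is replacing my hand-waved criteria and weights with ones that actually predict which questions produce insight.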

What about "hard" questions?  Or "insightful" questions?  Knowing the criteria for each of these and teaching those criteria such that students can reliably ask better questions under a variety of circumstances seems like the key to getting the most out of AI.  There is very little research, however, on what these criteria are.  There are some hypotheses to be sure, but statistically significant, peer-reviewed research is thin on the ground.

This represents an opportunity, of course, for intellectual overmatch.  If there is very little real research in this space, then any meaningful contribution is likely to move the discipline forward significantly.  If what you ask in the AI-enabled future really is going to be more important than what you know, then such an investment seems not just prudent, but an absolute no-brainer.