Tuesday, July 30, 2024

Center Of Mass (Or How To Think Strategically About Generative AI)

It may seem like generative AI is moving too fast right now for cogent strategic thinking.  At the edges of it, that is probably right.  Those "up in the high country," as Lloyd Bridges might put it (see clip below), are dealing with incalculably difficult technical and ethical challenges and opportunities as each new version of Claude, ChatGPT, Gemini, Llama, or other foundational large language model tries to outperform yesterday's release.


That said, while all this churn and hype is very real at the margins, I have seen a fairly stable center start to emerge since ChatGPT was first released in November 2022.  What do I mean, then, by "a fairly stable center"?

For the last 20 months, my students, colleagues, and I have been using a wide variety of generative AI models on all sorts of problems.  Much of this effort has been exploratory, designed to test these tools against realistic, if not real, problems.  Some of it, though, has been real--double-checked and verified products for real people.

It has never been standalone, however.  No one in the center of mass is ready or comfortable completely turning over anything but scut work to the AIs.  In short, anyone who uses commercially available AIs on a regular basis to do regular work rapidly comes to see them as useful assistants--unable to do most work unsupervised, but of enormous benefit otherwise.

What else have I learned over the last 20 months? 

As I look at much of what I have written recently, it has almost all been about generative AI and how to think about it.  My target audience has always been regular people looking for an edge in doing regular work--the center of mass.  My goal has been to find the universals--the things that I think are common to a "normal" experience with generative AI.  I don't want to trivialize the legitimate concerns about what generative AIs might be able to do in the future, nor to suggest I have some sort of deep technical insights into how it all works or how to make it better.  I do want to understand, at scale, what it might be good for today and how best to think about it strategically.

My sources of information include my own day-to-day experience of the grind with and without generative AI.  I can supplement that with the experiences of dozens of students and my faculty colleagues (as well as with what little research is currently available).  Altogether, we think we have learned a lot of "big picture" lessons.  Seven, to be exact:
  1. Generative AI is neither a savior nor Satan.  Most people start out in one of these two camps.  The more you play around with generative AIs, the more you realize that both points of view are wrong and that the truth is more nuanced.
  2. Generative AI is so fast it fools you into thinking it is better than it is.  Generative AI is blindingly fast.  A study done last year using writing tasks for midlevel professionals found that participants were 40% faster at completing the task when they used the then-current version of ChatGPT.  Once they got past the awe they felt at the speed of the response, however, most of my students said the quality of the output was little better than average.  The same study found much the same: speed improved by 40%, but the average quality of the writing improved by only 18%.
  3. Generative AI is better at form than content.  Content is what you want to say and form is how you want to say it.  Form can be vastly more important than content if the goal is to communicate effectively.  You'd probably explain Keynesian economics to middle-schoolers differently than you would to PhD candidates, for example.  Generative AI generally excels at re-packaging content from one form to another (see the sketch after this list).
  4. Generative AI works best if you already know your stuff.  Generative AI is pretty good and it is getting better fast.  But it does make mistakes.  Sometimes it is just plain wrong and sometimes it makes stuff up.  If you know your discipline already, most of these errors are easy to spot and correct.  If you don't know your discipline already, then you are swimming at your own risk.
  5. Good questions are becoming more valuable than good answers.  In terms of absolute costs to an individual user, generative AI is pretty cheap, and the cost of a good or good-enough answer is plummeting as a result.  This, in turn, implies that the value of a good question is going up.  Figuring out how to ask better questions at scale is one largely unexplored way to get a lot more out of a generative AI investment.
  6. Yesterday's philosophy is tomorrow's AI safeguard.  AI is good at some ethical issues, lousy at others (and is a terrible forecaster).  A broad understanding of a couple thousand years of philosophical thinking about right and wrong can actually help you navigate these waters.
  7. There is a difference between intelligence and wisdom.  There is a growing body of researchers who are looking beyond the current fascination with artificial intelligence and towards what some of them are calling "artificial wisdom."  This difference--between intelligence and wisdom--is a useful distinction that captures much of the strategic unease with current generative AIs in a single word.
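
To make the third point concrete, here is a minimal sketch of what "re-packaging" can look like in practice.  It assumes the OpenAI Python client (openai>=1.0); the model name, content sentence, and audience instructions are illustrative assumptions, not recommendations.

```python
# Minimal sketch: same content, two different forms for two audiences.
# Assumes the OpenAI Python client (openai>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTENT = (
    "Keynesian economics argues that during downturns, total demand falls short, "
    "so government spending can help restore output and employment."
)

AUDIENCES = {
    "middle-schoolers": "Explain it with a simple everyday analogy in three short sentences.",
    "PhD candidates": "Summarize it in precise technical language and note one standard critique.",
}

for audience, form in AUDIENCES.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model works
        messages=[
            {"role": "system", "content": f"You are rewriting content for {audience}. {form}"},
            {"role": "user", "content": CONTENT},
        ],
    )
    print(f"--- {audience} ---")
    print(response.choices[0].message.content)
```

The point of the sketch is that the content never changes; only the instructions about form do.  That is exactly the kind of task the center of mass can hand off and then quickly verify.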
These "universals" have all held up pretty well since I first started formulating them a little over a year ago.  While I am certain they will change over time and that I might not be able to attest to any of them this time next year, right now they represent useful starting points for a wide variety of strategic thought exercises about generative AIs.
