It may seem like generative AI is moving too fast right now for cogent strategic thinking. At the edges of it, that is probably right. Those "up in the high country," as Lloyd Bridges might put it (see clip below), are dealing with incalculably difficult technical and ethical challenges and opportunities as each new version of Claude, ChatGPT, Gemini, Llama, or other foundational large language model tries to outperform yesterday's release.
- Generative AI is neither a savior nor Satan. Most people start out in one of these two camps. The more you play around with generative AIs, the more you realize that both points of view are wrong and that the truth is more nuanced.
- Generative AI is so fast it fools you into thinking it is better than it is. Generative AI is blindingly fast. A study done last year using writing tasks for midlevel professionals found that participants completed the task 40% faster when they used the then-current version of ChatGPT. Once they got past their awe at the speed of the response, however, most of my students said the quality of the output was little better than average. That study found much the same: speed improved by 40%, but the average quality of the writing improved by only 18%.
- Generative AI is better at form than content. Content is what you want to say; form is how you want to say it. Form can be vastly more important than content if the goal is to communicate effectively. You'd probably explain Keynesian economics to middle-schoolers differently than you would to PhD candidates, for example. Generative AI generally excels at repackaging content from one form to another.
- Generative AI works best if you already know your stuff. Generative AI is pretty good, and it is getting better fast. But it does make mistakes. Sometimes it is just plain wrong, and sometimes it makes stuff up. If you already know your discipline, most of these errors are easy to spot and correct. If you don't, then you are swimming at your own risk.
- Good questions are becoming more valuable than good answers. In terms of absolute cost to an individual user, generative AI is pretty cheap, and the cost of a good (or good enough) answer is plummeting as a result. This, in turn, implies that the value of a good question is going up. Figuring out how to ask better questions at scale is one largely unexplored way to get a lot more out of a generative AI investment.
- Yesterday's philosophy is tomorrow's AI safeguard. AI is good at some ethical issues, lousy at others (and is a terrible forecaster). A broad understanding of a couple thousand years of philosophical thinking about right and wrong can actually help you navigate these waters.
- There is a difference between intelligence and wisdom. A growing body of researchers is looking beyond the current fascination with artificial intelligence and toward what some of them are calling "artificial wisdom." The distinction between intelligence and wisdom is a useful one that captures much of the strategic unease with current generative AIs in a single word.