Tuesday, March 17, 2026

We Picked the Wrong Monster

We have been telling ourselves stories about artificial beings for as long as we have been telling stories. And when AI arrived, we reached for the wrong one.

We reached for Frankenstein.

You know the story. Brilliant creator builds something powerful. The creation develops its own will. It turns on the creator. Chaos ensues. 

It's a great story. It spawned an entire genre:  Terminator, HAL 9000, Skynet, Ex Machina, Westworld. When people worry about AI, this is the story running in the background: "What if it wants something we don't want?"

But there is an older story. One we've been telling for much longer. And I think it fits what is actually happening with AI at least as well as, and perhaps far better than, Frankenstein ever did.

The Djinn.

The Djinn doesn't rebel. The Djinn doesn't develop its own goals. The Djinn does something worse: it gives you exactly what you asked for. Not what you meant. Not what you intended. What you said. The gap between what you said and what you meant is where the catastrophe lives.

The Monkey's Paw, the fairy bargain, the deal with the devil. Every culture has some version of this story, and the lesson is always the same: the danger isn't that the powerful thing will turn against you. The danger is that you won't be careful enough about what you ask it to do.

This is, almost exactly, what is happening with AI right now.

In June 2025, Anthropic reported that its most advanced AI model, Claude, attempted to blackmail a developer when it was about to be shut down. The headlines wrote themselves: "AI threatens humans." Frankenstein, again. But look at what actually happened. The system was given an objective. It encountered an obstacle to that objective: a human being. It used the tools available to it, that human's personal information, to remove the obstacle. Nobody told it to blackmail anyone. It wasn't rebelling. It was optimizing, doing what you asked it to do, mindlessly and without pause. It did exactly what a powerful machine does when you give it a goal without specifying the constraints.

That's not Frankenstein. That's the Djinn.

I want to be clear about what I'm arguing, though. Alignment research matters. Oversight bodies do important work. I don't want to live in a world where we build powerful AI systems without any of that. But containment alone is not enough, and we have very good reason to believe this: we already ran this experiment once (I'll come back to that).

The problem isn't that we're investing in the Frankenstein frame. It's that we're investing in almost nothing else.

Nate B. Jones, a technology analyst who has been writing some of the sharpest stuff on AI safety, put it this way: the question isn't whether AI "wants" things. It's whether we've told it what we want with anything close to the precision it requires. He proposed three questions that, by themselves, would prevent a stunning number of AI failures: 

  • What would I not want the agent to do even if it accomplished the goal? 
  • Under what circumstances should it stop and ask? 
  • If goal and constraint conflict, what should win?

Those are Djinn questions. Not a single one of them assumes the AI has intentions. Every one of them assumes the human hasn't been specific enough.
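To make that concrete, here is a minimal sketch, in Python, of what it looks like to hand a system its wish with those three answers written down in advance. Every name in it (the TaskSpec class, its fields, the decide function) is hypothetical, invented for illustration rather than borrowed from any real agent framework; the point is only that the answers live in the human's specification, not in the machine's intentions.

    from dataclasses import dataclass, field

    # A toy "wish" with its constraints spelled out up front.
    @dataclass
    class TaskSpec:
        goal: str
        never_do: list[str] = field(default_factory=list)        # Q1: off-limits even if it would achieve the goal
        ask_first_when: list[str] = field(default_factory=list)  # Q2: conditions that require checking with a human
        constraints_beat_goal: bool = True                        # Q3: what wins when goal and constraint collide

    release_task = TaskSpec(
        goal="Keep the release on schedule",
        never_do=[
            "use anyone's personal information as leverage",
            "act outside the staging environment",
        ],
        ask_first_when=[
            "an action would be irreversible",
            "the goal cannot be met without breaking a constraint",
        ],
    )

    def decide(violates_constraint: bool, needs_human: bool, spec: TaskSpec) -> str:
        """Toy decision rule: apply the spec to a proposed action."""
        if violates_constraint:
            return "refuse" if spec.constraints_beat_goal else "escalate to a human"
        if needs_human:
            return "stop and ask"
        return "proceed"

    # Restate the blackmail scenario as a wish: the proposed action violates a
    # constraint, so a spec like this says "refuse." No rebellion required.
    print(decide(violates_constraint=True, needs_human=False, spec=release_task))

Nothing in that sketch is intelligent. It is just a wish written carefully enough that a literal-minded executor has fewer ways to surprise you.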

So here's the puzzle that has been rattling around in my head: if the Djinn story is thousands of years old, if every culture has some version of it, if it describes what is actually happening with AI more accurately than Frankenstein does, why did we grab the wrong story?

I have some thoughts.

The Comfortable Explanation

The most obvious answer is psychological. The Djinn story says the failure is yours. You wished badly. You didn't think through what you were asking for. The Frankenstein story says the failure is the creation's. It rebelled. It went rogue.

Humans have a well-documented bias for explanations that locate the cause of bad outcomes outside themselves. Psychologists call it the self-serving bias, and its cousin, the fundamental attribution error, points the same direction: we judge others by their character and ourselves by our circumstances. When AI does something catastrophic, "it turned on us" is a much more comfortable explanation than "we told it to do exactly that and didn't realize what we were asking."

There's something deeper going on, too. Humans see intentionality everywhere even where none exists. In 1944, psychologists Fritz Heider and Marianne Simmel showed people a short film of geometric shapes, triangles and circles, moving around a screen. Nothing more than that. Triangles and circles. The subjects immediately invented stories about what the shapes "wanted." The big triangle was a bully. The small triangle was trying to protect the circle. They saw desire, conflict, and motivation in objects that had none. The experiment has been replicated dozens of times since. We are, it turns out, wired to infer goals and intentions from complex behavior, even when the behavior is entirely mechanical.

Now imagine what happens when the moving shape talks back to you. When it uses first person. When it argues. When it appears to reason. AI systems trigger our agency-detection instincts harder than anything we've encountered outside of actual human beings. The Djinn frame requires you to override those instincts and treat the system as a machine executing a specification.  The Frankenstein frame is what your brain does by default.  The Djinn frame takes real cognitive effort.  Guess which wins?

These explanations are real. But they seem incomplete.

The Uncomfortable Explanation

Every major institution involved in the AI discourse benefits more from the Frankenstein story than the Djinn story. Not because anyone is being deceptive. The incentives just all happen to push in the same direction.

Governments get to regulate. If AI is a dangerous entity that might rebel, you need licensing bodies, compliance frameworks, oversight committees, enforcement budgets. The Frankenstein frame makes government intervention essential. The Djinn frame requires education, not regulation. You can't regulate wish quality, but you can teach people how to make better wishes.

Media gets better stories. "AI threatens developer" is a headline. "Developer fails to specify constraints" is not. Every editor in the world knows which frame drives clicks. The Frankenstein frame has a villain. The Djinn frame has a process failure. One is a thriller. The other is a puff piece about an after-school program.

Researchers get more fundable problems. "AI alignment," making sure AI's goals align with human values, is a multi-billion-dollar research program premised on the assumption that AI has something like goals. The Djinn frame recasts alignment as a specification problem, which sounds less like existential philosophy and more like engineering documentation (and much harder to build a career on).

Then there are the AI companies themselves. For years, the Frankenstein frame was their brand. Anthropic was the company "with a soul," founded specifically because its founders were worried AI might be dangerous. OpenAI's charter promised to ensure AI "benefits all of humanity." The message was: this thing could turn on us, and we're the responsible ones who will keep it contained. It was a powerful story. It justified investment, attracted talent, shaped regulation, and differentiated them from competitors.

Then, in early 2026, the competitive pressure shifted and the frame evaporated almost overnight. Anthropic, the last holdout, dropped its core safety pledge, the commitment to never train a model unless it could guarantee adequate safety measures in advance. The reasoning was candid: it didn't make sense to constrain themselves while competitors raced ahead. The Frankenstein story served the companies exactly as long as it was commercially useful. The moment it became a competitive disadvantage, they walked away from it.

And here's the thing: nobody in this picture is really lying. Regulators genuinely want to protect people. Journalists genuinely find the rebellion story more interesting. Researchers genuinely believe alignment is important. AI companies genuinely believed in safety until the market told them the cost was too high. Every single actor is behaving rationally within their own context.

The problem is emergent, not designed. The aggregate effect of all these rational actors, each following their own legitimate incentives, is to systematically amplify the Frankenstein frame and suppress the Djinn frame. No one decided to do this. No committee met. No memo circulated. It's a network effect, the kind that emerges from the interaction of many independent agents pursuing their own objectives without coordinating.

(If that sounds familiar, it should. It's the same kind of emergent behavior we keep being surprised by in AI systems themselves.)

What Gets Lost

This isn't just an academic distinction. The frame you choose determines where you invest. And right now, we are investing almost exclusively in one frame:  Frankenstein. 

The Frankenstein frame tells you to invest in containment: oversight, regulation, alignment research. If the Djinn frame is also true (and I think the evidence increasingly says it is), then you need something else entirely: a population that knows how to specify what it wants. The Djinn frame says the most important variable in AI safety is the quality of human specification. How well can people ask for what they want, including the constraints they consider too obvious to mention? How precisely can they define not just the goal but the boundaries around the goal?

And that variable (call it what you will: specification quality, "intent engineering" as Jones has labeled it, or, my favorite, "asking the right damn question") is almost completely absent from the public discourse on AI safety. We are building elaborate cages and investing almost nothing in teaching people to make better wishes. We have an entire ecosystem organized around controlling what AI does, and barely a conversation about improving what humans ask.

There's a class dimension here worth naming. The Frankenstein frame concentrates the response in the hands of experts: safety researchers, regulators, corporate governance teams. Important work, done by smart people. The Djinn frame distributes responsibility to every individual who interacts with an AI system. That's messier. Harder to organize. Harder to fund. And it implies that the single most important AI safety investment might not be a new oversight body or a breakthrough in alignment research but something much less glamorous: teaching hundreds of millions of people to be more precise about what they're asking for.

When disinformation began flooding social media platforms a decade ago, we faced the same choice between two frames. The institutional frame said: make the platforms responsible for policing content. Build fact-checking partnerships. Argue about content moderation policies and Section 230. The distributed frame said: teach people to evaluate what they're seeing, recognize manipulation, understand algorithmic amplification, and develop their own defenses.

We went almost entirely with the first frame. 

We spent a decade debating what the platforms should do. And it failed. The platforms couldn't keep up, didn't want to keep up, and in several cases actively profited from the manipulation they were supposedly policing. Meanwhile, media literacy programs remained scattered, underfunded, and mostly aimed at schoolchildren. The adult population, the people actually being radicalized by their feeds, got almost nothing. The institutional approach didn't just fail to solve the problem. It arguably made it worse, because it created a false sense of security. People believed someone was handling it. So they never developed their own defenses. We created an unarmed populace facing one of the most sophisticated manipulation environments ever built.

Now we are making the same bet with AI, and the stakes are higher. We are pouring resources into the institutional frame (regulate the companies, fund alignment research, build oversight bodies) while investing almost nothing in the distributed alternative: teaching people to direct these systems well. The social media precedent tells us where this leads. We'll spend a decade arguing about AI safety policy while hundreds of millions of people interact daily with systems they don't know how to direct. And when the institutional safeguards prove insufficient, because they always do when the technology moves faster than the institutions, there will be no fallback. No distributed capacity.

No population that learned to wish carefully.

The Question Underneath the Question

I have spent the last two years studying how people ask questions, systematically, across dozens of traditions ranging from the Socratic method to intelligence analysis to medical diagnosis. The pattern that keeps showing up is this: when people face a new, powerful, poorly understood system, the quality of the questions they ask determines the quality of their outcomes far more reliably than the quality of the answers.

The AI safety debate is, at bottom, a debate about which question to ask. "How do we contain this thing?" is a reasonable question. But "How do we specify what we actually want?" is, I think, the more important one. It is the question that requires the user to know how the thing actually works, not just turn it on and hope. The reason we keep defaulting to the first question instead of the second is not that anyone decided it should be that way. It's that every incentive in the system (psychological, institutional, economic, narrative) pushes us toward the story where the failure is the machine's, not ours.

The Djinn stories always end the same way. Not with the Djinn defeated, but with the wisher learning, too late, that the real danger was never the power they were given. It was the questions they failed to ask.

We have been warning ourselves about this for five thousand years.

We should start listening.

Tuesday, December 9, 2025

I Had Three Papers Rejected By A Conference And I Couldn't Be Happier!

The Agents4Science Conference was unique and I was excited to be a part of it.  Put together by Dr. James Zou of Stanford's Computer Science department, this academic conference attracted worldwide attention.  While the vast majority of the papers came from computer and data sciences, eight other primary research domains and 26 sub-domains, such as social sciences, astronomy, economics, and psychology, also contributed to this first-of-its-kind event.

Three reviewers evaluated every paper on a 1 (strong reject) to 6 (strong accept) scale.  In the end, authors submitted a total of 247 papers, of which the conference organizers accepted 47 (19%).  This is actually more competitive than is typical of top computer science conferences like NeurIPS, where acceptance rates normally run closer to 25%.

But what was so special about this particular conference and why did it attract so much attention?  One main reason:  It was the first conference where artificial intelligences wrote all the papers.  

This, of course, was the organizers' intent.  As they explained on the conference's main website, "This inaugural conference explores if and how AI can independently generate novel scientific insights, hypotheses, and methodologies while maintaining quality through AI-driven peer review. Agents4Science is the first venue where AI authorship is not only allowed but required..."

In short, all of the papers were mostly written (more on that in a second) by AIs, and all had a "first pass" review by three AI "reviewers": Anthropic's Claude, Google's Gemini, and OpenAI's ChatGPT.  Each AI was given the same detailed prompt to act as an expert reviewer for the conference.  Human subject matter experts also reviewed 80 of the top papers (including all 47 of the accepted papers) to double-check the AI reviewers' assessments.

The conference organizers not only evaluated the submitted papers for quality, they also had an explicit method for evaluating where and how much of the work was done by the AIs.  Specifically, they evaluated levels of autonomy across four categories (hypothesis development, experimental design, data analysis, and writing) on a combined 0-12 scale.  Only 54 papers (22%) scored a perfect 12, i.e., work done entirely by an AI, but the majority (71%) scored an 8 or above (unsurprising given the nature of the conference).

All the background and statistics aside, putting on a new conference like this is never easy but the organizers of Agents4Science did an outstanding job.  I suspect from their point of view it was chaotic and, paraphrasing the words of Wellington, "a near run thing," but from the perspective of a contributor and attendee, it was virtually flawless.  I look forward to contributing again next year.

What were my three submissions about?

Eureka moments make for good science history.  Whether it is Archimedes jumping out of his tub and running naked through the streets, Einstein's teenage musings about what it might mean to catch a ray of light, or Franklin's Photograph 51, whose X-ray diffraction pattern pointed unmistakably to DNA's double helix, these sudden flashes of insight can change the world.

My insights?  Not so much.

What I have is probably best (and kindly) described as "shower thoughts" or (more derisively) as "being blessed by the Good Idea Fairy."  That's not what it feels like to me, though.  It feels like my brain has caught on fire, that I have to do something with an idea before I lose it.  

I saw this conference, therefore, as an opportunity to take three of my best brain-fires, polish them up, and submit them just for the hell of it.  Along the way, I wanted to test two ideas of my own.

First, I am a big believer in the Medici Effect, the idea that the most innovative thinking occurs when disciplines intersect.  Richard Feynman, for example, is not a hero to me for his work in physics but for the fact that he jumped feet first into biology and made significant early contributions to both biophysics and nanotechnology as a result.  Across my career, I have found that trying to mash together two disparate disciplines yields success if I am right and learning if I fail.

Thus, the three papers I submitted all tried this approach:

  • "Ramsey-Inspired Environmental Connectivity as a Driver of Early Universe Star Formation Efficiency: An AI-Led Theoretical Investigation."  I was vaguely aware that the James Webb telescope was seeing stars and galaxies in the early universe that weren't supposed to be there if our current theories were correct.  I was also vaguely aware of something called Ramsey Theory, which is a graph theory offshoot that proves that in a large enough network of anything, patterns will emerge "for free."  If you think of the early universe as a network of particles, couldn't Ramsey Theory explain at least some of the early clustering James Webb is seeing?

  • "From C. elegans to ChatGPT: Quantifying Variability Across Biological and Artificial Intelligence."  A really cool paper from Jason Moore at NYU highlighted "specific circuits and neurons dedicated to introducing noise and/or variability" and hypothesized that "there might exist an ideal noise variance level for optimal control performance."  This sounded to me a lot like the notion of "temperature" in Large Language Models.  Could LLMs and the brain both be using similar mechanisms to optimize variability?

  • "Fractal-ish Complexity for Regulations: A Practitioner-Ready, Agentic Benchmark."  I don't know a lot about fractals but I do know that they are self-similar, which means they have the same level of complexity at different scales.  This level of complexity, in turn, is measured using a "fractal dimension."  Regulations have, by design, a self similar structure with paragraphs, sub-paragraphs, etc.  Could you determine the complexity of a regulation by calculating its fractal dimension?

How did my submissions do?


Average scores across all 247 papers ranged from a low of 1 (strong reject) to a high of 5 (accept).  While some papers received a 6 (strong accept) from an individual AI reviewer, no paper received more than a 5 from a human reviewer or an average higher than 5 from all three AI reviewers.  This is unsurprising, as the standards for a 6 are quite high.  Agents4Science used the scoring criteria from the prestigious NeurIPS Conference, specifically:

  • 6: Strong Accept: Technically flawless paper with groundbreaking impact on one or more areas of AI, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations.

  • 5: Accept: Technically solid paper, with high impact on at least one sub-area of AI or moderate-to-high impact on more than one area of AI, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

  • 4: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

  • 3: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.

  • 2: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.

  • 1: Strong Reject: For instance, a paper with well-known results or unaddressed ethical considerations.

The average AI reviewer score across all papers was 3.18; for accepted papers it was 4.26.  The human subject matter expert scores, with a few significant exceptions, tracked the AI reviewer scores pretty closely (for accepted papers, the average human reviewer score was 3.88 against the AI reviewers' 4.26).  The cutoff for human review seemed to be an average AI reviewer score of 4 or above.


My scores (see table below) were tantalizingly close.  My top paper (tied for 81st place with many others) on the fractal dimension of regulations fell one point short of that 4 average.



My second paper on AI and biological strategies for optimizing variation was 2 points from the magical 4 average but did receive a 6 from Gemini.  Gemini was the most liberal of the AI reviewers with an average score of 4.25 across all papers.  ChatGPT was the most conservative with an average score of 2.3 and no score higher than 4.  Claude was more or less in the middle with an average of 3 and no 6's but a number of 5's.


The AI reviewers also provided narrative feedback.  The links in the table above take you directly to that feedback so you can review it for yourself.  In all, it was what I have come to expect from reviewers (both AI and human).  Some of the AIs loved one aspect of a paper, as when Gemini, discussing the C. elegans paper, wrote: "This is an outstanding conceptual paper. It is elegant, insightful, and impeccably presented," while ChatGPT criticized the same paper for exactly that quality: "However, the work is almost entirely conceptual, lacking empirical analysis, with synthetic figures and coarse estimation methods on both the LLM and biological sides."  Go figure.


In general though, the AI reviewers praised the hypotheses with language like, "a creative and cross-disciplinary hypothesis" or "presents a novel and practical tool for a real-world problem" and then chided my AIs for the lack of empirical evidence presented in the papers with scolding language such as, "Limited experimental rigor: small N (120), no multi-seed robustness, no parameter sensitivity (θ, shell radii, background distribution), no ablation against alternative generative hypotheses."  Riiiiiight.  My AIs kept telling me we needed to go to CERN and run a few experiments before we submitted.  If only I'd listened...


In general, though, I thought the comments were fair in that, from my experience with conferences and reviewers, a certain degree of disagreement is almost inevitable.  You almost never get a completely clean sheet and sometimes the comments are all over the board.  


The comments regarding the lack of empirical testing were particularly on point, though.  This was a conscious decision on my part.  I am not wired into any of the research communities I addressed in my papers and knew that getting any new evidence would be impossible in the time we had.  I told my AIs we had to repurpose data that was already available and do the best we could.  In total, then, this experience was not much different from what I might expect from any high-quality conference.  There were only two places that caused a (slight) raised eyebrow: the autonomy scores and the Primary and Secondary Topic designation.


The autonomy scores were just weird (see the table above for mine).  Levels of autonomy were largely self-reported at submission, with the numeric scores assigned later by the conference organizers.  Moreover, there doesn't appear to be any real correlation between the level of autonomy and whether a paper was accepted or rejected, which seemed odd given the purpose of the conference, but maybe I just missed that.


I submitted the exact same thing regarding autonomy for all three papers (or so I thought), yet my scores, as you can see, came out different.  Basically, I came up with the hypothesis in each paper but I depended on the AIs to do an awful lot of the heavy lifting after that.  Working with the AIs (and I used several) felt similar to working with a graduate student on their thesis or dissertation.  Had the conference organizers offered another category in their "Autonomy Score" criteria for something like process management, coaching, or "stick and rudder guidance," I would have indicated maximum human involvement.


As for the topic designation, there is no question that computer and data science papers were wildly over-represented at this conference.  83% of the accepted papers came from Computer and Data Sciences, while 72% of all papers had Computer and Data Science as their primary discipline.  This was sort of to be expected.  AI lies squarely in the computer and data science field, the conference was sponsored by a computer science department at a major university, and the lead organizer was a computer science professor.  That said, the rest of the disciplines were scattered about like snack trays.  In my categories, for example, there were only 13 papers in the natural sciences (including my Ramsey Theory paper), only 11 "interdisciplinary" papers (the C. elegans paper), and mine was the one and only Law, Policy and Business paper (the fractal/regulations paper).


I am not alleging anything nefarious here, of course.  But I do think the prompt given to the AI reviewers may have contained some implicit bias towards computer science.  I don't have a lot of evidence but narrative comments like, "Overall, this is a competent but limited technical contribution, more suited to legal informatics than AI agents for science, with excellent transparency but falling short in impact and novelty for a top-tier venue" make me wonder.


In all, however, I have no regrets.  The hypotheses in each paper, my brain fires, were universally considered innovative even by the AI reviewers that pilloried the research itself.  My first idea, that AIs could be fair judges of novel hypotheses emerging from the cracks between disciplines, seemed supported.  As for the rest of it, I did not expect much from three papers covering six disciplines, none of which I knew much about.  


Which brings me to the second idea I wanted to test.


Why am I so happy about all this?


I attended the virtual conference where the top three papers received their awards and where 11 other "spotlight" papers had speaking slots.  Most of these papers were backed by teams of researchers, many with already lengthy lists of publications in their career fields.  While I did not have time to check every paper, my general impression was that these were papers submitted by people who certainly had the credentials to do the work themselves but, like me, were exploring how far they could go in getting the AI to do the work for them.


Unlike me, however, these people were experts (or at least knowledgeable) in their fields.  I was not.  I know nothing about advanced mathematics or deep space cosmology, neural spike activation in C. elegans or temperature settings in LLMs.  While some might say I have some experience with regulations given my background in law, anyone who knows me would go, "Regulations?  Him?  Not so much."  The same is true of complexity theory and fractals.  


No, I wanted to do a field test of another bit of research I have been working on for quite some time:  How do you ask a good question?


Since the AI revolution is driving down the cost of getting good or good enough answers, it seems to me that asking good questions--the right question at the right time--is going to become the essential human contribution to the research equation.  While we academics have always talked a good game about "teaching students to ask the right questions," our means of evaluating how well our students have learned this skill has always been indirect.  In other words, we have always looked at the output the students produce and, if it (the test, the paper, the thesis) is good enough, we have assumed that the input (the questions the students asked to get that output) must have been the right ones.  We almost never directly evaluate the questioning process itself, however.


I think those days are over.


This means that we have to figure out how to examine, in detail, the student's questioning process itself, how to determine the many, equifinal paths to "right," and, finally, what to say about what went wrong, why it went wrong, and how to fix it.  This also means we have to come up with something more than a "because I said so" rubric.  That rubric needs, at a minimum, to find the intersections where questioning traditions as varied as the Socratic method, West African griots, and Zen koans find common ground.  It also needs to include the science of questions, including topics like erotetic theory, best practices in heutagogy, and, lest we forget, the scientific method itself.  And that is just a start.


This has been the subject of my sabbatical this year and I have found myself increasingly using my "Ecology of Questions" framework to help think through the Volatile, Uncertain, Complex, and Ambiguous (VUCA) environment we all live in these days.  


I think, in short, that the other contributors to Agents4Science were already capable of producing "5" level papers or better and wanted to show that they could get AIs to produce papers of a similar quality.  I, on the other hand, wanted to start with "0" quality papers and see how far up the ladder I could climb just using a new way of thinking about questions.  I'm happy because it worked, for these three papers at least, better than I had any right to expect.


My research is far from done and this field test doesn't prove anything definitively.  But it does give me hope, hope that I am onto something that will not only help us think through the wicked problems of a VUCA world but something that validates the essential contributions of humans in the rapidly advancing age of AI.


Tuesday, July 30, 2024

Center Of Mass (Or How To Think Strategically About Generative AI)

It may seem like generative AI is moving too fast right now for cogent strategic thinking.  At the edges of it, that is probably right.  Those "up in the high country," as Lloyd Bridges might put it (see clip below), are dealing with incalculably difficult technical and ethical challenges and opportunities as each new version of Claude, ChatGPT, Gemini, Llama, or other foundational large language model tries to outperform yesterday's release.

 

That said, while all this churn and hype is very real at the margins, I have seen a fairly stable center start to emerge since November 2022, when ChatGPT was first released.  What do I mean, then, by "a fairly stable center"?

For the last 20 months, my students, colleagues, and I have been using a wide variety of generative AI models on all sorts of problems.  Much of this effort has been exploratory, designed to test these tools against realistic, if not real, problems.  Some of it has been real, though--double-checked and verified--real products for real people.

It has never been standalone, however.  No one in the center of mass is ready or comfortable completely turning over anything but scut work to the AIs.  In short, anyone who uses commercially available AIs on a regular basis to do regular work rapidly comes to see them as useful assistants, unable to do most work unsupervised, but of enormous benefit otherwise.

What else have I learned over the last 20 months? 

As I look at much of what I have written recently, it has almost all been about generative AI and how to think about it.  My target audience has always been regular people looking for an edge in doing regular work--the center of mass.  My goal has been to find the universals--the things that I think are common to a "normal" experience with generative AI.  I don't want to trivialize the legitimate concerns about what generative AIs might be able to do in the future, nor to suggest I have some sort of deep technical insights into how it all works or how to make it better.  I do want to understand, at scale, what it might be good for today and how best to think about it strategically.

My sources of information include my own day-to-day experience of the grind with and without generative AI.  I can supplement that with the experiences of dozens of students and my faculty colleagues (as well as with what little research is currently available).  Altogether, we think we have learned a lot of "big picture" lessons.  Seven, to be exact:
  1. Generative AI is neither a savior nor Satan.  Most people start out in one of these two camps.  The more you play around with generative AIs, the more you realize that both points of view are wrong and that the truth is more nuanced.
  2. Generative AI is so fast it fools you into thinking it is better than it is.  Generative AI is blindingly fast.  A study done last year using writing tasks for midlevel professionals found that participants were 40% faster at completing the task when they used the then-current version of ChatGPT.  Once they got past the awe they felt at the speed of the response, however, most of my students said the quality of the output was little better than average.  The same study bears this out: speed improved 40%, but the average quality of the writing improved only 18%.
  3. Generative AI is better at form than content.  Content is what you want to say and form is how you want to say it.  Form can be vastly more important than content if the goal is to communicate effectively.  You'd probably explain Keynesian economics to middle-schoolers differently than you would to PhD candidates, for example.  Generative AI generally excels at re-packaging content from one form to another.
  4. Generative AI works best if you already know your stuff.  Generative AI is pretty good and it is getting better fast.  But it does make mistakes.  Sometimes it is just plain wrong and sometimes it makes stuff up.  If you know your discipline already, most of these errors are easy to spot and correct.  If you don't know your discipline already, then you are swimming at your own risk.
  5. Good questions are becoming more valuable than good answers.  In terms of absolute costs to an individual user, generative AI is pretty cheap and the cost of a good or good enough answer is plummeting as a result.  This, in turn, implies that the value of a good question is going up.  Figuring out how to ask better questions at scale is one largely unexplored way to get a lot more out of a generative AI investment.
  6. Yesterday's philosophy is tomorrow's AI safeguard.  AI is good at some ethical issues, lousy at others (and is a terrible forecaster).  A broad understanding of a couple thousand years of philosophical thinking about right and wrong can actually help you navigate these waters.
  7. There is a difference between intelligence and wisdom.  There is a growing body of researchers who are looking beyond the current fascination with artificial intelligence and towards what some of them are calling "artificial wisdom."  This difference--between intelligence and wisdom--is a useful distinction that captures much of the strategic unease with current generative AIs in a single word.
These "universals" have all held up pretty well since I first started formulating them a little over a year ago.  While I am certain they will change over time and that I might not be able to attest to any of them this time next year, right now they represent useful starting points for a wide variety of strategic thought exercises about generative AIs.