Monday, October 30, 2023

The Catch-22 Of Generative AI

[Image: A true 3D chart done in the style of Leonardo Da Vinci (Courtesy MidJourney)]
I have always wanted to be able to easily build true 3D charts.  Not one of those imitations that just insert a drop shadow behind a 2D column and call it "3D," mind you.  I am talking about a true 3D chart with X, Y, and Z axes.  While I am certain that there are proprietary software packages that do this kind of thing for you, I'm cheap, and the free software is clunky or buggy, and I don't have time for either.

I was excited, then, when I recently watched a video that claimed that ChatGPT could write Python scripts for Blender, the popular open source animation and 3D rendering tool.  I barely know how to use Blender and do not code in Python at all, but am always happy to experiment with ChatGPT.

Armed with very little knowledge and a lot of hope, I opened up ChatGPT and asked it to provide a Python script for Blender that would generate a 3D chart with different colored dots at various points in the 3D space.  I hit enter and was immediately rewarded with what looked like 50 or so lines of code doing precisely what I asked!
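
To give a sense of what I was after, the sketch below is roughly the kind of script I was hoping ChatGPT would produce.  It uses Blender's Python API (bpy) to scatter a few colored spheres at points in 3D space; the coordinates, colors, and material names are invented here purely for illustration, and this is not the code ChatGPT actually gave me.

```python
# A rough sketch of the kind of script I wanted: scatter a few colored
# spheres at points in 3D space using Blender's Python API.  Run it from
# Blender's Scripting workspace.  The data points are made up for illustration.
import bpy

# (x, y, z, (r, g, b, a)) -- one entry per dot on the "chart"
points = [
    (1.0, 2.0, 0.5, (1.0, 0.0, 0.0, 1.0)),  # red
    (2.5, 0.5, 1.5, (0.0, 1.0, 0.0, 1.0)),  # green
    (0.5, 1.5, 2.5, (0.0, 0.0, 1.0, 1.0)),  # blue
]

for i, (x, y, z, color) in enumerate(points):
    # Add a small sphere at the data point
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.1, location=(x, y, z))
    obj = bpy.context.active_object

    # Give each sphere its own colored material
    mat = bpy.data.materials.new(name=f"dot_{i}")
    mat.diffuse_color = color
    obj.data.materials.append(mat)
```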

I cut and pasted the code into Blender, hit run, and...I got an error message.  So, I copied the error message and pasted it into ChatGPT and asked it to fix the code.  The machine apologized(!) to me for making the mistake and produced new code that it claimed would fix the issue.  

It didn't.

I tried again and again.  Six times I went back to ChatGPT, each time with slightly different error messages from Blender.  Each time, after the "correction," the program failed to run and I received a new error message in return.

Now, I said I didn't know how to code in Python, but that doesn't mean I can't code.  Looking over the error messages, it was obvious to me that the problem was almost certainly something simple, something any Python coder would be able to figure out, correct, and implement.  Such a coder would have saved a vast amount of time as, even when you know what you are doing, 50 lines of code takes a good bit of time to fat-finger.  

In other words, for generative AI to be helpful to me, I would need to know Python, but the reason I went to a generative AI in the first place was because I didn't know Python!  

And therein lies the Catch-22 of generative AI.  

I have seen this same effect in a variety of other situations.  I asked another large language model, Anthropic's Claude, to write a draft of a safety SOP.  It generated a draft very quickly and with surprising accuracy.  There were, however, a number of things that needed to be fixed.  Having written my fair share of safety SOPs back in the day, I was able to quickly make the adjustments.  It saved me a ton of time.  Without understanding what a good safety SOP looked like to begin with, however, the safety SOP created by generative AI risked being, well, unsafe.

At one level, this sounds a lot like some of my previous findings on generative AI, such as "Generative AI is a mind-numbingly fast but incredibly average staff officer" or "Generative AI is better at form than content."  And it is.

At another level, however, it speaks to the need for an education system that keeps up with advancements in generative AI while simultaneously maintaining pre-generative AI standards.  The only way, at least for now, to use generative AI safely will be to know more than the AI about the AI's outputs--to know enough to spot the errors.  The only way, in turn, to know more than generative AI is to learn it the old-fashioned way--grind through the material on your own until you are comfortable that you understand it.  Ironically, AI may be able to speed up the grind, but the learning is still on you.

At another, deeper, level, it is more disturbing.  I worry that people will ask generative AI about things that they think they know but don't.  Blender acted as a check on both my ignorance and the AI's errors in the first example.  My own experience with safety SOPs acted as a check on the AI in the second example.  What about areas such as political science, security studies, and military strategy, where subjectivity reigns?  What if there aren't any checks on the answers generative AI produces?  Dumb questions will lead to incorrect answers, which will lead to dumber questions and more incorrect answers--a sort of AI-powered Dunning-Kruger death spiral.

This mirrors, of course, one of the many concerns of AI experts.  I also know that there are many good people working hard to ensure that these kinds of scenarios rarely, if ever, play themselves out.  That said, I am reminded of an old Mark Twain saying that was a near-perfect forecast of the problems with social media:  “A lie can travel halfway around the world while the truth is putting on its shoes.”  Perhaps that should be updated for the modern age:  "An AI-energized chain reaction of stupid can destroy the world while the prudent are still slipping on their Crocs."

Not as catchy, I suppose, but equally prescient?
