Tuesday, December 12, 2023

Forget Artificial Intelligence. What We Need Is Artificial Wisdom

I have been thinking a lot about what it means to be "wise" in the 21st century.

Wisdom, for many people, is something that you accrue over a lifetime.  "Wisdom is the daughter of experience," insisted Leonardo da Vinci.  Moreover, the sense that experience and wisdom are linked seems universal.  There's an African proverb, for example, of which I am particularly fond, that claims, "When an old person dies, a library burns to the ground."

Not all old people are wise, of course.  Experience sometimes erodes a person, like the steady drip-drip of water on a stone, such that, in the end, there is nothing but a damn fool left.  We have long had sayings about that as well.

Experience, then, probably isn't the only way to become wise and may not even be a necessary precondition for wisdom.  How, then, to define it?

One thing I do know is that people still want wisdom, at least in their leaders.  I know this because I asked my contacts on LinkedIn about it.  A hundred responses later, virtually everyone said they would rather have a wise leader than an intelligent one.

These results suggest something else as well:  that people know wisdom when they see it.  In other words, the understanding of what wisdom is or isn't is not something that is taught but rather something that is learned implicitly, by watching and evaluating our own actions and those of others.

Nowhere is this more obvious than in the non-technical critiques of artificial intelligence (AI).  The authors of these critiques seem nervous, even frightened, about the elements of humanity that are missing from the flawed but powerful versions of AI recently released upon the world.  The AIs, in their view, lack moral maturity, reflective strategic decision-making, and an ability to manage uncertainty, and no one, least of all these authors, wants AIs without those attributes making decisions that might change, well, everything.  This angst seems to be shorthand for a simpler concept, however:  We want these AIs to be not just intelligent but wise.

For me, then, a good bit of the conversation about AI safety, AI alignment, and "effective altruism" comes down to how to define wisdom.  I'm not a good enough philosopher (or theologian) to have the answer to this but I do have some hypotheses.

First, when I try to visualize a very intelligent person who has only average wisdom, I imagine a person who knows a large number of things.  Their knowledge is encyclopedic, but their ability to pull things together is limited.  They lack common sense.  In contrast, when I try to imagine someone who is very wise but of just average intelligence, I imagine someone who knows considerably less but can see the connections between things better and, as a result, can envision second- and third-order consequences.  The image below visualizes how I see this difference:

This visualization, in turn, suggests where we might find the tools to better define artificial wisdom:  in network research, graph theory, and computational social science.
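To make that intuition concrete, here is a minimal Python sketch of how simple graph reachability could capture the difference.  Both toy "minds," the facts in them, and the numbers are invented for illustration; this is not a claim about how real knowledge is structured.  The "intelligent" mind holds many facts in a thin chain; the "wise" mind holds fewer facts but links them densely, so more second- and third-order consequences come into view from any starting point.

```python
from collections import deque

def reachable_within(graph, start, max_hops):
    """Count facts reachable from `start` in at most `max_hops` links (plain BFS)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return len(seen) - 1  # exclude the starting fact itself

# Hypothetical "intelligent" mind: many facts, linked in a thin chain.
encyclopedic = {f"fact{i}": [f"fact{i + 1}"] for i in range(50)}

# Hypothetical "wise" mind: fewer facts, densely cross-linked.
connected = {
    "drought": ["crop failure", "migration"],
    "crop failure": ["famine", "price spike"],
    "price spike": ["unrest"],
    "migration": ["unrest", "labor shortage"],
    "famine": ["unrest"],
}

print(reachable_within(encyclopedic, "fact0", 3))  # 3: a short stretch of chain
print(reachable_within(connected, "drought", 3))   # 6: nearly the whole web
```

Under this toy measure, "wisdom" shows up not as how much is known but as how much of the web of consequences is visible from wherever you happen to be standing.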

I also think there are some hints lurking in biology, psychology, and neuroscience, specifically in the study of cognitive biases.  Over the last 30 years or so, cognitive biases have come to be seen in many disciplines as "bad things": predictable human failures in logical reasoning.  Recently, though, some of the literature has started to question this interpretation.  If cognitive biases are so bad, if they keep us from making rational decisions, then why aren't we all dead?  Why haven't evolutionary pressures weeded out the illogical?

If you accept the premise that cognitive biases evolved in humans because they were useful (even if only on the savannahs of East Africa), then it raises the question, "What did they help us do?"

My favorite attempt at answering this question is the Cognitive Bias Codex (see image below).

By Jm3 - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=51528798

Here the authors grouped all of the known cognitive biases into four major categories, sorted by what they helped us do:

  • What we should remember
  • What to do when we have too much information
  • What to do when there is not enough meaning
  • What to do when we need to act fast

Interestingly, each of these is now an active area of research in the AI community (for examples, see "Intentional Forgetting in Artificial Intelligence Systems: Perspectives and Challenges" and "Intentional Forgetting in Distributed Artificial Intelligence").
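As a heavily simplified illustration of what "intentional forgetting" might look like in code, here is a toy Python sketch of a memory store that forgets on purpose.  The class name, the relevance score (use count discounted by recency), and all of the parameters are invented for this post; the cited papers describe far more sophisticated approaches.

```python
import time

class ForgetfulMemory:
    """Toy memory store that forgets on purpose.  The relevance score
    (use count, discounted by recency) is invented for this illustration."""

    def __init__(self, capacity=100, half_life=3600.0):
        self.capacity = capacity
        self.half_life = half_life   # seconds until an idle item's score halves
        self.items = {}              # key -> (value, last_used, use_count)

    def store(self, key, value):
        self.items[key] = (value, time.time(), 1)
        if len(self.items) > self.capacity:
            self._forget()

    def recall(self, key):
        value, _, count = self.items[key]   # KeyError if already forgotten
        self.items[key] = (value, time.time(), count + 1)
        return value

    def _score(self, entry):
        _, last_used, count = entry
        idle = time.time() - last_used
        return count * 0.5 ** (idle / self.half_life)

    def _forget(self):
        # Deliberately drop the weakest tenth rather than letting clutter linger.
        ranked = sorted(self.items, key=lambda k: self._score(self.items[k]))
        for key in ranked[: max(1, len(ranked) // 10)]:
            del self.items[key]
```

The point of the sketch is that forgetting here is a design decision, an answer to "what should we remember," rather than a failure mode.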

Even the need to act fast, which seems like something at which AI excels, becomes more about wisdom than intelligence when decomposed.  Consider some of the Codex's subcategories within the need to act fast:

  • We favor simple-looking options and complete information over complex, ambiguous options.
  • To avoid mistakes, we aim to preserve autonomy and group status, and avoid irreversible decisions.
  • To get things done, we tend to complete things we've invested time and energy in.
  • To stay focused, we favor the immediate, relatable thing in front of us.
  • To act, we must be confident we can make an impact and feel what we do is important.

All of these seem to have more to do with wisdom than with intelligence.  Furthermore, true wisdom would be most evident in knowing when to apply these rules of thumb and when to engage more deliberative System 2 skills, a trade-off sketched in the toy example below.
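Here is a minimal Python sketch of that trade-off: a meta-decision layer that chooses between a fast, satisficing heuristic and slower, exhaustive deliberation.  The thresholds, option format, and scoring are hypothetical stand-ins for illustration, not a real decision architecture.

```python
def fast_heuristic(options):
    """System 1: satisfice, i.e., take the first option that looks good enough."""
    return next(o for o in options if o["estimated_value"] >= 0.5)

def deliberate(options):
    """System 2: slow and exhaustive; weigh every option's value against its risk."""
    return max(options, key=lambda o: o["estimated_value"] - o["risk"])

def decide(options, time_pressure, stakes, reversible):
    """Meta-decision: choose which mode of thinking to apply.
    All thresholds here are invented for illustration."""
    if not reversible or stakes > 0.7:
        return deliberate(options)       # irreversible or high-stakes: think hard
    if time_pressure > 0.8:
        return fast_heuristic(options)   # cheap and fast is usually fine here
    return deliberate(options)           # time to spare: default to deliberation

options = [
    {"name": "patch now",    "estimated_value": 0.6, "risk": 0.4},
    {"name": "full rewrite", "estimated_value": 0.9, "risk": 0.7},
    {"name": "do nothing",   "estimated_value": 0.5, "risk": 0.1},
]

# Reversible, low-stakes, urgent: the heuristic fires ("patch now").
print(decide(options, time_pressure=0.9, stakes=0.3, reversible=True)["name"])
# Irreversible: deliberation wins ("do nothing", value 0.5 - risk 0.1 = 0.4).
print(decide(options, time_pressure=0.9, stakes=0.3, reversible=False)["name"])
```

On this toy account, the "wisdom" lives in the `decide` function, not in either mode of thinking by itself.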

As I said, these are just hypotheses, just guesses, based on how I define wisdom.  Despite having thought about it for quite some time, I am virtually certain that I still don't have a good handle on it.

But that is not to say that I don't think there is something there.  Even if the concept of artificial wisdom were only used to help communicate the current state of AI to non-experts (e.g., "Our AIs exhibit some elements of general intelligence but very little wisdom"), it could, perhaps, help describe the state of the art more clearly while also driving research more directly.

In this regard, it is also worth noting that modern AI dates back to at least the 1950s and that it has gone through two full-blown AI "winters" during which most scientists and funders thought that AI would never go anywhere.  In other words, it has taken many years, and been a bit of a roller coaster ride, to get to where we are today.  It would seem unrealistic to expect artificial wisdom to follow a different path, but it is, I would argue, a path worth taking.

Note:  The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Army, Department of Defense, or the U.S. Government. 
