Showing posts with label Army War College. Show all posts

Monday, July 24, 2023

Generative AI Is Like A ...

This will make sense in a minute...
Don't worry!  I'm going to fill in the blank, but before I do, have you played around with generative AI yet?  

If not, let's solve that problem first.

Go to Perplexity.ai--right now and before you read any further--and ask it a question.  Don't ask it a question it can't know the answer to (like, "What did I have for lunch?"), but do ask it a hard question that you do know the answer to (or for which you are at least able to recognize a patently bad answer).  Then, ask Perplexity some follow-up questions.  One or two should be enough.

Come back when you are finished.

Now rate the answers you got on a scale from 1-10.  One or two is a dangerous answer, one that could get someone hurt or cause real problems.  Give a nine or ten to an actionable answer, one that you could use right now, as is.

I have had the opportunity to run this exercise with a large number of people at a variety of conferences and training events over the last six months.  First, I consistently find that only about a third of the crowd has ever used any generative AI (like Perplexity or ChatGPT), though that number seems to be going up (as you would expect) over time.

I have rarely heard anyone rate an answer a one or two, and there are always at least a couple of people who give the answer they received a nine or ten.  Other members of each audience typically gave scores that ranged across the spectrum, of course, but the average seemed to be about a six.

Yesterday, I gave this same exercise to about 30 people, and there were no ones or twos, while three people (10%) gave their answer a nine or ten.  No one gave the answer less than a five.  No one.

While anecdotal, it captures a trend that has been thoroughly documented across a number of different domains:  Generative AI isn't hitting like a freight train.  It's hitting like one of those high-speed, Japanese bullet trains, vaporizing traditional paradigms so quickly that they still don't know that they are already dead (For example...).

Or is it?

Thanks to some forward-thinking policy guidance from the leadership here at the Army War College, my colleagues Dr. Kathleen Moore and LTC Matt Rasmussen and I were able to teach a class for most of last year with the generative AI switch set to "on."

The class is called the Futures Seminar and is explicitly designed to explore futures relevant to the Army, so it was perfectly appropriate for an exploration of AI.  It is also an all-year elective course, so we were able to start using these tools when they first hit the street in November 2022 and continue using them until the school year ended in June.  Finally, Futures Seminar students work on research questions posed by Army senior leaders, so lessons learned from this experience ought to apply to the real world as well.

We used generative AIs for everything.  We used them for brainstorming.  We used them to critique our analysis.  We used them to red-team.  We created our own bots, like DigitalXi, which was designed to take the perspective of Xi Jinping and answer our questions as he would.  We visualized using Midjourney and DALL-E 2 (see picture above, made with Midjourney).  We cloned people's voices and created custom videos.  We tapped into AI aggregation sites like Futurepedia and There's An AI For That to find tools to help create everything from custom soundtracks to spreadsheets.

We got lots of feedback from the students and faculty, of course, both formal and informal.  We saw two big trends.  The first is that people either start at the "AI is going to save the earth" end of the spectrum or the "AI is going to destroy the earth" end.  For people who haven't tried it yet, there seems to be little middle ground.  

The second thing we saw is that, over time and sort of as you would expect, people develop a more nuanced view of AI the more they use it.  

In the end, if I had to boil all of the comments and feedback down to a single line, it would be this: generative AI is like a blazingly fast, incredibly average staff officer.

Let me break that down a bit.  Generative AI is incredibly fast at generating an answer.  I think this fools people, though.  It makes it seem like it is better than it actually is.  On real-world problems, with second- and third-order causes and consequences that have to be considered, the AIs (and we tried many) were never able to just nail it.  They were particularly bad at seeing and managing the relationships between the moving pieces of complex problems and particularly good at doing administrivia (I got it to write a great safety SOP).  In the end, the products were average, sometimes better, sometimes worse, but, overall, average.  That said, the best work tended to come not from an AI alone or a student alone, but from the human and machine working together.

I think this is a good place for USAWC students to be right now.  The students here are 25-year military professionals who have all been successful staff officers and commanders.  They know what good, great, average, and bad staff work looks like.  They also know that, no matter what the staff recommends, if the commander accepts it, the work becomes the commander's.  In other words, if a commander signs off on a recommendation, it doesn't matter if it came from two tired majors or a shiny new AI.  That commander now owns it.  Finally, our students are comfortable working with a staff.  Seeing the AI as a staff officer instead of as an answer machine is not only a good place for them to be mentally, but also likely to be the place where the best work is generated.

Finally, everyone--students and faculty alike--noted that this is only where AI currently is.  Everyone expects it to get better over time, for all those ones and twos from the exercise above to disappear and for the nines and tens to grow in number.  No one knows what that truly means, but I will share my thoughts on this in the next post.

While all this evidence is anecdotal, we also took some time to run some more formal studies and more controlled tests.  Much of that is still being written or shopped around to various journals, but two bits of evidence jumped out at me from a survey conducted by Dr. Moore.

First, she found that our students, who had worked with AI all year, perceived it to be 20% more useful to the Army than the rest of the student body did (and 31% more useful than the faculty did).  Second, she also found that 74% of Futures Seminar students walked away from the experience thinking that the benefits of developing AI outweigh the risks, with only 26% unsure.  General population students were much more risk averse, with only 8% convinced the benefits outweigh the risks, a whopping 55% unsure, and 37% saying the risks outweigh the benefits.

This last finding highlights something of which I am now virtually certain:  The only real way to learn about generative AI is to use it.  No amount of lectures, discussions, PowerPoint slides, or what have you will replace just sitting down at a computer and using these tools.  What you will find is that your own view will become much more informed, much more quickly, and in much greater detail than through any other approach you might take to understand this new technology.

Gaining this understanding is critical.  Generative AI is currently moving at a lightning pace.  While there is already some talk that the current approach will reach a point of diminishing returns due to data quality, data availability, and the cost of training, I don't think we will reach that point anytime soon.  Widely applicable, low-cost AI solutions are no longer theoretical.  Strategic decision-makers have to start integrating their impact into their plans now.

Monday, July 22, 2019

I Made It!

I started my new job as Professor of Strategic Futures at the US Army War College last week.  So far, it has been a fairly predictable, if seemingly unending, series of orientations, mandatory trainings, and security briefings.  I don't mind.  To paraphrase Matthew, "What did I go into the Army to see?  A man running without a PT belt?"

What I have been impressed with is the extraordinary depth of knowledge and genuine collegiality of the faculty.  It is an interesting feeling to be constantly surrounded by world class experts in virtually any domain.

Equally impressive is the emphasis on innovation and experimentation.  I am surrounded by an example of this right now.  I am writing this post on one of a number of open access commercial network machines in the War College library.  In the back of the room, a professor is leading an after action review of an exercise built around Compass Games' South China Sea war game (BTW, if you think it odd that the Army would have students play a scenario which is largely naval in nature, you are missing my point about innovation and experimentation). 

Scattered throughout the rest of the library are recently acquired, odd-shaped pieces of furniture designed to create collaborative spaces, quiet spaces, and resting spaces (among others).  Forms soliciting feedback suggest that the library is working hard to figure out what kind of spaces its patrons want, and what kind of furniture and equipment would best support those needs.  In the very rear of the building, there is a room undergoing a massive reconstruction.  No telling what is about to go in there, but it is clear evidence that the institution is not standing still.  

I will continue to write here on Sources and Methods, of course.  I also hope to get a few things published in the War College's own online journal, The War Room (check it out if you haven't; it's very cool).  Other than that, I look forward to pursuing some of my old lines of research and adding a few new ones as well.

For those of you who want to contact me, you can call me in my office at 717-245-4665, email me at kristan dot j dot wheaton dot civ at mail dot mil or, as always, email me at kris dot wheaton at gmail dot com.  You can also message me on LinkedIn.

Wednesday, March 2, 2011

New Thinking On How AFRICOM Can Help Increase Stability (Strategic Studies Institute)

The Army War College's Strategic Studies Institute has just published what I think is an extremely interesting and insightful monograph about the kinds of engagement activities AFRICOM needs to consider in order to help increase stability in sub-Saharan Africa.

Titled "Civilian Skills for African Military Officers to Resolve the Infrastructure, Economic Development, and Stability Crisis in Sub-Saharan Africa" and authored by Diane Chido, this 56-page monograph contains "a good bit of good thinking" and should be required reading for anyone in the defense cooperation, defense partnership, or security assistance communities that support African missions.

Intelligence professionals should find this volume interesting as well for its exploration, in case-study format, of previous stability efforts in Africa.  The author highlights a number of indicators that probably aren't normally considered but seem to be highly relevant to any peacekeeping and stability operations in which the US might participate in Africa.

It is free to download here.  Worth checking out. 

(Full disclosure:  Diane is a friend and frequent co-author of mine.  I had nothing to do with this monograph, however (she wouldn't even let me see a draft...).  I first took a look at it today and was, as I usually am, impressed with her grasp of the issue and the quality of her recommendations.  As someone who spent several years working defense cooperation issues, I think she has hit the nail on the head.  While I may be biased, I still think that, if you work African issues at all, you owe it to yourself to read this book.)