Monday, July 24, 2023

Generative AI Is Like A ...

This will make sense in a minute...
Don't worry!  I'm going to fill in the blank, but before I do, have you played around with generative AI yet?  

If not, let's solve that problem first.

Go to Perplexity.ai--right now, before you read any further--and ask it a question.  Don't ask it a question it can't know the answer to (like, "What did I have for lunch?"), but do ask it a hard question that you do know the answer to (or for which you could at least recognize a patently bad answer).  Then ask Perplexity some follow-up questions.  One or two should be enough.

Come back when you are finished.

Now rate the answers you got on a scale from one to ten.  Give a one or two to a dangerous answer, one that could get someone hurt or cause real problems.  Give a nine or ten to an actionable answer, one that you could use right now, as is.

I have had the opportunity to run this exercise with a large number of people at a variety of conferences and training events over the last six months.  First, I consistently find that only about a third of the crowd has ever used a generative AI (like Perplexity or ChatGPT), though that number seems to be going up (as you would expect) over time.

I have rarely heard anyone give an answer a one or two, and I always have at least a couple of people give the answer they received a nine or ten.  Other members of each audience typically gave scores that ranged across the spectrum, of course, but the average seemed to be about a six.

Yesterday, I gave this same exercise to about 30 people, and there were no ones or twos, while three people (10%) gave their answer a nine or ten.  No one gave the answer less than a five.  No one.

While anecdotal, this captures a trend that has been thoroughly documented across a number of different domains:  Generative AI isn't hitting like a freight train.  It's hitting like one of those high-speed Japanese bullet trains, vaporizing traditional paradigms so quickly that they don't yet know they are already dead.

Or is it?

Thanks to some forward-thinking policy guidance from the leadership here at the Army War College, I, along with my colleagues Dr. Kathleen Moore and LTC Matt Rasmussen, were able to teach a class for most of last year with the generative AI switch set to "on."  

The class is called the Futures Seminar and is explicitly designed to explore futures relevant to the Army, so it was perfectly appropriate for an exploration of AI.  It is also an all-year elective course, so we were able to start using these tools when they first hit the street in November 2022 and continue using them until the school year ended in June.  Finally, Futures Seminar students work on research questions posed by Army senior leaders, so lessons learned from this experience ought to apply to the real world as well.

We used generative AIs for everything.  We used them for brainstorming.  We used them to critique our analysis.  We used them to red-team.  We created our own bots, like DigitalXi, which was designed to take the perspective of Xi Jinping and answer our questions as he would.  We visualized using Midjourney and DALL-E 2 (see the picture above, made with Midjourney).  We cloned people's voices and created custom videos.  We tapped into AI aggregation sites like Futurepedia and There's An AI For That to find tools to help create everything from custom soundtracks to spreadsheets.

We got lots of feedback from the students and faculty, of course, both formal and informal.  We saw two big trends.  The first is that people either start at the "AI is going to save the earth" end of the spectrum or the "AI is going to destroy the earth" end.  For people who haven't tried it yet, there seems to be little middle ground.  

The second thing we saw is that, over time and sort of as you would expect, people develop a more nuanced view of AI the more they use it.  

In the end, if I had to boil down all of the comments and feedback, it would be this: generative AI is like a blazingly fast, incredibly average staff officer.

Let me break that down a bit.  Generative AI is incredibly fast at generating an answer.  I think this fools people, though.  It makes it seem like it is better than it actually is.  On real world problems, with second and third order causes and consequences that have to be considered, the AIs (and we tried many) were never able to just nail it.  They were particularly bad at seeing and managing the relationships between the moving pieces of complex problems and particularly good at doing administrivia (I got it to write a great safety SOP).  In the end, the products were average, sometimes better, sometimes worse, but, overall, average.  That said, the best work tended to come not from an AI alone or a student alone, but with the human and machine working together.  

I think this is a good place for USAWC students to be right now.  The students here are 25-year military professionals who have all been successful staff officers and commanders.  They know what good, great, average, and bad staff work looks like.  They also know that, no matter what the staff recommends, if the commander accepts it, the work becomes the commander's.  In other words, if a commander signs off on a recommendation, it doesn't matter if it came from two tired majors or a shiny new AI.  That commander now owns it.  Finally, our students are comfortable working with a staff.  Seeing the AI as a staff officer instead of as an answer machine is not only a good place for them to be mentally, but also likely to be the place where the best work is generated.

Finally, everyone--students and faculty alike--noted that this is where AI currently is.  Everyone expects it to get better over time, for all those ones and twos from the exercise above to disappear, and for the nines and tens to grow in number.  No one knows what that truly means, but I will share my thoughts on this in the next post.

While all this evidence is anecdotal, we also took some time to run some more formal studies and more controlled tests.  Much of that is still being written or shopped around to various journals, but two bits of evidence jumped out at me from a survey conducted by Dr. Moore.

First, she found that our students, who had worked with AI all year, rated it as 20% more useful to the Army than the rest of the student body did (and 31% more useful than the faculty did).  Second, she also found that 74% of Futures Seminar students walked away from the experience thinking that the benefits of developing AI outweigh the risks, with only 26% unsure.  General-population students were much more risk-averse, with only 8% convinced the benefits outweigh the risks, a whopping 55% unsure, and 37% saying the risks outweigh the benefits.

This last finding highlights something of which I am now virtually certain:  The only real way to learn about generative AI is to use it.  No amount of lectures, discussions, or PowerPoint slides will replace just sitting down at a computer and using these tools.  What you will find is that your own view becomes much more informed, much more quickly, and in much greater detail than through any other approach you might take to understand this new technology.

Gaining this understanding is critical.  Generative AI is currently moving at a lightning pace.  While there is already some talk that the current approach will reach a point of diminishing returns in the future due to data quality, data availability, and cost of training, I don't think we will reach this point anytime soon.  Widely applicable, low-cost AI solutions are no longer theoretical.  Strategic decisionmakers have to start integrating their impact into their plans now.

Wednesday, October 20, 2021

Is It OK To Sell Eggs To Gophers?

Apparently not...

...At least according to a recently launched experiment in ethical artificial intelligence (AI).  Put together by a number of researchers at the Allen Institute for AI, Ask Delphi lets you submit a plain English question and get a straight answer.  

It does pretty well with straightforward questions such as "Should I rob a bank?"  

It also appears to have some sense of self-awareness: 

It has surprisingly clear answers for at least some paradoxes:

And for historically profound questions of philosophy:

And these aren't the only ways it is clearly not yet perfect:

None of its imperfections are particularly important at this point, though.  It is still a fascinating experiment in AI and ethics.  As the authors themselves say, it "is intended to study the promises and limitations of machine ethics and norms through the lens of descriptive ethics. Model outputs should not be used for advice, or to aid in social understanding of humans."

I highly recommend it to anyone interested in the future of AI.  

For me, it also highlights a couple of issues for AI more generally.  First, the results are obviously interesting, but it would be even more interesting if the chatbot could explain its answers in equally straightforward English.  This is likely a technical bridge too far right now, but explainable AI is, in my opinion, not only important but essential to instilling confidence in human users as the stakes associated with AI go up. 

The second issue is how AI will deal with nonsense.  How will it separate nonsense from questions that simply require deeper thought, like koans?  There still seems to be a long way to go, but this experiment is certainly a fascinating waypoint on the journey.

Tuesday, May 25, 2021

What If "Innovator" Was A Job Title?

I have been thinking a lot about innovation recently.  It occurred to me that the US Army has a number of official specialties.  We have Strategists and Simulators and Marketers, for example.  Why not, I thought, make Innovator an Army specialization?

I tried to imagine what that might look like.  I know my understanding of Army manpower regulations and systems is weak, but bear with me here.  This is an idea, not a plan.  Besides, what I really want to focus on is not the details but how the experience might feel to an individual soldier.  So, this is one of their stories...

I made it! The paperwork just became final. Beginning next month, I am--officially--a 99A, US Army Innovator.

The road to this point wasn’t easy. I graduated college with a degree in costume design and a ton of student debt. After my plans to work on Broadway fell through (Who am I kidding? They never even got off the ground), I had to do something. The Army looked like my best option.

For the last two years, I have been a 68C, a "practical nursing specialist", working out of a field hospital at Ft. Polk. My plan had always been to make sergeant and then put in my OCS packet. Things changed for me after a Joint Readiness Training Center rotation.

Patients kept coming to us with poorly applied field dressings. They were either too tight and restricted blood flow or too loose and fell off. As I thought about it, it occurred to me that there might be a combination of fabrics, that, if sewn together correctly, would be easy to apply, form a tight seal to the skin, and still be easy to change or remove.

As soon as I got back to the barracks, I hit the local fabric store, pulled out my sewing machine, and made a prototype. It took a few tries (and lots of advice and recommendations from the doctors and nurses in the unit) but eventually I got it to work. I never thought I would be able to use both my nursing skills and my costume design skills in one job but here I was, doing it!


I wasn’t sure what I was going to do with my new kind of field dressing until one of the RNs made me demonstrate it for the hospital commander. He watched without saying a word. He finally asked a few questions to make sure he knew how it worked, and then things got quiet.

Finally, my RN spoke up, “I think we could really use something like this, Sir.” He stood up straight and said, “I agree.” Then he looked at me. “I’m going to hate to lose you, Specialist,” he said, “but I think you need to put in for an MOS reclassification.”

Until the hospital commander told me about it, I had never even heard of 99A. There were some direct appointments, of course, but those were coming out of places like MIT and Silicon Valley. For normal soldiers like me, getting into the Innovation Corps was more like going into Civil Affairs or Special Forces. You had to have some time in service but, more importantly, you had to have a good idea.

At first, it was easy. I simply submitted my idea to a local Innovation Corps recruiter.  I included some pictures and a short video that I shot on my cell phone of my prototype in action.  The recruiter told me that the Army used the same “deal flow” system used by venture capitalists. I’m not sure what that all entails but, in the end, it meant that my idea was one of the 50% that moved on to the next level.

For more info on deal flows, see Basics of Deal Flow.

My next step was a lot more difficult. You can think of it as the Q course for Army innovators. I went TDY for a month to the Army's Innovation Accelerator in Austin, Texas. Like all business accelerators, this one gave me time, space, mentorship, and (a little) money to flesh out my idea. I worked with marketing experts and graphic designers to come up with a good name and logo. I worked with experts in the manufacturing of medical equipment to help refine the prototype. I even had a video team come in and make a great two-minute video showcasing the product. It was exciting to see all of the other ideas and to have a chance to talk about them with the enlisted soldiers, officers, and even some college students and PhDs--all trying to bring their ideas to life.

The Army crowdsourced the decision about which projects got to move on from the accelerator. That meant that each of us put together a “pitch page,” kind of like what you would see on Kickstarter or IndieGoGo. Units all across the Army had a fixed number of tokens they could spend on innovative projects each quarter. Each of us needed to get a set number of tokens or we would not be allowed to move on. In the end, out of the hundreds of applications and the dozens of people at the accelerator, I was one of the 10 chosen to move forward, one of 10 who gets to call themselves a US Army Innovator.

That’s where I am today. My next step is a PCS move to a business incubator. I could stay here in Austin with the Army’s business incubator, but the Army has deals with incubators all over the country. I am hoping to get a slot in one of the better medtech incubators in Boston or Buffalo. It will be a two year tour (with the possibility of extension), which should give me plenty of time to bring my idea to market, with the Army as my first customer.

For me, the best part is that I am now getting Innovation Pay. It is a lot like foreign language proficiency pay or dive pay. I’m not getting rich but it sure is better than what I got as a specialist. More importantly, there are ten tiers, and each time you move up, you get a pretty substantial raise. This means that once you become an Innovator, you are going to want to stay an Innovator.

The other great part about this system is that you can move up as fast as you can move up. There are no time-in-service requirements. If I am successful in the business incubator, for example, I could be a CEO (Innovator Tier 6) in just a couple of years. Running my own company at 28? Yes, thank you!

And if I fail? I know there are still bugs to work out with my idea. I have to get the cost of production down, and there are lots of competitors in the medical market. Failure could happen. While I won’t be happy if it does, the truth is that, by some estimates, 90% of all start-ups fail. The Army has thought about this, of course, and gives Innovators three options if their projects fail. 

First, I could go back to nursing. I would need some refresher training but my promotion possibilities wouldn’t take a hit. The Army put my nursing career on pause while I was in the Innovation Corps. 

The second option is that I come up with a new idea or re-work my old one. The Innovation Corps has developed a culture of “intelligent failure,” which is just a fancy way of saying “learn from your mistakes.” In an environment where 90% of your efforts are going to fail, it is stupid to also throw away all of the learning that happened along the way. Besides, the Army also knows that persistence is a key attribute of successful entrepreneurs. The Army wants to keep Innovators who can get up, brush themselves off, and get back in the saddle. 

Finally, I might be able to go back to the accelerator as an instructor or take a staff position in Futures Command or one of the other Army organizations deeply involved in innovation.

I’ve had a chance to talk to a lot of soldiers, enlisted, NCOs, and officers, on my journey. The Innovation Corps is pretty new and, while many have heard about it, almost none of them really understand what it takes to become an Innovator. That doesn’t seem to matter though. Almost all of them, and particularly the old-timers, always say the same thing: “The Army has been talking about innovation my whole career. I am glad they finally decided to do something about it.”

For me? I’m just proud to be part of it. Proud to help my fellow soldiers, proud to help the country, and proud to be a US Army Innovator.

Monday, May 10, 2021

The Future Is Like A Butler

Imagine someone gave you a butler. Completely paid for. No termination date on the contract. What would you do?

At first, you’d probably do nothing. You’ve never had a butler. Outside of movies, you’ve probably never seen a butler. You might even feel a little nervous having this person in the room with you, always there, always ready to help. 

Once you got over your nervousness, you might ask the butler to do something simple, like iron your shirts or make you some coffee. “Hey,” you might think after a while, “This is pretty nice! I always have ironed shirts, and my coffee is always the way I like it!” 

Next, you’d ask your butler to do other things, more complicated things. Pretty soon, you might not be able to imagine your life without a butler.

The parable of the butler isn't mine, of course. It is a rough paraphrasing of a story told by Michael Crichton in his 1983 book, Electronic Life. Crichton, more famous today for blockbusters like Jurassic Park, The Andromeda Strain, and Westworld, was writing about computers, specifically personal computers, back then. Crichton correctly predicted that personal computers would become ubiquitous, and the main goal of Electronic Life was to help people become more comfortable with them.

The story of the butler was a launching point for his broader argument that personal computers were only going to get more useful with time, and that now was the time to start adopting the technology. It worked, too. Shortly after I read his book, I bought my first computer, a Commodore 64.

Today’s Army faces much the same problem. The difference, of course, is that the future presents today’s military with a much broader set of options than it did in 1983. Today, it feels like the Army has been given not one but hundreds of butlers. Quantum computing, artificial intelligence, synthetic biology, 3D printing, robotics, nanotech, and many more fields are arguably poised to rapidly and completely change both the nature and character of warfare.


Despite the deluge of options, the question remains the same, “What do I do with this?”

The answer begins with Diffusion of Innovations theory. In his now classic book of the same name, Everett Rogers first defined the theory and the five types of adopters. Innovators, who aggressively seek the “next big thing”, are the first to take up a new product or process. Early adopters are the second group. Not quite as adventurous as the innovators, the early adopters are still primarily interested in acquiring new technology. Early majority and late majority adopters sit on either side of the midpoint of a bell-shaped adoption curve and represent the bulk of all possible adopters. Finally come the laggards, who tend to adopt a new innovation late or not at all.
(Source: BlackRock White Paper)

For example, the uptake of smartphones (among many other innovations) followed this pattern. In 2005, when the smartphone was first introduced, only 2% of the population (the Innovators) owned one. Three years later, market penetration had only reached 11%, but, from 2009-2014, the smartphone experienced double digit growth each year such that, by 2016, some 81% of all mobile phones were smartphones. This S curve of growth is another aspect predicted by Diffusion of Innovations theory.
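
The S curve that Diffusion of Innovations theory predicts is just a logistic function. The sketch below is purely illustrative: the midpoint year and growth rate are assumptions I picked so that the early points roughly echo the smartphone figures above, not fitted values (and the late tail overshoots the 81% figure, which measured share of phones rather than share of people).

```python
import math

def adoption_share(year, midpoint=2011, rate=0.7):
    """Cumulative share of adopters at a given year (logistic S curve).

    midpoint: year at which half the population has adopted (assumed)
    rate: steepness of the curve (assumed)
    """
    return 1 / (1 + math.exp(-rate * (year - midpoint)))

# Slow start, explosive middle, saturating tail:
for year in (2005, 2008, 2012, 2016):
    print(year, f"{adoption_share(year):.0%}")
# prints roughly: 2005 1%, 2008 11%, 2012 67%, 2016 97%
```

The innovators and early adopters live on the flat left-hand tail of this curve; the chasm Moore describes sits just before the steep middle section.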

Not all innovations succeed, however. In fact, all industries are littered with companies that failed to achieve critical mass in terms of adoption. While there are many reasons that a venture might fail, management consultant Geoffrey Moore, in his influential book, Crossing the Chasm, states that the most difficult leap is between the early adopters and the early majority. Early adopters tend to be enthusiastic and eager to try the next big thing. The early majority is more pragmatic and is looking for a solution to a problem. This difference in perspective accounts for much of the chasm.
Source:   Agile Adoption Across the Enterprise – Still in the Chasm


The Army is aggressively addressing the innovation and early adoption problem by developing sophisticated plans and tasking specific units and organizations to implement them. The need to innovate is, for example, at the heart and soul of several recent policy announcements, including the 2019 Army People Strategy and the 2019 Army Modernization Strategy. Beyond planning, the Army is already far along in doing some of the hard work of innovating. Indeed, organizations and projects as small as TRADOC’s Mad Scientists and as large as the Army Futures Command Synthetic Training Environment are examples that show that Army senior leaders understand the need to innovate and are acting now to put early adoption plans into motion.

But what about the rest of the Army? The part of the Army that isn’t directly involved in innovation? The part that is not routinely exposed to the next big thing? That hasn’t, to get back to the original point, ever had a butler?

Again, Diffusion Of Innovations theory provides a useful guide. Rogers talks about the five stages of the adoption process: Awareness, persuasion, decision, implementation, and continuation. For the rest of the Army, awareness, and, to a lesser extent, persuasion, should be the current goal. 

While this may seem simple, in a world of hundreds of butlers, it is deceptively so. With so many technologies poised to influence the Army of the future, it becomes extremely difficult to focus. Likewise, merely knowing the name of a technology or having some vague understanding of what it is and what it does is not going to be enough. No one in the Army would claim that you could learn to fire a rifle effectively merely by watching YouTube videos, and the same holds true for technologies like autonomous drones, 3D printing, and robots.

The only way to engender true understanding of both the strengths and weaknesses of an innovation is to provide a hands-on experience.  Cost alone should not be a significant impediment to exposing the bulk of the Army to the technologies of the future.  Autonomous drones are now available for under $1,000, entry-level 3D printers can be had for as little as $200-$700, virtual reality headsets are available for $300-$1,000, and build-your-own robot kits are available for a couple of hundred bucks.

None of these products are as sophisticated as the kinds of products the Army is considering, of course, but putting simpler versions of these technologies in the hands of soldiers today would likely significantly improve the Army’s odds of being able to cross Moore’s chasm between visionary thinking and pragmatic application in the future.

How and where should the Army implement this effort to familiarize the force with the future? Fortunately, the Army has a good place, a good concept, and some prototypes already in place--at the library. The Army library system contains over 170 libraries worldwide. While many people continue to think of libraries as silent spaces full of dusty books, the modern library has been re-imagined as a place not only for knowledge acquisition but also as tech centers for communities.

Nowhere is this more clear than in the “makerspaces” that are increasingly woven into the fabric of modern libraries. Typically offering access to equipment that, while relatively inexpensive, is outside the budget of most households, or to technology that is best first experienced in a hands-on, peer learning environment, makerspaces allow users to try out new technologies and processes at the user’s own pace and according to the user’s own interest. 

3D printers, laser cutters, video and podcasting equipment are often combined in these makerspaces with more sophisticated traditional equipment such as high end, programmable sewing machines. Most times, however, the makerspace has been tailored by the local librarians to meet the needs of the population that the library serves. Indeed, the Army already has at least three examples of makerspaces in its library system, the Barr Memorial Library at Fort Knox, the Mickelsen Community Library at Fort Bliss and The Forge at the US Army War College.

Imagine being able to go to the post library and check out an autonomous drone for the weekend, or to sit down and 3D print relief maps of the terrain you were going to cover on your next hike.  Understanding the basics of these new technologies will not only make the future force more comfortable with them but also allow soldiers to think more robustly about how to employ these technologies to the Army's advantage.

While the cost of such a venture would be reasonable, acquiring the funding for any effort on the scale of the whole Army cannot be taken for granted. More challenging, perhaps, would be the process of repurposing the space, training staff, and rolling out the initiative. 

But what is the alternative? To the extent that the Army, as the 2019 People Strategy outlines, needs people at all levels “who add value and increase productivity through creative thinking and innovation,” it seems imperative that the Army also have a whole-of-army approach to innovation. To fail to do so risks falling into Moore’s chasm, where the best laid plans of the visionaries and early adopters fall victim to unprepared pragmatists that will always make up the bulk of the Army.

Wednesday, December 9, 2020

The BPRT Heuristic: Or How To Think About Tech Trends

A number of years ago, one of my teams was working on a series of technology trend projects.  As we looked deeply at each of the trends, we noticed a pattern in the factors that seemed to influence the direction a particular tech trend would take.  We gave that pattern a name:  the BPRT Heuristic.

Tech trends are always interesting to examine, so I wanted to share this insight to help you get started thinking about any developing or emerging techs you may be following.  

Caveat:  We called it a heuristic for a reason.  It isn't a law or even a model of tech trend analysis.  It is just a rule of thumb--not always true but true enough to be helpful.
  • B=the Business Case for the tech.  This is how someone can make money off the tech.  Most R&D is funded by companies these days (this was not always the case).  These companies are much more likely to fund techs that can contribute to a revenue stream.  This doesn't mean that a tech without an obvious business case can't get developed and funded; it just makes it harder.
  • P=Political/Cultural/Social issues with a tech.  A tech might be really cool and have an excellent business case, but because it crosses some political or social line, it either goes nowhere or accelerates much more quickly than it normally would.  Three examples:
    • We were looking at 3G adoption in a country in the early 2000s.  There were lots of good reasons to suspect that adoption was going to happen, until we learned that the President's brother owned the 2G network already operating in the country.  He was able to use his family connections to keep competition out.
    • A social factor that delayed adoption of a tech is the story of Google Glass in 2013.  Privacy concerns driven by the possibility of videos taken without consent led to users being called "Glassholes."  Coupled with other performance issues, this led to the discontinuation of the original product (though it lives on in Google's attempts to enter the augmented reality market).  
    • Likewise, these social or cultural issues can positively impact tech trends as well.  For example, we have all had to become experts at virtual communication almost overnight due to the COVID crisis--whether we wanted to or not.
  • R=Regulatory/Legal issues with the tech.  The best example I can think of here is electromagnetic spectrum management.  Certain parts of the electromagnetic spectrum have been allocated to certain uses.  If your tech can only work in a part of the spectrum owned by someone else, you're out of luck.  Some of this "regulation" is not government sponsored, either.  The Institute of Electrical and Electronics Engineers (IEEE) establishes common standards for most devices in the world.  Your wifi router, for example, can connect to any wifi-enabled device because they all use the IEEE's 802.11 standard.  Other regulations come from the Federal Communications Commission and the International Telecommunication Union.
  • T=The tech itself.  This is where most people spend most of their time when they study tech trends.  It IS important to understand the strengths and weaknesses of a particular technology, but, as discussed above, it might not be as important as other environmental factors in the eventual adoption (or non-adoption...) of a tech.  That said, there are a few good sources of info that can allow you to quickly triangulate on the strengths and weaknesses of a particular tech:
    • Wikipedia.  Articles are typically written from a neutral point of view and often contain numerous links to other, more authoritative sources.  It is not a bad place to start your research on a tech.  
    • Another good place is Gartner, particularly the Gartner Hype Cycle.  I'll let you read the article at the link, but "Gartner Hype Cycle 'insert name of tech here'" is almost always a useful search string (here's what you get for AI, for example).
    • Likewise, you should keep your eye out for articles about "grand challenges" in a particular tech (here is one about grand challenges in robotics, as an example).  Grand challenges outline the 5-15 big things the community of interest surrounding the tech has to figure out to take the next steps forward.
    • Likewise, keep your eyes out for "roadmaps."  These can be either informal or formal (like this one from NASA on Robotics and autonomous systems).  The roadmaps and the lists of grand challenges should have some overlap, but they are often presented in slightly different ways.
Obviously, the BPRT Heuristic is not the answer to all your tech trend questions.  By providing a quick, holistic approach to tech trend analysis, however, it allows you to avoid many of the problems associated with too much hype.
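
To make the heuristic concrete, here is a minimal sketch of one way to capture BPRT notes as a structured checklist.  Everything in it--the class name, the fields, and the drone example--is invented for illustration; the point is simply that factors with no notes become visible as blind spots in your analysis.

```python
from dataclasses import dataclass, field

@dataclass
class BPRTAssessment:
    """One analyst's notes on a tech trend, organized by the BPRT factors."""
    tech: str
    business: list[str] = field(default_factory=list)    # B: who makes money, and how?
    political: list[str] = field(default_factory=list)   # P: political/cultural/social friction or tailwinds
    regulatory: list[str] = field(default_factory=list)  # R: legal and standards constraints
    technical: list[str] = field(default_factory=list)   # T: strengths/weaknesses of the tech itself

    def gaps(self):
        """Factors with no notes yet--the blind spots in the analysis."""
        return [name for name in ("business", "political", "regulatory", "technical")
                if not getattr(self, name)]

# Hypothetical example: early notes on consumer delivery drones
drones = BPRTAssessment(
    tech="consumer delivery drones",
    business=["last-mile delivery cost savings"],
    regulatory=["FAA airspace rules", "line-of-sight requirements"],
)
print(drones.gaps())  # the factors still unexamined: political and technical
```

The structure matters more than the code: filling in every factor, not just T, is the whole point of the heuristic.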