POV: AI as Plastic: Useful, Cheap, Fragile
Voice of Kevin Gold, Associate Professor of the Practice, BU Computing & Data Sciences
The sensational debut of ChatGPT last year has caused widespread speculation about how artificial intelligence (AI) will impact our lives in the years to come. The system can flexibly answer a wide range of queries and commands. It can, for example, explain the major differences between sodium and lithium in verse while imitating a pirate (the reply I got to this prompt starts, "Avast ye, matey! Gather 'round and listen keen, /As I spin ye a tale of metals, lithium and sodium, unseen."). Emerging at a time when the vast majority of AI research was still focused on bringing machine learning (ML) to bear on smaller, more focused problems, ChatGPT resurrected the idea of a single intelligent AI that could become as smart as a person, or smarter. With that possibility arose the old specter from science fiction: what if we reach a point where the AI gets out of hand and somehow destroys humanity?

I am writing to offer a different story that is not quite so apocalyptic. Yes, AI is going to be pervasive and will change the ways we live our lives in ways big and small. But the analogy I want to offer is that of ... plastic.
Plastic has certainly changed our lives in ways big and small; it has changed them mostly for the better, but a little for the worse; and it is overall so flexible that it's hard to hold a sweeping stance against it in principle. It is dangerous only when used for bad purposes, trusted to bear more weight or heat than it was designed for, or used without a plan to protect the environment (training ChatGPT is energy intensive). It won't take over the world in a nefarious sense, but it definitely puts aesthetics in the back seat to utility, so some people will find a world filled with it distasteful or annoying. Generations that grew up with it take it for granted.
"Yes, AI is going to be pervasive and will change the ways we live our lives in ways big and small. But the analogy I want to offer is that of ... plastic."
If you haven't played around with ChatGPT yourself, you can do so at https://chat.openai.com/ (there's also the premium GPT-4, but assume all my observations apply to it as well). Its versatility is impressive. It can plan a weekly menu for a family with dietary restrictions. It can make up a story about particular subject matter, and alter the story on command to be funnier or to introduce more new characters. It can explain technical concepts using words that some target audience will be more likely to understand, as in, "explain quantum mechanics at a level appropriate for a fifth grader." It can occupy a similar niche to Google, answering queries such as "What is the traditional gift for a fifth anniversary?" but answering directly instead of merely supplying links (when I tested this question, it thought I still wanted a pirate poem: "Ahoy there! Ye be inquiring about the gift, For a grand occasion, the fifth anniversary, a special lift.....").
Of course, experience with ChatGPT also makes one familiar with its faults. It sometimes suggests recipes that are obviously flawed. One New York Times article details the disastrous results of trying to make a ChatGPT Thanksgiving. The stories it generates have common flaws such as failing to provide adequate description, failing to name characters, and relying heavily on stereotypes and clichéd tropes. It does not actually have the up-to-the-minute information of Google; retraining with more current information is expensive, so ChatGPT just uses information up to 2021. A lawyer made national news by citing court cases that ChatGPT had actually made up. The tendency toward "hallucination" in ChatGPT and its successor GPT-4 should make anyone wary of the supposed facts it cites.
"My claim is that these faults are not dealbreakers in the widespread adoption of ChatGPT and similar AIs, but that, as with plastic, there is a place for the cheap and breakable, so long as it is also easy to use and flexible."
My claim is that these faults are not dealbreakers in the widespread adoption of ChatGPT and similar AIs, but that, as with plastic, there is a place for the cheap and breakable, so long as it is also easy to use and flexible. The humble plastic fork is typically easy to break compared to a metal fork; yet it's used on many occasions where metal cutlery would just be prohibitively expensive to supply for everyone. Similarly, ChatGPT can perform some of the functions of an administrative assistant for a small business owner who otherwise would never have one.
"Our new restaurant is opening in Watertown tomorrow - write me a speech that especially thanks Smith Construction Co and advertises our shrimp scampi."
"What are the best conferences to attend in the Northeast for reaching construction companies that use cement mixers?"
("Ahoy there, my friend! If ye be lookin' to set sail and reach construction companies..." -- whoops, I left on the pirate talk. The five it recommends are World of Concrete Northeast, Construct New England, The Northeast Cementitious Materials & Concrete Conference, Northeast Regional Construction Expo, and Construction Institute Summit. World of Concrete exists, but is in Las Vegas. As far as I can tell, Construct New England does not exist. The Cementitious Materials conference appears to be in France in 2024. Etc....)
In these examples, the output is simultaneously useful and deeply flawed. Thanking Smith Construction Company can't be done right without the names of individuals in that company, which ChatGPT has no hope of supplying on its own; the way its (omitted) answer goes on and on about the shrimp scampi comes across as particularly gauche when it fails to recognize any individual people who helped. For the second query, the conferences returned are not all real and mostly in the wrong place this year. But each response is a starting point - the speech writer can ask for a revision that particularly calls out Foreman Mark, and the conference goer might ask in the next query how to get the cheapest flight to Vegas.
Not to Focus on the Flaws
Rush to Mediocrity
I predict, though, that for many applications, people will go with the cheap and expedient over "doing things right." Some people in our restaurant owner's situation would just read the first shrimp-scampi-themed speech that ChatGPT spat out. Someone trying to make a custom get-well-soon card may well accept ChatGPT's near-rhyme for "broken foot." (ChatGPT top choice: "Lookin' put."). Students all over the country are submitting low-B essays churned out by ChatGPT on a variety of subjects. As a rule, people who are bad at a skill will also be bad at assessing it in AIs. Plenty of people will look at its output and just say, "That sure is an essay comparing Pride and Prejudice to Lord of the Rings, all right." If the output suffices, it suffices, as much as it might pain a professional greeting card writer to see where this is headed.
So if people are going to rush to mediocrity, how do I know that an AI won't be put in charge of something vital, and thereby destroy humanity with its blunders? Well, I can take comfort in the fact that people generally don't try to build car motors out of meltable plastic, to continue the analogy. As engineers gain experience with a material, they recognize when it's not completely reliable. True, laypeople could conceivably take an AI's word as gospel and make important decisions that follow its advice; but it has always been an issue that human beings can have fairly dumb reasons for what they do, from bad Tarot card readings to the bad advice of paranoid uncle Fred. Important systems like power grids and nuclear weapons will typically have some safeguards already in place that deal with the fickle nature of human decision-making. There's no reason these can't be made relatively safe from AIs as well.
A recent open letter signed by many AI professionals called for a halt to AI development; and many famous AI experts, such as Geoff Hinton and Yoshua Bengio, have begun to say that we've perhaps come too far, too fast. I admit, I don't have the credentials of these luminaries. I haven't won a Turing Award like Hinton and Bengio. But I can state that I just don't see how we get from here to AI Doomsday. ChatGPT essentially averages over all its inputs coming from the web to try to reconstruct what it sees, so there's no particular reason to think it will start to somehow do better than average in all things. That's not what it's trained to do - it's trained to do things like what it saw on the web and in books, no more and no less (faster, sure, but not better).
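The "averaging over its inputs" point can be caricatured in a few lines of Python. The toy "model" below is my own illustration, not anything from OpenAI: it just counts which word most often follows each word in a tiny made-up corpus, then predicts the most frequent continuation. Real systems replace the counting with an enormous neural network, but the flavor of the training objective is the same - reproduce what typically came next in the training text, not something better than it.

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus (an assumption for illustration only).
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat" - the most common continuation, not the best one
```

The predictor can only echo the statistics of what it has seen; by construction, it has no mechanism for exceeding its training data, which is the sense in which I mean "no more and no less."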
Acknowledge the Risks
I'm not saying that other AI systems are without risk. Self-driving cars, deepfakes, and lethal autonomous weapons all pose their own unique risks to society if regulation can't keep up with the advances. Each of these is an AI technology that is only tangentially related to the advances present in ChatGPT, and so I assume they are not primarily what the current uproar is about; each has existed for years now, but it was only with ChatGPT that many professionals sounded the alarm of AI moving too fast. In the case of self-driving cars and lethal autonomous weapons, in fact, I'll note that their deployment so far, or lack thereof, has shown remarkable restraint - the relevant car makers and governments have recognized that these are not yet ready for prime time. I suspect we will still see these technologies deployed too soon and accidents will happen, but society has so far proven to be not totally stupid and reckless.
"AI is flexible and can fit into many niches, and basically none of them will end the world."
The AI, Paw Patrol Connection
Just maybe think twice if you think ChatGPT is telling you to launch nuclear missiles - or, more likely, to launch questionable recipes at your dinner guests.
About the author: Kevin Gold is an Associate Professor of the Practice for the Faculty of Computing and Data Sciences at Boston University, where he teaches artificial intelligence and introductory data science. He is a recipient of a Best Paper award from the International Conference on Development and Learning and has published in various AI venues, including the AAAI conference and the journal AIJ. He received his Ph.D. in Computer Science from Yale in 2008, and a bachelor's degree in computer science from Harvard in 2001.