The AI community has strong opinions on whether advances in AI should be feared or viewed optimistically. Here are some links to good thinkers on each side, with my opinion at the end. (OK, it doesn’t divide neatly in two; there is a lot of nuance.)
The “AI Doom” side:
This famous open letter, signed by many top researchers and business leaders, called for a pause on Giant AI Experiments. It set off alarm bells and drew significant media attention.
The CEO of OpenAI (the organization that created ChatGPT) testified before Congress about the potential dangers of AI. This also caused people to worry: if the CEO of OpenAI is worried, maybe we all should be.
One of the leading voices saying that AI could lead to our destruction is Eliezer Yudkowsky. He wrote this article for Time arguing that a pause is not good enough. You can find him on Twitter and on podcasts.
Rob Wiblin, the host of the 80,000 Hours Podcast and a leading thinker in the Effective Altruism movement, tweeted, “AI extinction fears have largely won the public debate.” He’s convinced there is a chance of extinction, and much of his recent work is about preventing it.
It was viewed as quite ominous when Geoffrey Hinton, a godfather of AI, left Google. Closer to my immediate circle, an Operations Management professor wrote an article, “Generative AI is not entertainment — it is already a threat to our way of life.”
It is easy to find articles in the media that try to convince you to be afraid of AI.
However, voices skeptical of the doom narrative are harder to find. Optimism must get fewer clicks. Here are the best optimistic sources I know of:
Marc Andreessen’s recent essay, Why AI Will Save the World, directly responds to everything above and is worth a read. He lays out a strong case for being optimistic while staying realistic. For example, he acknowledges that AI, like every other tool, will be used by bad actors. It is worth noting that Andrew Ng also seems to believe that AI has the potential to be more beneficial than harmful.
In an EconTalk interview hosted by Russ Roberts, Tyler Cowen was willing to consider the risks but noted that those warning of danger hadn’t presented a model of how things could go wrong. Such a model would make the failure scenario explicit and allow a more scientific discussion of whether it is plausible.
Maciej Ceglowski gave a talk called Superintelligence: The Idea That Eats Smart People (video version, text version). He makes lots of good points. A funny one: his roommate was the smartest person he knew, and all that roommate did was play video games between “bong rips.” Maybe a super-intelligent AI would do the same!
In a podcast and an article, Murray Shanahan talked about the fact that Large Language Models don’t know things the way we know things; they know things statistically. And he speculates that general intelligence may require embodiment in some significant way.
I would also include researchers who are skeptical of AGI in this list. Since they doubt how close we are to AGI, they doubt that it poses a special danger. This list includes Melanie Mitchell, Yann LeCun (of NYU and Meta), and Rodney Brooks (of MIT). Closer to my field, Warren Powell also seems skeptical.
So, where do I fall?
I’m firmly in the optimistic camp.
Why?
The skeptical thinkers seem to have the stronger arguments, and we appear to be far from anything close to general intelligence.
For sure, the new tools are powerful and will have a disruptive impact. But I’m optimistic that the benefits will far outweigh the downsides (like deepfakes).
Also, it feels like we had some of the same worries around 2016 at the peak of self-driving excitement. Now the emerging self-driving technology seems more like a tool than AGI. I think it is likely that ChatGPT (and other LLMs) will feel just like a tool in a few years.
I’ll end with a quote from the Tyler Cowen podcast:
“It will be a fascinating future. Very weird in many ways…but we should all be ready for it.”
And there's no stopping the train. The only way out is through!