AI Doom vs Optimism Part 2: Two Types of Doom, Four Ideas for Each Type
Last month I posted an article listing the main people and ideas on each side of the doom-versus-optimism AI debate.
Since then, I have had to prepare a talk on AI for a general audience. For that audience, I needed a simpler way to explain what they hear in pieces from the media.
Here’s my attempt.
First, it is important to distinguish the two types of doom. They often get mixed together when people talk about AI fears, but they are two separate things.
One type of doom is that AI will develop superintelligence and kill us all. This camp worries about literal human extinction, on the scale of a giant asteroid strike or a global nuclear war.
The second type of doom is that AI might be disruptive and painful.
For the first type of doom (imminent extinction), I see four camps, with two in direct opposition. Here are the four camps, each with a person I pick as its leader.
Camp 1, Eliezer Yudkowsky: We will soon go extinct. It may be too late to stop it.
This camp firmly believes that extinction from AI is a very serious risk to humanity. The analogy I’ve heard is that continuing to develop AI is like the Neanderthals inviting the smarter Homo sapiens into their camps; that invitation ended in the Neanderthals’ extinction.
They see the same thing happening with AI. At some point, the AIs will develop superintelligence, have their own goals, and eliminate us.
People in this group are worried enough that they suggest applying real-world violence to stop it: bombing the data centers that train these models and presumably going to war with rogue countries that host them.
This is not just one person. Smart people agree with this viewpoint, including those who signed the letter calling for a pause in development, Elon Musk, Rob Wiblin, Geoffrey Hinton, and Yuval Noah Harari. Not all of them think the risk is as great as Yudkowsky does, or advocate violence. But even someone who assigns only a 5% probability to something that might wipe out humanity is close to this camp.
Camp 2, Marc Andreessen: The direct opposition!
As far as I can tell, Andreessen is leading the opposition. In interviews, he says he was fed up with the extreme doomers (Camp 1) and alarmed when they advocated airstrikes and war.
His opening salvo was the article “Why AI Will Save the World.”
And he keeps coming out punching in interviews. Since Camp 1 sounds scary, he comes out strong. He argues that machines don’t work that way, that some of the support for Camp 1 comes from companies seeking regulatory protection, and that the US had better get there before China. But his biggest punch is the claim that Camp 1’s arguments sound more like a cult or religion than scientific reasoning.
I lean toward this camp, but check out the arguments yourself.
Camp 3, Tyler Cowen: Where is the doomsday model?
Cowen provides cover for Andreessen in Camp 2. He doesn’t go as far as Andreessen, but he says he isn’t going to worry until the folks in Camp 1 produce a scientific model of how extinction would work.
For example, climate scientists have a model of global warming. People can debate whether that model is accurate or good, which allows a scientific conversation. Without such a model for the extinction scenario, we can’t even have a reasonable conversation.
Camp 4, Melanie Mitchell: What is the fuss all about? We aren’t even close to AGI.
After reading her book, Artificial Intelligence: A Guide for Thinking Humans, I realized she represents a different camp. The book makes a strong case that we are far from true human-like intelligence and that, while we are making nice advances, we may never get there. In that case, the whole argument that we’ll go extinct because of AI is just a fun philosophical discussion best left to late-night college dorm rooms.
On her Twitter feed, she pointed to an article by Blake Richards (and others) claiming that the extinction discussion is dangerous because it distracts from other priorities.
This leads us to the second type of doom: AI might be disruptive and painful.
This one is less controversial because the real threat is not AI itself but people using these tools. There can be debate about how much disruption and pain we’ll see. The counterargument is that these same tools bring a tremendous amount of good, so we need to minimize the harm.
In some sense, this fear relates to technology in general, not AI specifically.
Whole books have been written about this, so others will frame it better than I can. Here are four general categories of worry.
The first fear is that AI technologies will decrease privacy. In the hands of authoritarian governments, this could be bad news; advances in facial recognition, for example, make surveillance easier. Some claim that the US is already in another kind of Cold War with China over this use of AI.
The second category of fear concerns data and algorithms making decisions about people. Cathy O’Neil’s book Weapons of Math Destruction was written long before ChatGPT, and her points remain true. At the risk of being too simplistic: when we use data and algorithms to make decisions, especially decisions that impact individuals, we had better have some guardrails. The data and algorithms will have biases and bugs, and they may be hard to explain. Books and ideas like this have spawned numerous documents and regulations on AI and data ethics.
The newest category of fear, arising from ChatGPT (and drawing applications like DALL-E and Midjourney), is deep fakes (like the picture of the Pope above). The latest advances in AI have made deep fakes much easier to produce. This will surely lead to more fraud and play a role in politics. Like the previous two, this is a real issue that people will need to work on.
The final category of fear is the oldest when it comes to technology: jobs. I showed a picture of an ATM because when ATMs came out, there was a fear that tellers would all lose their jobs. Roughly 30 years after their introduction, there were more teller jobs than before (although tellers were doing different things) because ATMs changed the economics of bank branches: branches became cheaper to run, so banks opened more of them. I don’t think anyone knows how this latest round of AI technology will change the nature of jobs. Some industries and jobs will see a tremendous increase in productivity and growth, others will be disrupted, and the economy may or may not grow faster than expected.