Everyone is talking about ChatGPT. This has brought even more energy to the entire AI movement.
With so many commentators equating ChatGPT with AI, it is worth pointing out three things I think people get wrong about AI:
One, they assume AI means super-human intelligence.
Science fiction writers and movies have made us believe AI means super-human intelligence. And often, this intelligence is embodied in a life-like robot.
This was good marketing. “Artificial” doesn’t naturally conjure up this idea. Artificial sweeteners or artificial Christmas trees aren’t assumed to have superpowers.
But with AI, it is different. As soon as there is a new advancement in the field of AI, many commentators immediately extrapolate and give it super-human capabilities. This allows them to claim that the advancement will change the way we all live and work, or even bring an end to humanity.
This leads to the next point.
Two, they don’t realize AI refers to many different things.
There is no agreed-upon definition of AI. Even people who think deeply about it disagree. And regardless of what the definition should be, people use the term to refer to different things.
The term AI became popular again after the 2012 breakthrough in image recognition. The algorithm behind that breakthrough was a deep neural network, which is loosely modeled on the way we think the brain works. Hence the name AI once again caught the popular imagination, stuck, and took on a life of its own.
So, at first (and still, to some), AI meant working with deep neural networks. Now, I hear people equate ChatGPT with AI.
But AI is more nuanced than that. Here are the different ways I see people use the term [1]:
AI means Artificial General Intelligence.
This area of basic research uses deep neural networks [2] to build a machine with general intelligence, like a human.
I am skeptical and think we are still far from this goal. But this research produces advances that have helped in many other areas.
And it might be OK to put self-driving cars into this bucket, although I suspect that if we get there, we'll have an algorithm that drives a car, not something with general intelligence.
AI means Generative AI.
This category grew out of AGI research. However, no one was talking about it this time last year.
First, DALL-E became popular. Then, ChatGPT was released in November 2022 and took off like nothing seen before.
Plus, I’ve heard chatter that many Crypto developers and builders are pivoting to Generative AI.
This topic couldn't be hotter or more hyped. You enter a prompt, and the algorithm generates a picture, a story, an answer, or code. You can also continue to prompt to improve the output or simulate a conversation.
This technology has the potential for a big impact [3].
However, it feels like a different category from AGI.
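To make the prompting loop concrete, here is a minimal sketch in Python. It assumes the openai package's pre-1.0 ChatCompletion interface (the one available when ChatGPT launched); the model name, prompts, and API key are illustrative placeholders, not anything from this article.

```python
# Minimal sketch of prompting a generative model with the openai
# Python package (pre-1.0 interface). Model name and prompts are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{"role": "user", "content": "Explain ATMs to a five-year-old."}]
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = response["choices"][0]["message"]["content"]
print(answer)

# To simulate a conversation, append the reply and prompt again.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Now make it shorter."})
response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(response["choices"][0]["message"]["content"])
```

Each round trip is just another prompt with the history attached, which is what makes the conversational feel possible.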
AI means Practical AI. (Or AI that is used by regular businesses and organizations.)
This definition is important because many businesses and organizations refer to something different from AGI or Generative AI when they talk about AI. Instead, they are referring to a wide range of algorithms for solving problems.
These algorithms include deep learning (and Generative AI, too, so the definitions overlap). But they also include regular machine learning, optimization (integer programming), simulation, and other techniques.
I like to point this out because if you are working with a business and they talk about AI, they are talking about more than AGI or Generative AI. And knowing this gives you a lot more to work with when coming up with innovative solutions.
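As one concrete illustration of this broader toolkit, here is a minimal sketch of an integer program, the kind of optimization a business might simply call AI. It assumes the open-source PuLP library, and the product-mix numbers are made up for illustration.

```python
# Minimal sketch of "Practical AI" as integer programming, using
# the PuLP library. The product-mix numbers are made up.
import pulp

prob = pulp.LpProblem("product_mix", pulp.LpMaximize)
tables = pulp.LpVariable("tables", lowBound=0, cat="Integer")
chairs = pulp.LpVariable("chairs", lowBound=0, cat="Integer")

prob += 30 * tables + 20 * chairs      # objective: total profit
prob += 6 * tables + 3 * chairs <= 40  # constraint: labor hours
prob += tables + chairs <= 10          # constraint: raw materials

prob.solve()
print(pulp.LpStatus[prob.status], tables.value(), chairs.value())
```

No neural network is involved, yet this kind of solver can drive real scheduling and planning decisions, which is often exactly what a business means by AI.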
Three, they forget lessons from economics.
When banks started rolling out ATMs about thirty to forty years ago, many predicted that bank employment would tank since tellers would be useless.
Thirty years later, there were more tellers and bank employees. What happened?
The ATMs changed the economies of scale. The employees didn’t give out as much cash, but automation allowed banks to open more branches to serve different needs. This led to an increase in employment.
The same thing is likely to happen with AI.
This idea is the theme of a book by economists: Prediction Machines (here is a link to a good podcast lecture by the authors [4]).
The book notes that the rise of AI means that predictions are getting much cheaper. They use the term “predictions” generically and claim many problems are predictions. For example, a self-driving truck is a series of predictions.
As predictions get cheaper, we'll use them in more places. This also means that we'll value complementary goods more (like judgments that make use of predictions), and it may change other aspects of strategy.
Like in the ATM example, people will be asked to do different things, and the economies of scale may change.
If self-driving trucks happen, they will make long-haul transportation cheaper. As we do more long-haul transportation, its complement, last-mile delivery, may become much more valuable. This could lead to a big increase in the need for last-mile drivers.
The book challenges us to reframe problems as predictions and see how that could change how we view our organizations.
I'm bullish on Generative and Practical AI. It is exciting to see all the possibilities. However, I'm not sure we'll ever have agreed-upon definitions, so I'll keep doing my part to try to make it all a bit clearer.
[1] At least for now, I think.
[2] I only follow this research from the sidelines.
[3] Here is an article I wrote before ChatGPT was released. Now, it seems that 90% of the business podcasts I listen to discuss potential use cases.
[4] The book and podcast have many more insights, making it worth a listen. Hat tip to Sara Hoormann for pointing out the podcast.