When people talk about AI, they often mean many different things.
It is impossible to define AI as just one thing; it is an umbrella term referring to many technologies.
Instead, I prefer to think about different frameworks for how the term is being used. These frameworks make it easier to understand different aspects of AI. They also make it easier for leaders to see how different technologies within AI can solve specific problems.
Here are five different frameworks that I’ve found helpful in understanding the space. I use these in my classes and executive education sessions.
Framework #1: My default framing
People can mean four different things when they talk about AI.
AI means Artificial General Intelligence (AGI).
AI means solving complex problems with deep learning, like self-driving cars and protein folding.
AI means LLMs. This is the most recent meaning, one I had to add after ChatGPT became an instant sensation.
AI means Practical AI. This is a collection of methods that we used to call Predictive and Prescriptive AI back in 2017.
For more details, see this post.
Framework #2: Ganesh Ramakrishna’s General Use Cases of AI
In a webinar sponsored by Primary, a seed VC firm, Ganesh defined AI in terms of three general use cases. These are:
AI for the Physical. This is using deep learning for robotics and self-driving cars.
AI for Process Automation. This uses technologies like LLMs (and other methods) to automate processes such as booking loads, handling back-office tasks, or answering customer calls and emails. (A small sketch of this follows the list.)
AI for Decisions. This is the use of various algorithms to make a business decision. This is a lot like my definition of Practical AI above.
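To make the process-automation use case concrete, here is a hypothetical sketch of an LLM routing an inbound customer email to the right workflow. It assumes the openai Python package (version 1.0 or later) and an API key in the environment; the categories and prompt are made up for illustration, not a production design.

```python
# A hypothetical sketch of "AI for Process Automation": using an LLM to
# classify an inbound customer email so it can be routed to an existing
# workflow. Assumes the openai package (>= 1.0) and OPENAI_API_KEY set
# in the environment; the labels and prompt are invented for this example.
from openai import OpenAI

client = OpenAI()

email = "Hi, my shipment #4412 hasn't arrived yet. Can you check on it?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Classify the customer email as exactly one of: "
                    "TRACK_SHIPMENT, BILLING, NEW_ORDER, OTHER. "
                    "Reply with the label only."},
        {"role": "user", "content": email},
    ],
)

label = response.choices[0].message.content.strip()
print(label)  # e.g., TRACK_SHIPMENT -> hand off to the tracking system
```

Much of this kind of automation follows the same pattern: classify or extract with the LLM, then hand the result to an existing system that does the deterministic work.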
I like this because it suggests different approaches to different problems in an organization.
Framework #3: Warren Powell’s Seven Levels of AI
Warren Powell breaks down AI into seven different levels. His paper on this is worth a read.
In addition to the seven levels, he says there are two broad ways to define AI: one, making computers behave like humans, and two, making computers smarter than humans. He points out that for structured problems (like scheduling or routing trucks), computers are already (and have been for a long time) smarter than humans.
Here are his seven levels (the picture is from his paper).
Note that his top level, Level 7, is what I called AGI above. He might get some pushback from the broader AI community for placing LLMs at Level 4, but his paper makes a case for this. Of historical interest: in the early days of AI research, people thought rules-based logic (Level 1) would lead us to AGI. No one believes this anymore, but it is still a useful technology.
I like his approach because the framework highlights the value of optimization and decision-making. He notes that this does not involve training with data but instead requires us to build a model of the physical system. He writes frequently about the importance of sequential decision problems (see here).
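To see what building a model of the system (rather than training on data) looks like, here is a minimal sketch of a decision model written as a linear program with scipy. The warehouses, stores, costs, and quantities are all made up; the point is that the model comes from the structure of the problem, with no training data anywhere.

```python
# A minimal sketch of "decisions from a model of the system": ship goods
# from two warehouses to two stores at the lowest cost. All numbers are
# invented for illustration. Requires scipy (pip install scipy).
import numpy as np
from scipy.optimize import linprog

# Decision variables: x[i][j] = units shipped from warehouse i to store j,
# flattened as [x00, x01, x10, x11].
cost = np.array([4.0, 6.0, 5.0, 3.0])  # per-unit shipping costs (assumed)

# Supply constraints: each warehouse ships no more than it has on hand.
A_ub = np.array([
    [1, 1, 0, 0],  # warehouse 0 ships x00 + x01
    [0, 0, 1, 1],  # warehouse 1 ships x10 + x11
])
b_ub = np.array([80.0, 70.0])  # units on hand (assumed)

# Demand constraints: each store receives exactly what it ordered.
A_eq = np.array([
    [1, 0, 1, 0],  # store 0 receives x00 + x10
    [0, 1, 0, 1],  # store 1 receives x01 + x11
])
b_eq = np.array([60.0, 50.0])  # units ordered (assumed)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)  # optimal shipment plan and its total cost
```

Nothing here was learned from data: the supply, demand, and cost structure encode what we know about the system directly, and the solver finds the best decision.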
Framework #4: Marc Andreessen’s idea that AI is a new type of computer
This framework is a different way to think about Deep Learning (and LLMs). I first heard him explain it in an interview on the Tetragrammaton with Rick Rubin podcast (starting around the 2-hour, 8-minute mark).
The idea is this: in the early days of creating computers, there were two basic potential architectures. The first is that the computer is a calculating machine. The second is that the computer is a neural network (like our brains).
At the time, engineers didn’t know how to create an effective neural network, so they built computers as calculating machines. This influenced us to think about computers as ultra-precise machines with no creativity. To understand how “ultra-precise” we expect computers to be, remember the 1994 Pentium bug, the flaw in an Intel chip that caused rare errors in floating-point division? The fact that a computer didn’t answer exactly correctly was a huge national news story.
Engineers have now figured out how to make neural networks work (ChatGPT). Andreessen argues that we shouldn’t think of this as just another application running on our existing computers but as a new type of computer. These neural network computers aren’t supposed to be precise, but they are creative. In Warren Powell’s language, they are starting to behave more like humans.
This framework shows us that we should use these two types of computers in different ways. They don’t compete with each other, just like your engineers don’t compete with your sales team; they do different things.
Framework #5: Yuri Balasanov’s 2 x 2 x 2 matrix of AI tools
I teach with Yuri. When we talked about frameworks, he drew the following 2 x 2 x 2 graph on the board and said he preferred to think about the AI methods along the following dimensions:
The Y-axis is data vs. theory. For example, when we build a linear program optimization model, it is based on theory (the physics of the problem). An LLM is trained with data to find the patterns.
The X-axis is correlation vs. causation. Does the model return results that correlate with its training data (like an LLM does) or tell you why the answer came out as it did (like an optimization model)?
The Z-axis is deterministic vs. probabilistic. A classic linear program would be deterministic. Warren Powell’s sequential decision problems would be probabilistic because the decisions I make now depend on the likelihood of different events in the future. (A toy sketch of this contrast follows the list.)
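To illustrate the deterministic vs. probabilistic axis, here is a toy one-period inventory example with made-up numbers (far simpler than a true sequential decision problem). A deterministic model plans against a single point forecast; a probabilistic model weighs the likelihood of different demand outcomes, and the two give different answers.

```python
# A toy sketch of deterministic vs. probabilistic decisions: how much
# stock to order when demand is uncertain. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
price, unit_cost = 10.0, 4.0                # assumed economics
demand = rng.poisson(lam=100, size=20_000)  # assumed demand model

def profit(order_qty, demand):
    # Sell what you can, eat the cost of everything you ordered.
    return price * np.minimum(order_qty, demand) - unit_cost * order_qty

# Deterministic: treat the point forecast (mean demand) as certain.
deterministic_qty = int(demand.mean())

# Probabilistic: pick the order quantity with the best expected profit
# across the whole demand distribution.
candidates = np.arange(60, 161)
expected = [profit(q, demand).mean() for q in candidates]
probabilistic_qty = int(candidates[np.argmax(expected)])

print(deterministic_qty, probabilistic_qty)  # the two answers differ
```

With these numbers, the probabilistic model orders a bit more than the point forecast because a lost sale costs more than a leftover unit; that asymmetry is exactly what a deterministic model cannot see.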
This framework gives you insight into how technologies differ from each other. I’ll have to work with Yuri to create a detailed blog post on just this.