In a previous article, I laid out the four ways I see people use the term AI.
To summarize, the first three are quite specific. When people say “AI,” they sometimes mean Artificial General Intelligence (the research area working to replicate general human intelligence), Deep Learning algorithms for solving complex problems, or Generative AI (like ChatGPT or related text-to-image tools).
However, I argued that we need a fourth category. Businesses, books, and the media are full of references to AI that include many algorithms and techniques not covered by the three definitions above.
In that article, I proposed that many people use the term AI to refer to what we used to call Predictive and Prescriptive Analytics just a few years ago.
I called this category Practical AI. Some call it Enterprise AI.
Used this way, AI is a big tent: many algorithms and techniques fall under this definition. This shouldn’t be a surprise; the same was true of predictive and prescriptive analytics.
I got a lot of comments on the Practical AI section, which told me there is a need to expand on what is in this big tent and what it all means. Here are three ideas on what is in the tent and two thoughts on what it means:
First, the machine learning algorithms that became popular in the early 2010s would be in this tent. This would include logistic regression, k-nearest neighbor, k-means, decision trees, random forests, and many more you could look up with a Google search. Deep learning and large language models fit into this category (I know this means there is some overlap with the other definitions of AI).
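To make one item on that list concrete, here is a minimal from-scratch sketch of k-nearest neighbor. The two-cluster dataset and the choice of k=3 are invented for illustration; in practice you would reach for a library like scikit-learn rather than hand-rolling this.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # train: list of ((x, y), label) pairs; distance is plain Euclidean
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Tiny made-up dataset: two clusters labeled "A" and "B"
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]

print(knn_predict(train, (2, 2)))   # near the first cluster -> "A"
print(knn_predict(train, (8, 7)))   # near the second cluster -> "B"
```

The same handful of lines captures the idea behind most of the algorithms on the list: learn a mapping from historical examples, then apply it to new data.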
Second, what most often gets left out of AI discussions in the business media is optimization and, more generally, the field of Operations Research (OR). You are missing a big opportunity if you have an AI group that doesn’t include optimization and OR. For more information, Irv Lustig covered this in two podcasts, Optimization’s Essential Role in the AI Revolution and Integrating Optimization and Machine Learning.
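To illustrate what optimization means here, the toy sketch below solves a classic OR problem, assigning workers to tasks at minimum total cost, by exhaustive search. The cost matrix is invented, and real OR work uses proper solvers (linear and integer programming engines such as those Irv Lustig discusses) rather than brute force, which only works at this tiny scale.

```python
from itertools import permutations

# Hypothetical cost of assigning worker i to task j (rows = workers, cols = tasks)
cost = [
    [9, 2, 7],
    [6, 4, 3],
    [5, 8, 1],
]

def best_assignment(cost):
    """Exhaustively search all worker-to-task assignments for the cheapest one."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda tasks: sum(cost[w][t] for w, t in enumerate(tasks)))
    total = sum(cost[w][t] for w, t in enumerate(best))
    return best, total

assignment, total = best_assignment(cost)
# Worker 0 takes task 1, worker 1 takes task 0, worker 2 takes task 2
print(assignment, total)   # (1, 0, 2) 9
```

The point is not the search method but the framing: decisions (the assignment), constraints (one task per worker), and an objective (total cost). That framing is what an AI group misses when it leaves out OR.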
Third, the tent has room for new and established approaches, too.
Peter Schmidt pointed out that diagnostic analytics should be included. I agree. We did great work at Opex Analytics in this area (we called it root cause analytics). You can use algorithms to help predict the reason for failures, which is a valuable way to leverage your historical data.
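As a hedged illustration of root cause analytics, the sketch below ranks attribute values in a hypothetical maintenance log by how often runs with that attribute failed. The log, the attribute names, and the single-factor ranking are all assumptions made for the example; real diagnostic work would control for interactions between factors.

```python
from collections import defaultdict

# Hypothetical maintenance log: attributes of each run plus whether it failed
records = [
    {"machine": "M1", "shift": "night", "failed": True},
    {"machine": "M1", "shift": "day",   "failed": False},
    {"machine": "M2", "shift": "night", "failed": True},
    {"machine": "M2", "shift": "day",   "failed": False},
    {"machine": "M1", "shift": "night", "failed": True},
    {"machine": "M2", "shift": "day",   "failed": False},
]

def failure_rates(records):
    """Rank each (attribute, value) pair by the share of its runs that failed."""
    counts = defaultdict(lambda: [0, 0])  # (attr, value) -> [failures, total]
    for rec in records:
        for attr, value in rec.items():
            if attr == "failed":
                continue
            counts[(attr, value)][1] += 1
            if rec["failed"]:
                counts[(attr, value)][0] += 1
    rates = {key: f / t for key, (f, t) in counts.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

for (attr, value), rate in failure_rates(records):
    print(f"{attr}={value}: {rate:.0%} failure rate")
# In this made-up log, shift=night tops the list at 100%,
# pointing to the night shift as the likely root cause.
```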
Jon Myers, the Executive Chairman & Founder of DataShapes, works on analyzing sensor and waveform data. This is an underserved area. In theory, this would be a good application for Deep Learning. In practice, many applications don’t have the needed data, the data is in remote places (space, deep sea, or areas of conflict), or there is a strong need for transparency and auditability (like in the defense area). This emerging area of innovative algorithms would fit under the tent.
The Davenport and Mittal book All in on AI included established technologies like robotic process automation and rules engines (or expert systems). Before this summer, I wouldn’t have included these. But I’m okay with it now, which leads to the next point.
The fourth thing to know helps pull it all together. Some people might complain that using AI like this is too broad. However, I’m good with Practical AI being a big tent. We need a term to define this area, and AI (or Practical AI) seems reasonable enough, at least until something better comes along. My tests are simple: everything listed above is an algorithm that helps analyze data and make decisions, and all of it seems like it should belong to a company’s single AI group (or tightly connected groups).
The fifth point is one that Ken Fordyce frequently mentions on LinkedIn: When you solve real-world problems, you often mix and match many of the above algorithms to develop a good solution. There is no single algorithm that will solve your problems.
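As a small illustration of that mix-and-match idea, the sketch below chains a predictive step (a moving-average demand forecast) into a prescriptive one (choosing an order quantity that maximizes expected profit over a few demand scenarios). All the numbers, the three-scenario demand model, and the prices are assumptions invented for this example.

```python
# Step 1 (predictive): moving-average forecast of next-period demand
history = [102, 98, 95, 105, 100, 97]
forecast = round(sum(history[-3:]) / 3)   # average of the last 3 periods -> 101

# Step 2 (prescriptive): choose the order quantity maximizing expected profit
# over three equally likely demand scenarios around the forecast
scenarios = [forecast - 5, forecast, forecast + 5]
unit_price, unit_cost = 10, 6   # unsold units are assumed worthless

def profit(order_qty, demand):
    return min(order_qty, demand) * unit_price - order_qty * unit_cost

best_qty = max(range(80, 121),
               key=lambda q: sum(profit(q, d) for d in scenarios) / len(scenarios))
print(best_qty)   # 101
```

Neither step is interesting alone; the value comes from wiring the prediction into the decision, which is exactly the mixing and matching Ken describes.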
This is why Practical AI should be a big tent. Your AI leaders will add the most value when they understand all these algorithms and techniques, their strengths and weaknesses, and how to pull together teams with the right expertise.
I think Ken would agree that your Practical AI teams also need more traditional roles like data engineers, software developers, business translators, and product managers.
Well done article. Observe that Irv uses a broad definition of optimization, which is important. My experience over 50 years is that operations research’s starting point is the decisions the business or organization needs to make and the structure of the network (Karl Kempf’s term) that bounds those decisions. It then applies a search process to investigate alternatives. The “model” becomes the focal point that gets various groups to collaborate. If you don’t understand the Kempf-Sullivan decision grid, the discussion becomes a free-for-all.
Mike, I find this topic fascinating because it is so important to find a good, consistent framework and set of definitions to communicate this back up to the leaders, who many times aren’t close enough to it to understand it. I have seen the traditional three, descriptive, predictive, and prescriptive analytics, get adjusted to include diagnostic analytics. I have recently seen the addition of cognitive analytics, which looks to cover image recognition. I see that you are coming at this from the often abused and wide net of the term AI. I wonder if a useful separation is research vs. solving business problems. I believe the former would include Artificial General Intelligence, while the latter would include Generative AI and Practical/Enterprise AI. It is that last one that would be subdivided (optimization - you know you love that I put that 1st!, forecasting, prediction, NLP, image recognition, conversational AI, generative AI). All good stuff! Hope to run into you soon!