Five AI Lessons from Lex Fridman's Interview with Elon Musk
Here are my five big takeaways from their discussion of self-driving cars and the AI team behind them:
One, should I re-think my skepticism on self-driving cars?
In the interview, Musk claimed they were a year away. The few Tesla owners I've talked to have told me the self-driving feature is good. On the other hand, people deep in the field, like Rodney Brooks, think we are still far from fully self-driving cars. I've heard others who line up with Brooks.
I am more inclined to follow Brooks, but this interview made me doubt that position.
Two, big AI projects need big teams.
Musk didn't give specifics, but hinted at a large team with deep learning experts, sensor experts, data engineers, and computer scientists. At one point, Fridman tried to name the leader of the team, and Musk pushed back to emphasize how big the team is.
If I had to bet, I would say that this is one of the bigger AI engineering teams out there. This is another data point that shows that AI projects require large teams.
Most companies aren’t building self-driving cars, but to accomplish something big in your company, you may need more than just a few data scientists.
Three, speed matters.
Tesla is writing all their code in C to maximize speed. They even wrote their own C compiler to gain additional speed. (That also tells you something about the size of the team.)
On a much smaller scale, I saw something similar. When I was at LogicTools in 1998, we built our own custom solver for supply chain optimization problems. We did this because we could easily beat the leading commercial solver, CPLEX. And speed mattered to our customers.
But CPLEX's general solver was improving rapidly every year, and the compounding impact of those improvements meant it soon surpassed ours. We ditched our custom solver and replaced our engine with CPLEX.
I’m guessing that Tesla will keep an eye on third party components and use them when they surpass what Tesla can build.
In emerging fields, the make-versus-buy decision can change quickly.
Four, variability matters.
Variability can be a problem in so many systems (like factories and supply chains).
In Tesla’s case, all the sensors have to pull together what they are “seeing” into one view so the car can decide what to do. The variability in the time to do this causes many problems.
I’m guessing this can mean that there is a trade-off in algorithms between accuracy and variability. That is, overall system performance may be better with a slightly less accurate algorithm that has less variability.
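To make that trade-off concrete, here is a toy model (my own illustration, not anything Tesla described): assume an algorithm's latency is roughly normal, and that a sensor-fusion result only counts if it arrives before a fixed deadline. The numbers, the deadline, and the normal-latency assumption are all hypothetical.

```python
import math

def on_time_fraction(mean_ms, sd_ms, deadline_ms):
    """P(latency <= deadline) under a toy normal latency model."""
    z = (deadline_ms - mean_ms) / sd_ms
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def effective_accuracy(accuracy, mean_ms, sd_ms, deadline_ms):
    # A result only helps the system if it arrives before the fusion deadline,
    # so late answers are treated as worthless.
    return accuracy * on_time_fraction(mean_ms, sd_ms, deadline_ms)

# Hypothetical algorithms: A is more accurate but has jittery latency;
# B is slightly less accurate but very consistent.
a = effective_accuracy(accuracy=0.95, mean_ms=30, sd_ms=25, deadline_ms=50)
b = effective_accuracy(accuracy=0.90, mean_ms=30, sd_ms=3, deadline_ms=50)
print(f"A: {a:.3f}  B: {b:.3f}")
```

With these made-up numbers, B's effective accuracy (about 0.90) beats A's (about 0.73), even though A is the better algorithm on paper: A misses the deadline often enough that its accuracy advantage evaporates.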
Most of us won’t be building such detailed systems, but we all have to deal with similar trade-offs.
Five, “software is eating the world” and deep learning may be eating software.
Marc Andreessen famously wrote that software is eating the world; that is, software is replacing much of the physical world. For example, software now drives cars.
Now, deep learning neural networks may be eating software. Musk mentioned that as the deep learning models get better, they are replacing huge chunks of the self-driving code.
Here is what I'm guessing is happening: Tesla is treating this as an engineering problem, not a science project. So, where deep learning algorithms aren't good enough, they write massive amounts of code to handle all the rules needed to drive a car. As the deep learning algorithms improve, they swap in a learned model for that part of the code.
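The engineering pattern I'm imagining here is a common one: give the hand-written rules and the learned model the same interface, so either can be plugged into the pipeline. Everything below is a hypothetical sketch of that pattern; the task, function names, and thresholds are all invented for illustration.

```python
from typing import Callable, Dict

# Toy stand-in for one perception sub-task: label an obstacle from features.
# Both implementations share one signature, so the pipeline can swap them.
Classifier = Callable[[Dict], str]

def rule_based_classifier(features: Dict) -> str:
    # Hand-written rules: lots of cases, predictable, brittle.
    if features.get("height_m", 0) > 2.0:
        return "truck"
    if features.get("moving", False):
        return "pedestrian"
    return "static_object"

class PerceptionPipeline:
    def __init__(self, classifier: Classifier):
        self.classify = classifier

# Start with the rules...
pipeline = PerceptionPipeline(rule_based_classifier)
label_before = pipeline.classify({"height_m": 2.5})

# ...and once a trained model beats the rules on this sub-task, swap it in.
# (fake_model is a placeholder for a real learned model.)
fake_model: Classifier = lambda f: "truck" if f.get("height_m", 0) > 1.8 else "car"
pipeline = PerceptionPipeline(fake_model)
label_after = pipeline.classify({"height_m": 2.5})
```

The design point is that the rest of the system never knows which implementation is behind the interface, so chunks of rule code can be retired one sub-task at a time.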
Deep learning eating software is not a new prediction. This is just a good example of it.
I enjoyed the full interview. They covered how money is really just information, how a self-sustaining colony on Mars needs reusable ships, first-principles thinking, and much more. (I skipped the meme section at the end because I just had audio.)