This conversation is part of our AI Revolution series, which features some of the most impactful builders in the field of AI discussing and debating where we are, where we’re going, and the big open questions in AI. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.
To date, a handful of large companies have captured the value created by advances in AI. In this keynote presentation from AI Revolution, CFI general partner Martin Casado explains why, with generative AI, that’s changing.
Martin: I think we should all get ready for a new wave of iconic companies. I don’t think we know what they’re going to look like, but the economics are just too compelling. What has the narrative been for AI over the last 50 years? The narrative is this episodic thing, with summers and winters and all of these false promises.
I remember when I started my Ph.D. in 2003. From my cohort, I would say 50% of the people were doing AI. This was when Bayesian stuff was super popular. Then within 3 years, everybody was like, “AI is dead.”
We’ve had this love-hate relationship with AI for a very long time, but if you look at all of the graphs, we’ve made a tremendous amount of progress over the last 70 years. Along the way, we’ve solved a lot of very real problems. Way back in the 60s, we were doing expert systems, which are still used for diagnosis. We’re very good at beating Russians at chess. We’re doing self-driving cars, we’re doing vision. There’s a lot that we’ve solved, so much so that it’s become a cliche. Every time we solve a problem, we’re like, “That wasn’t really AI.” We just keep moving the goalposts.
Not only that, it’s been a while now that we’ve been better than humans at some very important things, for example, perception or handwriting recognition. It’s been about 10 years since we’ve been better than humans at entity identification. We’ve also gotten very good at monetizing this, particularly for large companies. As we all know, there’s been a ton of market cap added to companies like Meta and Google and Netflix by using AI.
I think the question we should all ask ourselves is: why hasn’t this resulted in an actual platform shift? By platform shift, I mean, why has the value accrued to the incumbents, and why hasn’t there been a new set of AI-native companies that have come up and displaced them? We saw that happen with mobile, with the internet, and with the microchip.
For the first part of this talk, I’m going to argue that the capabilities have all been there, but the economics just haven’t for startups.
If you step back and look at the standalone case for AI economics—not what a big company can extract from it, but what a startup can—it’s actually not been great. A lot of the sexier use cases are pretty niche markets. It’s great to beat Russians at chess, but that’s not a market. Maybe it’s a useful tool that you can apply to solving bigger problems, but that is not, in itself, a market.
I actually think the second point is the most important and it’s pretty subtle. Many of the traditional AI use cases require correctness in the tail of the solution space, and that’s a very hard thing for a startup to do for a couple of reasons.
One of the reasons is that if you have to be correct and you’ve got a very long, fat tail, either you do all of the work technically or you hire people, and often, you hire people. When startups start hiring people to provide the solution, that’s a variable cost. The second is that because the tails of these solutions tend to be so long (think of something like self-driving, where there are so many exceptions that could possibly happen), the amount of investment needed to stay ahead increases and the value decreases. You have this perverse economy of scale.
We’ve actually done studies on this and it turns out many startups that try to do this end up with non-software margins. They have lower margins and they’re much harder to scale. With robotics comes the curse of hardware, classically a very difficult thing for startups to do.
If you think about the competition for most AI use cases, not traditional machine learning but AI, it tends to be the human. Traditionally, the human brain is really good at perception. Our brains evolved over 100 million years to do things like pick berries and evade lions, and they’re incredibly efficient at doing that. This leads to something that most investors know, which we call the dreaded AI mediocrity spiral.
What is it? It’s very simple. Let’s say a founder comes in and they want to build an AI company to automate a bunch of stuff. Of course, correctness is really important and they want it to look good first, so they hire people to do it instead of the AI. Then they come to us, we invest in them, and I join the board. Then I say, “Listen, this is great, you need to grow.” And they’re like, “Oh, man, we need to grow, this AI is hard, the tail is very long. I’m going to hire more people.”
Now you’re on this treadmill of continuing to invest and hiring people. Automation doesn’t often happen, and if it does, it’s only part of the solution. This is one of the reasons why so many startups that have tried to do this haven’t had breakaway economics. The value accrues to large companies that can actually seek these perverse economies of scale.
I think one of the great examples, just because it’s illustrative of so many of these things, is the robotaxi. Robotaxis are fantastic, but we’ve been working at this for decades. We’ve invested $75B and the unit economics are still not on par with a human Uber driver, so it’s remained in the domain of large companies.
If you look back at the slide I showed earlier, other than the niche markets, all of these problems apply. There’s hardware, and there’s an incredibly long tail with robotaxis. Even if you strip away everything and look just at the processing unit that does self-driving on today’s high-end systems, with the amount of power it consumes and the hardware that’s required, it’s only maybe a factor of 10x better than a human being, and the power draw is much, much higher.
The economics aren’t that compelling and they’re certainly not the type of economics you see in the case of market transformations. Market transformations aren’t created with economics that are 10 times better; they get created when they’re 10,000 times better.
What is the learning from the last 70 years? It’s not that the technology doesn’t work. It’s not that we can’t solve the problems. All of that has always been the case. It’s not even that we can’t monetize it. Big companies are great at monetizing it. It’s that it’s very, very hard for startups to break away. If startups can’t break away, you don’t get a transformation.
So why are we all here? This wave is very, very different. You have a model, and that model will take in natural language, like English, and will output something. It can be a conversation, an image, it can be whatever.
As we know, this has been pushed into many different areas where we’re already seeing productive, viable businesses. I like to call them the 3Cs. There’s creativity: any component of a video game, say, that you can automatically generate. There’s companionship, which is more of an emotional connection: you have characters to talk to, so it fills a social role. Then there’s a class that we call co-pilot, which effectively helps you with work or personal tasks. These are already emerging as independent classes.
Remember the slide that I showed you before of the properties of AI that made it difficult to build a startup company? None of these really apply to this current model. Let’s go through them.
The first is that the markets this is being applied to are large, like arguably all of white-collar work. Even video games and movies are something like a $300 billion market. These are massive, massive markets.
The second is the most important point and maybe the most subtle. In this domain, correctness isn’t as much of an issue, for 2 reasons. One of them is that when you’re talking about creativity, the first C, there is no formal notion of correctness. What does it mean to be incorrect for a fiction story or a video game? You want to make sure the characters have all their fingers, but even then, do you really, in sci-fi? We have absolutely adapted to use cases where correctness is not a huge issue. The second is a little more subtle, which is that the behavior that has developed around these things is iterative. The human in the loop that used to sit inside a central company is now the user. It’s not a variable cost to the business anymore; the human in the loop has moved out. As a result, you can do things where correctness is important, like developing code, because the process is iterative. The amount of error that accrues gets smaller because each step covers a smaller piece of the problem, and you’re also constantly getting feedback and correction from the user.
The primary use cases that we talked about are clearly mostly software-based, at least for now. This does apply to robotics, but that’s not what’s really taking off.
I want to talk to you about this brain portion. I’m not a neuroscientist, but I think this is very interesting. For these types of tasks, the silicon stack is way better than the carbon stack. If you think about traditional AI, a lot of it is doing stuff like the 100M-year-old brain—the one that’s fleeing predators or picking strawberries. That’s very, very hard to compete with.
If you look at the CPU-GPU setup in a self-driving car, some of these kits draw 1.3 kilowatts, while the human brain runs on 15 watts. Economically, that’s very tough to compete with. The new-gen AI wave is competing with the creative language center of the brain, which is 50,000 years old and much less evolved. It turns out it’s incredibly competitive, so much so that you actually have the economic inflection we look for in a market transformation.
Let’s break down the numbers. Let’s say that I, Martin, wanted to create an image of myself as a Pixar character. If I’m using one of these image models, let’s say the inference cost is 1/10th of a penny and it takes one second. Compare that to hiring a graphic artist for 100 bucks and an hour of work—I’ve actually hired graphic artists to do things like this and it tends to be a lot more money than that, but let’s conservatively say that—and you’ve got 4 to 5 orders of magnitude difference in cost and time. These are the types of inflections you look for as an economist when there’s actually going to be a massive market dislocation.
I’ll give you another example, from Instabase. Let’s assume that you have a legal brief in a PDF. You throw it into an unstructured document LLM and then you ask questions about that legal brief. Again, the inference cost is 1/10th of a penny, more or less, and the time to complete is maybe one second. But again, it’s cheap relative to the cost of a lawyer.
As someone who has actually spent a lot of money on lawyers, I want to point out a couple of things. The first is it takes more than one hour to iterate on this. The second is they’re not always correct. In fact, built-in for any interaction I have with a lawyer is cross-checking and double-checking the work. Again, we have 4 to 5 orders of magnitude difference in cost and time.
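To make the arithmetic concrete, here is a minimal sketch of the back-of-the-envelope comparison behind both examples. It uses the illustrative numbers from the talk; the lawyer’s hourly rate and turnaround time are my own assumptions for illustration, not figures Martin gives.

```python
# Back-of-the-envelope version of the cost/time comparisons above.
# All numbers are illustrative; the lawyer's rate and hours are assumptions,
# not figures from the talk.
import math

def orders_of_magnitude(human: float, machine: float) -> float:
    """Powers of ten separating the human figure from the machine figure."""
    return math.log10(human / machine)

examples = {
    # name: (human cost $, human time s, machine cost $, machine time s)
    "Pixar-style image": (100.0, 3600.0, 0.001, 1.0),
    "legal brief Q&A":   (300.0, 3 * 3600.0, 0.001, 1.0),  # assumed $300/hr, ~3 hours
}

for name, (h_cost, h_time, m_cost, m_time) in examples.items():
    print(f"{name}: cost gap ~{orders_of_magnitude(h_cost, m_cost):.1f} orders of magnitude, "
          f"time gap ~{orders_of_magnitude(h_time, m_time):.1f} orders of magnitude")
```

Run as-is, this prints roughly a 5 order-of-magnitude gap in cost and a 3.5 to 4 order-of-magnitude gap in time for both cases, which is the kind of inflection the talk is pointing at.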
If you want an example of how extremely nutty this can get, I see no reason why you can’t generate an entire game. There are companies today working on it: the 3D models, the characters, the voices, the music, the stories. If you compare the cost of hundreds of millions of dollars and years versus a few dollars of inference, now we have internet- and microchip-level asymmetries in economics.
Now, I’m not saying this is going to happen. I’m not saying this will happen soon—we’re not there yet. What I am saying is this is the path that we’re on and these types of paths are what you look for with big transformations.
It’s little wonder we’re seeing the kind of takeoff that we have. These are the fastest-growing open-source projects and products, and some of the fastest-growing companies we’ve seen in the history of the industry. It’s because, again, it’s less about capability (we always focus on the capabilities) and much more that the economics work. Whenever the marginal cost of something drops this much, the industry changes. By marginal cost, I mean that for whatever good you’re producing, the price of producing more of that good converges on $0.
This may sound hyperbolic, but I really think that we could be entering a third epoch of computing. I think that the first epoch was the microchip. Before the advent of the computer, you actually had people calculating logarithm tables by hand. That’s where the word comes from: they were computers, they would compute. Then we created ENIAC along with other machines.
Let’s look at ENIAC. ENIAC was 5,000 times faster than a human being doing the same calculations. There’s your 3 to 4 orders of magnitude, and that ushered in the compute revolution. This gave us a number of companies that were either totally transformed, like IBM, or totally net new.
The microchip brought the marginal cost of compute to $0, and the internet brought the marginal cost of distribution to $0. In the 90s, when I wanted to get a new video game, I would go to a store and buy a box. If you look at the full cycle of that box getting from whoever sells it to you, that’s weeks. I don’t have the math up here, but if you actually calculate the price per bit over DSL in the late 90s, it’s again about 4 or 5 orders of magnitude better than actually shipping it.
I think there’s a pretty good analog where you say these large models actually bring the marginal cost of creation to $0, for some very fuzzy, vague notion of what creation means. Like in the previous epochs, you had no idea what new companies were going to be created. You just knew something was going to happen. Nobody predicted Amazon, nobody predicted Yahoo. I remember when this happened. I think we should all get ready for a new wave of iconic companies. I don’t think we know what they’re going to look like, but the economics are just too compelling.
I have one final point: there are always questions when you have market dislocations, and they’re staring you in the face. What happens to the jobs? What happens to people? Because this is an economics talk, I’m going to give an economic answer.
There’s something called Jevons Paradox. Very simply, Jevons Paradox says that if the demand is elastic, even if you drop the price, the increase in demand will more than make up for it. Normally, it far more than makes up for it. This was absolutely the case with the internet. You get more value and more productivity.
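For a toy illustration of what “the demand will more than make up for it” means, here is a minimal sketch using a constant-elasticity demand curve. The elasticity value and prices are made up purely for illustration; they are not numbers from the talk.

```python
# Toy illustration of Jevons Paradox with a constant-elasticity demand curve.
# Q = k * P**(-e): quantity demanded as a function of price.
# k, the elasticity, and the prices below are invented for illustration only.

def demand(price, k=1000.0, elasticity=1.5):
    """Constant-elasticity demand: quantity consumed at a given price."""
    return k * price ** (-elasticity)

for price in [10.0, 1.0, 0.1]:
    q = demand(price)
    print(f"price ${price:>5.2f} -> quantity {q:>10.1f}, total spend ${price * q:>8.1f}")

# With elasticity > 1, each 10x price drop raises quantity by ~31.6x,
# so total spend (price * quantity) grows even as the unit price collapses.
```

With an elasticity greater than 1, total consumption grows faster than the price falls, which is the dynamic being described here: cheaper creation leads to far more creation, not less spending overall.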
I personally believe that when it comes to creating any creative asset, or any sort of work automation, the demand is elastic. I think the more that we make, the more people consume. We’re very much looking forward to a massive expansion in productivity, a lot of new jobs, a lot of new things. I think it’s going to follow just like the microchip and just like the internet.
With that, I welcome you all here today. I think there’s a lot of work for all of us to do, and thank you so much for taking the time.