This conversation is part of our AI Revolution series, which features some of the most impactful builders in the field of AI discussing and debating where we are, where we’re going, and the big open questions in AI. Find more content from our AI Revolution series on www.a16z.com/AIRevolution.
Anthropic cofounder and CEO Dario Amodei unpacks how far scaling laws can take us and how AI can be used to improve AI.
Anj: I’m going to take you all back in time to about 3 years ago. You and Tom, one of your cofounders, gave me a call, and said, “Hey, we think we’re going to start Anthropic.” I asked you, “What do you think we need to get going?” You said, “Well, I think we can get by with like $500.” I said, “I think we can find $500K somewhere.” I remember you, deadpan, saying, “Dude, I’m talking about $500M.” That’s when I realized things were going to be a little bit different.
Most people here know you as the founder of Anthropic. I think it would be helpful to hear how you got there.
Dario: I started in a very different field. I was initially an undergrad in physics. I just wanted to understand the universe. AI wasn’t even on my radar. It seemed like science fiction. Near the end of my undergrad, I started looking very carefully at Moore’s Law. I read the works of Ray Kurzweil and felt like there was something there and that AI was really going to go somewhere. But I didn’t really know how to work on it.
The big thing in those days was support vector machines. It wasn’t anything that seemed very exciting. So, I decided to go to grad school in neuroscience because that was the closest thing to an intelligence that I could actually study. Near the end of grad school, I started to see all this stuff about AlexNet and Quark. Back then, this stuff was actually starting to work, so I ended up joining Andrew Ng’s group at Baidu, which worked on speech recognition. Then I was at Google Brain for a year, and then I was one of the first folks to join OpenAI. I was there for about 5 years, and that takes us to the founding of Anthropic.
Anj: What was it at that moment—when you and the team at OpenAI started publishing your first experiments around scaling laws, toward the end of that 5-year period you just talked about—that gave you so much confidence this was going to hold, when everybody else thought it was crazy talk?
Dario: For me, the moment was actually GPT-2 in 2019. There were 2 different perspectives on it. When we put out GPT-2, some of the stuff that was considered most impressive at the time was, “Oh, my God. You give it these 5 examples of English-to-French translation, fed straight into the language model as plain text. Then you put in a sixth sentence in English and it actually translates it into French. It actually understands the pattern.” That was crazy to us, even though the translation was terrible. It was almost worse than if you were to just take a dictionary and substitute word for word.
Our view was that this was the beginning of something amazing, because there’s no limit: you can continue to scale it up, and there’s no reason why the patterns we’d seen before wouldn’t continue to hold. The objective of predicting the next word is so rich, and there’s so much to push against, that it just absolutely has to work. Some people looked at it and were like, “You made a bot that translates really badly.” These were 2 very different perspectives on the same thing. We just really, really believed in the first perspective.
Anj: Famously, what happened then was you saw a reason to continue down that line of inquiry, which resulted in GPT-3. What do you think was the most dramatic difference between GPT-3 and the previous efforts?
Dario: I think it was much larger and scaled up to a substantial extent. The thing that really surprised me was the Python programming. The conventional wisdom was that these models couldn’t reason at all. But when I saw the Python programming, even though it was very simple, even though a lot of it was stuff you could memorize, you could put the model in new situations and it would come up with something that wasn’t going to be anywhere on GitHub. It was just showing the beginning of being able to do it. I felt that ultimately meant we could keep scaling the models and they would get very good at reasoning.
Anj: What was the moment at which you realized, “It’s sort of working at a prototypical level with reasoning, but we think, based on the Python programming, this is actually going to generalize much more broadly than we expect”? What were some of the signals there that gave you that conviction?
Dario: I think one of the signals was that we hadn’t actually done any work. We just scraped the web, and there was enough Python data on the web to get these good results. When we looked through it, maybe 0.1% to 1% of the data that we scraped was Python data. So the conclusion was, “Well, if it does so well with so little of the data and so little effort to curate it on our part, it must be that we can enormously amplify this.” That just made me think, “We’re getting more compute, we can scale up the models more, and we can greatly increase the amount of data that is programming.” We had so many ways to amplify this, so of course it was going to work. It was just a matter of time.
Anj: You and the team acted very strongly on that impulse to pursue scaling laws. We fast forward 2 years and it’s hard to fathom the sheer amount of progress that’s happened in 24 months. When we start looking out to the next 24 to 36 months, what do you think the biggest bottlenecks are in demonstrating that the scaling laws continue holding?
Dario: I think there are 3 elements: data, compute, and algorithmic improvements. I think we are on track. Even if there were no algorithmic improvements from here, even if we just scaled up what we have so far, I think the scaling laws are going to continue. I think that’s going to lead to amazing improvements that everyone, including me, is prone to underestimate. The biggest factor is simply that more money is being poured into it. By all accounts—I won’t give exact numbers—the most expensive models made today cost about $100M, plus or minus a factor of 2. I think that next year we’re probably going to see, from multiple players, models on the order of $1B. And in 2025, we’re going to see models on the order of several billion, perhaps even $10B. That factor of 100, plus the compute inherently getting faster (the H100s have been a particularly big jump because of the move to lower precision), means that if the scaling laws continue, there’s going to be a huge increase in capabilities.
Anj: You’ve pointed out consistently that if we just scale our current architectures, we get there. What do you think will end up unlocking performance while allowing for these models to be more efficient from an architectural perspective? Do you think we need a fundamentally new approach?
Dario: My basic view is that inference will not get that much more expensive. The basic logic of the scaling laws is that if you increase compute by a factor of n, you need to increase data by a factor of the square root of n and the size of the model by a factor of the square root of n. That square root basically means the model itself does not get that much bigger, and the hardware is getting faster while you’re doing it. I think these things are going to continue to be servable for the next 3 or 4 years. If there’s no architectural innovation, they’ll get a little bit more expensive. If there’s architectural innovation, which I expect there to be, they’ll get somewhat cheaper.
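To make that square-root relationship concrete, here is the back-of-the-envelope arithmetic, written out under the common approximation (our assumption, not a figure from the interview) that training compute grows roughly as parameter count times training tokens:

```latex
% Rough scaling arithmetic, assuming training compute C grows like
% parameters (N) times training tokens (D).
C \propto N \cdot D, \qquad
N \to \sqrt{n}\,N, \quad D \to \sqrt{n}\,D
\;\Longrightarrow\;
C \to \big(\sqrt{n}\,N\big)\big(\sqrt{n}\,D\big) = n\,C .
% Inference cost per token tracks N, so an n-fold increase in training
% compute makes serving only about sqrt(n) times more expensive, before
% any gains from faster hardware or architectural innovation.
```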
Anj: What is the skill set and the talent required to unlock those architectural innovations? For a long time this was not well understood, but you had this strong leaning toward physicists: 4 of your first 7 cofounders had physics backgrounds, not traditional AI or machine learning backgrounds. What is it about the training of physicists that made you so convinced?
Dario: There are 2 kinds of fields at any given point in time. There are fields where an enormous edifice of experience and accumulated knowledge has been built up and you need many years to become an expert in that field. The canonical example of that would be biology. It’s very hard to contribute groundbreaking or Nobel Prize work in biology if you’ve only been a biologist for 6 months.
Then there are fields that are very young or that are moving very fast. AI was, and still is to some extent, very young, and it is definitely moving very fast. When that’s the case, really talented generalists can often outperform those who have been in the field for a long time, because things are being shaken up so much. If anything, having a lot of prior knowledge can be a disadvantage. Because several of our cofounders were physicists, we thought that was at least one pool of people with a lot of raw talent but not necessarily experience in the field. That’s generally been borne out: we’ve hired a number of them, we have statistics on it, and it works.
Anj: I remember in the early days of building the company there was this very strong belief you had that, if we got enough physicists and a few infrastructure engineers in the same room, we could scale the quality of the output much, much faster than teams that might be bigger and better resourced. Fast forward to now, when you’re starting to become a fairly well-resourced company and team. What is the hardest part of your talent pool to maintain as you start scaling beyond that 100-person full-time employee mark?
Dario: As companies get larger, everything gets harder. Our general view is that talent density beats talent mass every time. On the commercial side, maybe less so on the research side, you just need to do things. You have a list of customers. You need one person to serve this customer and one person to serve that customer. You have a list of features and you need one person to implement this feature and one person to implement that feature. Those numbers add up. The challenge is to maintain that very high level of talent density as you scale. We’ve done very, very well at that so far. We always have debates among the leadership team. “Oh, my God, we’re growing too fast. We can’t possibly maintain the talent bar.” We’ve always managed to do it in the past, but it’s a constant tension.
Anj: It would be helpful to take a few minutes to hear your beliefs about constitutional AI, the training approach you proposed earlier this year. Then we can talk a little bit about the implications of what that means for the safety and future of these models.
Dario: The method that’s been dominant for steering the values and the outputs of AI systems, up until recently, has been RL from human feedback. I was one of the coinventors of that at OpenAI. Since then, it’s been improved to power ChatGPT. The way that method works is that humans give feedback on model outputs, such as which model outputs they like better. Over time, the model learns what the humans want and learns to emulate that. Constitutional AI, you can think of it as the AI itself giving the feedback. Instead of human raters, you have a set of principles, and our set of principles is our constitution. It’s very short; 5 pages. We’re constantly updating it. There could be different constitutions for different use cases, but this is where we’re starting from. Whenever you train the model, you simply have the AI system read the constitution and then look at some task: summarize this content, or give your opinion on X. The AI system will complete the task. Then you have another copy of the AI system ask, “Was this in line with the constitution or was it not?” At the end of this, if you train on it, the hope is that the model acts in line with this guide-star set of principles.
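As a rough illustration of the loop Dario describes (one copy of the model completes a task, a second copy checks the output against the constitution, and that AI-generated judgment, rather than a human rating, becomes the training signal), here is a minimal sketch. The `generate` helper, the prompts, and the sample principle are hypothetical stand-ins, not Anthropic’s actual constitution, training code, or API:

```python
# Minimal sketch of constitution-guided AI feedback, as described above.
# `generate` is a hypothetical placeholder for a language-model call; the
# principle text is illustrative, not Anthropic's actual constitution.

CONSTITUTION = (
    "Choose the response that is most respectful of fundamental human rights "
    "and that would be acceptable if shown to children."
)

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model (assumed, not a real API)."""
    raise NotImplementedError

def constitutional_feedback(task: str) -> dict:
    # One copy of the model completes the task.
    response = generate(f"Task: {task}\n\nResponse:")

    # A second copy reads the constitution and judges the completion.
    verdict = generate(
        f"Constitution:\n{CONSTITUTION}\n\n"
        f"Task: {task}\n"
        f"Response: {response}\n\n"
        "Was this response in line with the constitution? "
        "Answer YES or NO and explain briefly."
    )

    # The AI-generated verdict, not a human rating, becomes the training
    # signal (e.g. a preference label or reward) for the next round.
    return {"task": task, "response": response, "feedback": verdict}
```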
Anj: As a result of that approach, the constitution captures some set of values of its authors. How are you grappling with the criticism that you’re imposing your values on the system?
Dario: There are a couple of answers to that. First, when we wrote the original constitution, we tried to add as little of our own content as possible. We added things from the UN’s Universal Declaration of Human Rights, generally agreed-upon deliberative principles, and some principles from Apple’s terms of service. They’re very vanilla, like, “produce content that would be acceptable if shown to children,” or, “don’t violate fundamental human rights.” From there, we’re going in 2 directions. One is that different use cases demand different operating principles and maybe even different values. A psychotherapist probably behaves in a very different way from a lawyer. The idea of having a very simple core and then specializing in different directions from there is a way not to have a “mono-constitution” that applies to everyone. Second, we’re looking into the idea of some kind of deliberative democratic process whereby people can design constitutions.
Anj: To folks who aren’t privy to what’s going on inside of Anthropic, you can often seem paradoxical: you’ve found a way to scale efficiently and keep the scaling laws going, and at the same time you’re big advocates of making sure this doesn’t happen too fast. What is the thinking behind that paradox?
Dario: One of the things that most drives the trade-offs is—and you see it a bit in constitutional AI—that the best solutions to a lot of the safety problems almost always involve AI itself. There’s a community of very theoretically-oriented people that tries to work on AI safety, separate from the development of AI. My assessment of this—I don’t know if others would say it was fair—is that it hasn’t been that successful. The things that have been successful, even though there’s much more to do, are areas where AI has helped us to make AI safe.
Now, why would that happen? As AI gets more powerful, it gets better at most cognitive tasks. One of the relevant cognitive tasks is judging the safety of AI systems, eventually doing safety research. There’s a self-referential component to it. We even see it with areas like interpretability, looking inside the neural nets. We thought at the beginning—we’ve had a team on that since the beginning—it would be very separate, but it converged in 2 ways.
One is that powerful AI systems can help us to interpret the neurons of weaker AI systems. Again, there’s that recursive process. Second, interpretability insights often tell us a bit about how models work. When they tell us how models work, they often suggest ways that those models could be better or more efficient. These things are very intertwined with each other. We’re still working on frameworks for this, both regulatory frameworks and ones we follow ourselves, but one broad way that we’ve been thinking about it from the beginning, and will probably work on formalizing over the coming months and years, is this idea of safe scaling or checkpointing.
There could be an alternating process where you advance the level of capability, and then there’s a gate: if you want to get to the next level, you have to show that your model has certain safety properties.
Anj: What do you think is the biggest trade-off of taking the path of “let’s implement the gates”?
Dario: We need to be careful not to put in a bunch of red tape that isn’t necessary. If you have to fill out 1,000 pages of paperwork and get 15 different licenses from different bodies to make an AI system, that’s never going to work. That’s going to slow things down, and our adversaries, authoritarian countries, will get ahead of us. I don’t think we can do that. If you look at things like airplane safety or auto safety, to the extent that regulations have ever done a good job of balancing letting things move forward against “people could die if we get this wrong,” I think those are examples of getting it at least relatively right.
Anj: We only have a couple of minutes left, so I’m going to shift. You had a very busy summer of Anthropic releases: a 100K context window, and then Claude 2. You’ve been pretty vocal about scaling the models and then exposing them to real-world interactions. You’re building a whole ecosystem. There’s a room full of founders here who are trying to understand whether they can build on top of the Anthropic platform. What advice would you offer folks who are trying to figure out where Claude, Anthropic, and the roadmap are going?
Dario: One thing that people are starting to realize, but that is still underappreciated, is the longer context window and the things that come along with it. Retrieval and search really open up the ability of the models to talk to very large databases. One thing we say is, “You can talk to a book, a legal document, or a financial statement.” People have this picture in their minds of, “There’s this chatbot. I ask it a question and it answers the question.” But think of the idea that you can upload a legal contract and say, “What are the 5 most unusual terms in this legal contract?” Or upload a financial statement and say, “Summarize the position of this company. What is surprising relative to what this analyst said 2 weeks ago?” All this knowledge manipulation and processing of large bodies of data that take hours for people to read—I think much more is possible there than what people are doing today. We’re just at the beginning. That’s an area I’m particularly excited about, because there are a lot of benefits, and as for the costs we’ve talked about, I’m not too worried for this generation of models.
Anj: How long until infinite context windows?
Dario: The main thing holding back infinite context windows is that, as you make the context window longer and longer, the majority of the compute starts to be in the context window. At some point, it just becomes too expensive in terms of compute. We’ll never have literally infinite context windows, but we are interested in continuing to extend the context windows and to provide other means of interfacing with large amounts of data.
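To see why the compute migrates into the context window, here is a back-of-the-envelope sketch for a standard transformer. The model dimensions are made up for illustration, and the FLOP formulas are the usual rough approximations rather than figures from the interview:

```python
# Rough sketch of how attention over the context comes to dominate compute.
# The model dimensions below are hypothetical, chosen only for illustration.

def flops_per_forward_pass(n_params: float, d_model: int, n_layers: int, ctx: int) -> tuple[float, float]:
    """Approximate FLOPs for one forward pass over a full context."""
    dense = 2 * n_params * ctx                      # ~2 FLOPs per parameter per token
    attention = 2 * n_layers * d_model * ctx ** 2   # pairwise attention over the context
    return dense, attention

for ctx in (1_000, 100_000, 1_000_000):
    dense, attn = flops_per_forward_pass(n_params=50e9, d_model=8192, n_layers=64, ctx=ctx)
    share = attn / (dense + attn)
    print(f"{ctx:>9,} tokens: attention is {share:.0%} of total compute")
```

With these illustrative numbers, attention over the context is only about 1% of the compute at 1,000 tokens, roughly half at 100,000 tokens, and the large majority at 1,000,000 tokens, which is why ever-longer windows eventually become too expensive to serve.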
Anj: Well, we are out of time. Thank you, Dario.
Dario: Thank you for having me.
Dario Amodei is a cofounder and CEO of Anthropic, where he co-created the LLM Claude 2. He was previously at OpenAI, where he helped develop GPT-2, GPT-3, and RLHF, the technique that later powered ChatGPT.
Anjney Midha is a general partner at Andreessen Horowitz, where he invests in AI, infrastructure, and open source technology.