This transcript has been condensed and edited for readability and clarity.
David Haber: Marty, thank you so much for being here. We really appreciate it.
Marty Chavez: David, it’s a pleasure. Been looking forward to this.
David: Marty, you’ve had a fascinating career. Obviously, you played a really pivotal role in turning the Wall Street trading business into a software business, especially during your time at Goldman Sachs, and also now at Sixth Street. Maybe walk us through your career arc, and what is the through-line in those experiences?
Marty: Well, let me talk about a few of the things I did, and then the arc will become apparent. So, I grew up in Albuquerque, New Mexico. I had a moment, really, like the movie, “The Graduate,” when I was about 10, and my father put his arm around my shoulder and said, “Martin, computers are the future, and you will be really good at computers.” And this is 1974, and it was maybe not obvious to everybody. It was obvious to my father.
So, in New Mexico, you don’t have a ton of choices, especially at that time. It’s basically tourism and the military-industrial complex. And so, I went for the military-industrial complex. And my very first summer job when I was 16 was at the Air Force Weapons Lab in Albuquerque. The government had decided that blowing up bombs in the Nevada desert was really problematic in a lot of ways, and some scientists had this idea, crazy at the time, that we could simulate the explosion of bombs rather than actually detonating them. And they had one of the early Cray-1 supercomputers, and so for a little computer geek kid, this was an amazing opportunity.
And my very first job was working on these big Fortran programs that would use Monte Carlo simulation. So, I got an early baptism in that technique, and you would simulate individual Compton electrons being scattered out of a neutron bomb explosion and then calculate the electromagnetic pulse that arose from all that scattering. And my job was to convert this program from MKS units to electron rest mass units. And so, that certainly seemed more interesting to me than jobs in the tourism business, and so I did that. And then the next big moment was when I was a junior… Sorry. I went to Harvard as a kid, and I took sophomore standing.
So, you have to declare a major, a concentration right away if you take sophomore standing. And I didn’t know that. And I didn’t know what major I was gonna declare. It was gonna be some kind of science, for sure. And I went to the science center, and the science professors were recruiting for their departments. And I remember Steve Harrison sitting across a table saying, “What are you?” And I said, “I’m a computer scientist.” And I cannot believe he said this to me in 1981, but he said, “The future of the life sciences is computational.” And that was amazing, right, and so profound and so prescient. And I thought, “Wow. This must be true.” And he said, “We’ll construct a BioChem major just for you, and we’ll emphasize simulation. We’ll emphasize building digital twins of living systems.”
And so, I walked right into his lab, which was doing some of the early work on X-ray crystallography of protein capsids and working to set up the Protein Data Bank. Even back then, he wanted to solve the protein folding problem. And I remember he said it might take 50 years, it might take 100 years, and we might never figure it out. And that’s obviously really important because that Protein Data Bank was the raw data for AlphaFold, which later came in and solved the problem.
And so, the through-line is that for my entire career, I’ve been building digital twins of some financial or scientific or industrial reality. And the amazing thing about a digital twin is you can do all kinds of experiments, and you can ask all kinds of questions that would be dangerous or impossible to ask or perform in reality, and then you can change your actions based on the answers to those questions.
And so, for Wall Street, if you’ve got a high-fidelity model of your trading business, which was something that I, with many other people, worked on as part of a huge team that made SecDB happen, then you could take that model and you could ask all kinds of counterfactual or what-if questions. And as the CEO of Goldman Sachs, Lloyd Blankfein, who really commissioned and sponsored this work for decades, would say, “We are not predicting the future. We are excellent predictors of the present,” and I’ve been doing some variation of that ever since.
David: I know you ended up doing some graduate work in health care and in AI. How did you go from that into Wall Street?
Marty: I got so excited about these problems of building digital twins of biology that it seemed obvious to me that continuing that in grad school was the right thing to do. I actually wanted to go ahead and start making money, and I really owe it to my mom, who convinced me that if I didn’t get a PhD then, I was never gonna do it. I’m sure she was right about that. And so, I applied to Stanford. That was my dream school. And so, what happened is I was working on this program on artificial intelligence in medicine that had originated at Stanford under Ted Shortliffe, who was extremely well known, even back then, for building one of the first expert systems to diagnose bacterial blood infections.
And so, I joined this program, and a bunch of my colleagues in the program and I took his work and thought, “Can we put this work, this expert system inference, in a formal Bayesian probabilistic framework?” And the answer is you can, but the downside is it’s computationally intractable. So, my PhD was finding fast randomized approximations to get provably nearly correct answers in a shorter period of time.
So, this was amazing as a project to work on, but we realized pretty early on that the computers were way too slow to get anywhere close to the kinds of problems we wanted to solve. The actual problem of diagnosis in general internal medicine is you’ve got about 1,000 disease categories and about 10,000 various clinical laboratory findings or manifestations or symptoms. And this is a big problem.
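To make the flavor of those randomized approximations concrete, here is a minimal sketch of likelihood-weighted sampling on a toy one-disease, one-finding network. The probabilities, variable names, and structure are illustrative assumptions, not the actual diagnostic model from that work.

```python
import random

# Toy two-variable Bayesian network: disease -> finding.
P_DISEASE = 0.01                             # assumed prior P(disease)
P_FINDING_GIVEN = {True: 0.9, False: 0.05}   # assumed P(finding | disease)

def estimate_posterior(n_samples: int = 100_000) -> float:
    """Estimate P(disease | finding observed) by likelihood weighting:
    sample the unobserved variable from its prior, then weight each
    sample by the likelihood of the observed evidence."""
    weighted_disease = 0.0
    weight_total = 0.0
    for _ in range(n_samples):
        disease = random.random() < P_DISEASE    # sample from the prior
        weight = P_FINDING_GIVEN[disease]        # likelihood of the evidence
        weight_total += weight
        if disease:
            weighted_disease += weight
    return weighted_disease / weight_total

if __name__ == "__main__":
    exact = (0.9 * 0.01) / (0.9 * 0.01 + 0.05 * 0.99)   # Bayes' rule, for comparison
    print(f"sampled ~ {estimate_posterior():.3f}, exact = {exact:.3f}")
```

The randomized estimate converges toward the exact posterior as the sample count grows, which is the sense in which such approximations trade a little accuracy for a lot of speed on networks far too large for exact inference.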
And we made some inroads, but it was clear that the computers were just not fast enough. And we were all despondent, and this was one of the many early nuclear winters of AI. I walked right into it. I remember people stopped saying “artificial intelligence.” I was embarrassed, right? Like, this is not anything like artificial intelligence.
And a bunch of us were casting around looking for other things to do, and I didn’t feel too special as I got a letter in my box at the department. And the letter was from a headhunter that Goldman Sachs had engaged. And I remember the letter. I probably have it somewhere. It said, “I’ve been asked to make a list of entrepreneurs in Silicon Valley with PhDs in computer science from Stanford, and you are on my list.” And in 1993, before LinkedIn, you had to go do some digging to construct that list, and I thought, “I’m broke, and AI isn’t going anywhere anytime soon. And I have no idea what to do, and I have a bunch of college friends in New York, and I’ll scam this bank for a free trip.”
And that’s how I ended up at Goldman Sachs. And it didn’t seem auspicious. I just liked the idea. They were doing a project that seemed insane. The project was, we’re gonna build a distributed, transactionally protected, object-oriented database that’s going to contain our foreign exchange trading business, which is inherently a global business, so we can’t trade out of Excel spreadsheets. And we need somebody to write a database from scratch in C. And fortunately, I had not taken the database classes at Harvard because if I had, I would have said, “That’s crazy. Why would you write a database from scratch?” And I didn’t know anything about databases. And so, I just had the fortune to join as the fourth engineer in the three-person core SecDB design team.
And then in a very lucky move, one day, the boss came into my office and said, “The desk strategist for the commodities business has resigned. Congratulations. You are the new commodity strategist, and go out onto the trading desk and introduce yourself.” He was never going to introduce me to them, and we were kind of scared of them, to be honest. And so, there I was in the middle of the oil trading desk, kind of an odd place for a gay Hispanic computer geek to be in 1994 Wall Street.
David: Let’s fast-forward to the financial crisis. My understanding is that SecDB really helped the firm navigate that period. What was it about SecDB that set Goldman apart from other Wall Street firms, which lost billions and billions of dollars in that moment?
Marty: Yes. Well, this is where we’re gonna start to get into the pop culture, right? Because, of course, you have to mention “The Big Short” when you start talking about these things, right? And so SecDB showed the legendary CFO of Goldman Sachs during the financial crisis, David Viniar, that we and everybody else had a very large position in collateralized debt obligations, CDOs, that were rated AAA. And so in SecDB, a CDO is just another thing, and it has a price, and that price can go up and down, and there are simulations where it gets shocked according to a probability distribution. And then there are nonparametric or scenario-based shocks.
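As an aside, here is a minimal sketch of those two kinds of shocks in code: a parametric shock drawn from an assumed probability distribution, and fixed scenario-based shocks applied to the same position. The position size, volatility, and scenario numbers are made up for illustration; this is not how SecDB is implemented.

```python
import random

POSITION_VALUE = 10_000_000.0   # current mark of the position, in dollars (illustrative)
SCENARIOS = {"mild selloff": -0.05, "severe selloff": -0.30}   # assumed price moves

def parametric_tail_loss(n: int = 10_000, daily_vol: float = 0.02) -> float:
    """Shock the price with normally distributed returns and report a rough
    1%-worst-case P&L (a VaR-style number)."""
    pnls = sorted(POSITION_VALUE * random.gauss(0.0, daily_vol) for _ in range(n))
    return pnls[int(0.01 * n)]

def scenario_pnl() -> dict[str, float]:
    """Apply the fixed, nonparametric scenario shocks to the same position."""
    return {name: POSITION_VALUE * move for name, move in SCENARIOS.items()}

if __name__ == "__main__":
    print("1% tail P&L (parametric):", round(parametric_tail_loss()))
    print("scenario P&L:", scenario_pnl())
```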
And we looked at that and thought, “Wow. We better do something about this very large unhedged position,” namely, “Sell it down, or hedge it.” We didn’t know that the financial crisis was coming. Of course we got, in the press and elsewhere, accused of all kinds of crazy things. Like, “They were the only ones who hedged, so they must have known it was coming.” We were just predictors of the present and thought, “Better hedge this position,” hence “The Big Short.” And the question was, if Lehman fails, what happens then? And we talk about Lehman as if it is a single thing. We had risk on the books to 47 distinct Lehman entities with complex subsidiary, guaranteed, non-guaranteed, collateralized, non-collateralized relationships. And so, it was super complicated.
But in SecDB, it was all in there, and you could just flip it around. You could just as easily run the report from the counterparty side. Now, I make it sound like it was perfect. It was a little less than perfect. We had to write a lot of software that weekend, but the point is we had everything in one virtual place, and it was a matter of bringing it together. So, this is also part of the legend, but it’s also factual. We had our courier show up at Lehman’s headquarters within an hour of its filing for bankruptcy protection, and for the 47 entities, we had 47 sheets of paper with our closeout claim against each of those entities, rolled up firmwide across all the businesses.
And it took many of the major institutions on Wall Street months to do this. And so that was the power of SecDB. And, of course, it was wildly imperfect, but it was something that nobody else had.
David: What impact do you think regulation has historically had on technology’s role in financial services?
Marty: Regulation’s a powerful driver of change, and so is technological change, and some things are just inevitable. I’m a strong believer in capitalism with constraints and rules, and we’ll have a vigorous debate about the nature of the rules and the depth of the rules and who writes the rules and how they’re implemented, and all that matters hugely. But to say, “Oh, we don’t need any rules,” or, “Trust us, we’ll look after ourselves,” I just haven’t seen that work very well.
And so, in some cases, the regulators will say something. For instance, in the Dodd-Frank legislation, there’s a very short paragraph that says that the Federal Reserve shall supervise a simulation, that it will be part of the Federal Reserve’s job to run a simulation of how banks will perform in a severely adverse scenario.
And that was a powerful concept, right? You have to simulate the cash flow, the balance sheet, the income statement several quarters forward in the future. None of this was specified in detail in the statute, but then the regulators came in and really ran with it and said, “You will simulate nine quarters in the future.” Nine quarters in the future, right? The whole bank? All of it? End-to-end?
And then in a very important move, the Federal Reserve governor then overseeing supervision and regulation, Dan Tarullo, said, “We’re gonna link that simulation to capital actions,” whether you get to pay a dividend or whether you get to buy your shares back or whether you get to pay your people, right? Because he knew that that would get everybody’s attention. If it’s just a simulation, that’s one thing. But if you need to do it right before you can pay anybody, including your shareholders and your people, then you’re gonna put an awful lot of effort into it.
So, that caused a massive change and made the system massively safer and sounder. We saw that in the pandemic. There’s actually a powerful lesson for the early days of artificial intelligence in the early days of electronic trading, right? There was a huge effort by the regulators to say, “We’ve gotta understand what these algos are thinking because they could manipulate the market. They could spoof the market. They could crash the market.” And we would always argue, “You’re never gonna be able to figure out or understand what they are thinking.”
That’s a version of the halting problem. But at the boundary between a computer doing some thinking and the real world, there’s some API, there’s some interface. And at the boundary, just like in the old days of railroad control, at those junctions, you better make sure the two trains can’t get on a collision track, right? It’s at the junction where it’s gonna happen. But then when the trains are just running on the track, just leave them running on the track. Just make sure they’re on the right track. That’s gonna be an important principle for LLMs and AIs generally. As they start acting as agents and causing change in the world, we have to care a lot about those boundaries.
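To make the "control at the boundary" principle concrete, here is a minimal sketch of a gateway that enforces hard limits at the interface, without trying to understand what the algo or agent is thinking. The Order type, the limits, and the names are hypothetical, not a real trading or agent API.

```python
import time
from dataclasses import dataclass

MAX_ORDER_NOTIONAL = 1_000_000.0   # hard per-order cap, chosen for illustration
MAX_ORDERS_PER_SECOND = 50         # hard rate cap, chosen for illustration

@dataclass
class Order:
    symbol: str
    quantity: int
    price: float

class BoundaryGateway:
    """Sits at the interface between the model and the real-world system."""

    def __init__(self) -> None:
        self._window_start = time.monotonic()
        self._sent_in_window = 0

    def submit(self, order: Order) -> bool:
        # Rate limit: reset the one-second window, then count submissions.
        now = time.monotonic()
        if now - self._window_start >= 1.0:
            self._window_start, self._sent_in_window = now, 0
        if self._sent_in_window >= MAX_ORDERS_PER_SECOND:
            return False                 # reject: too fast, regardless of intent
        # Size limit: reject anything over the hard notional cap.
        if abs(order.quantity * order.price) > MAX_ORDER_NOTIONAL:
            return False                 # reject: too big, regardless of intent
        self._sent_in_window += 1
        # ...a real system would forward the order to the venue here...
        return True

if __name__ == "__main__":
    gateway = BoundaryGateway()
    print(gateway.submit(Order("XYZ", 100, 50.0)))       # small order: accepted
    print(gateway.submit(Order("XYZ", 100_000, 50.0)))   # $5m notional: rejected
```

The design point is that the checks live at the junction, not inside the model: the gateway never inspects the reasoning that produced the order, only whether the action is within hard limits.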
David: That’s a good transition to the present-day. Talk a little bit about generative AI specifically today. How is this technology different from the AI of your PhD in 1991?
Marty: Well, for full disclosure, I remember the late ’80s, early ’90s, and this program at Stanford, we were the Bayesians, right? And then we would look at these connectionist, neural network people. And I hate to say it, but it’s true, we felt sorry for them. We thought, “That’ll work? Simulate neurons? You gotta be kidding.” And, well, so they just kept simulating those neurons, and look what happened.
Now, in some ways, there’s nothing new under the sun. I had a fantastic talk not so long ago with Yoshua Bengio, who’s really one of the four or five luminaries in this renaissance of AI that’s delivering these incredible results. And he was talking about how his work is based on taking those old Bayesian decision networks and coupling them with neural networks, where the neural networks design the Bayesian networks and vice versa.
And so, some of these ideas are coming back, but it is safe to say that the thread of research, or the river of research, that took this connectionist neural network approach is the one that’s bearing all the fruit right now. And, David, the way I would describe all of those algorithms, because they are just software, right, everything is Turing equivalent, is that they’re very interesting software. They started off with images of cats on the internet.
People love putting up pictures of cats. Well, now you’ve got billions of images that people have labeled as saying, “This image contains a cat,” and you can assume all the other images don’t contain a cat. And you can train a network to see whether it’s a cat or not, and then all the versions of that. How old is this cat? Is this cat ill? What illness does it have? Right? All of these things…
Starting about 10 years ago, you began to see amazing results. And then after the transformer paper, now we’ve got another version of it, which is, fill in the blank or predict what comes next or predict what came before. And these are the transformers and all the chat bots that we have right now. It’s amazing. I wish we all understood, in more detail, how they do the things that they do. And we’re starting to understand it. It all depends on the training set, and it also depends crucially on a stationary distribution, right?
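As an editorial illustration of "predict what comes next," here is a minimal sketch using word-bigram counts instead of a transformer. The tiny corpus is made up; the point is only that the predictions are entirely determined by the training set.

```python
from collections import Counter, defaultdict

# Count, for every word in the training text, what word follows it.
corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
tokens = corpus.split()

next_word_counts: defaultdict[str, Counter] = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in training."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unseen>"

print(predict_next("the"))   # 'cat' -- the most frequent word after 'the' in the corpus
print(predict_next("sat"))   # 'on'
```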
So the reason all this works for “is it a cat or not a cat” is that cats change very slowly in evolutionary time. They don’t change from day to day. But for things that change from day to day, such as markets, it’s a lot less clear how powerful these techniques are gonna be. But here they are. They’re doing amazing things.
We’re using this in my firm, and we’re using it in production, and we’re deeply aware of all the risks, and we have a lot of policies around it. It reminds me a lot of the early Wild West days of electronic trading: we’re authorizing a few of us to do some R&D, but we’re very careful about what we put into production, and we’re starting with the easy things.
David: When other CEOs of large companies come to you for your advice, how are you advising them on how to deploy AI in their organizations? What’s the opportunity you see in the near term and in the middle or long term?
Marty: Really, the first order of business, and this is something that we worked on at Goldman for a long time, and I’m happy that we left Goldman in a place where it’s gonna be able to capitalize on GenAI really, really quickly, is having a single source of truth for all the data across the enterprise, a time-traveling source of truth. So, what is true today, and what did we know to be true at the close of business 3 years ago? Right? And we have all of that. And it’s cleaned, and it’s curated, and it’s named, and we know that we can rely on it because all of this training of AIs is still garbage in, garbage out.
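To illustrate the "what did we know to be true then" idea, here is a minimal sketch of an as-of query over records that carry the date they were recorded. The Fact schema, keys, and figures are hypothetical, not an actual enterprise data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Fact:
    key: str            # e.g. an exposure or a mark, identified by name
    value: float
    recorded_on: date   # when this version of the fact entered the system

FACTS = [
    Fact("exposure:ACME", 120.0, date(2021, 3, 31)),
    Fact("exposure:ACME", 95.0, date(2023, 6, 30)),   # later update to the same fact
]

def as_of(key: str, knowledge_date: date):
    """Return the latest value for `key` that had been recorded by `knowledge_date`."""
    known = [f for f in FACTS if f.key == key and f.recorded_on <= knowledge_date]
    return max(known, key=lambda f: f.recorded_on).value if known else None

print(as_of("exposure:ACME", date(2022, 1, 1)))   # 120.0 -- what we knew back then
print(as_of("exposure:ACME", date.today()))       # 95.0  -- what we believe today
```

Keeping the record date alongside every fact is what lets the same store answer both questions without overwriting history.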
And so, if you don’t have ground truth, then all you’re gonna do is fret about hallucinations, and you’re just gonna be caught in hallucinations and imaginings that are incorrect and not actionable. And so, getting your single source of truth right, that data engineering problem, I think a lot of companies have done a terrible job of it. I’m really excited about the new Gemini 1.5 context window, a million tokens. That’s one I just wanna shout from the mountaintops. Like, if you’ve been in this game and you’ve been using RAG, retrieval augmented generation, which is powerful, but you run into this problem of, I’ve gotta take a complicated doc that references pieces of itself and chunk it. Well, you’re gonna lose all of that unless you have a really big context window.
Breaking that quadratic time complexity in the length of the context window is just monumental. And I think over the next few months, you’re gonna see a lot of those changes. Problems that were really hard are gonna become really easy. I don’t know.
David: What’s your view on the government’s role in AI?
Marty: Well, one of the things that I learned during the financial crisis was a huge amount of respect for the regulators and the lawmakers. They have a really tough job, and it is really important to collaborate with them and to become a trusted source of knowledge about how a business works. And I just lament the number of people who go in to a regulator talking their own book and hoping that the regulator or lawmaker won’t understand it. I think that is a terrible way to approach it, and it has the very likely risk of just making them angry, right, which is definitely not the right outcome.
And so, I’ve been spending a lot of time with regulators and legislators in a bunch of different jurisdictions, and you already heard a bit of what I have to say, which is let’s please not take the approach that we first took with electronic trading. That approach was, write a big document about how your electronic trading algo works. And then step two was, hand that document over to a control group who will then read the document and assert the correctness of the algo, right? This is the halting problem squared. It’s not just a bad idea, it’s an impossible idea. And instead, let’s put a lot of emphasis, a lot of standards and attestations at all the places where there’s a real-world interface, especially where there’s a real-world interface to another computer, right?
So, the analogy is, in electronic trading, there was not a lot you could do to prevent a trader from shouting into a phone an order that would take your bank down, right? How are you going to prevent that from happening? Right? But what you really worried about was computers that were putting in millions of those trades, right? Even if they were very small, they could do it very fast, and you could cause terrible things to happen.
And so, another thing I’m always telling the regulators is please, the concept of liability, right? They start with this idea, “Let’s make the LLM creators liable for every bad thing that happens with an LLM.” To me, that is the exact equivalent of saying, “Let’s make Microsoft liable for every bad thing that someone does on a Windows computer.” Right?
They’re fully general, and so these LLMs are a lot like operating systems. And so, I think the regulation has to happen at these boundaries, at these intersections, at these control points first and then see where we go. And I would like to see some of these regulations in place sooner rather than later. Unfortunately, the pattern of human history is we usually wait for something really bad to happen and then go put in the cleanup regulations after the fact, and generally overdo it.
That was the history of Dodd-Frank. Like, we don’t really know what went wrong in the financial crisis, so let’s just go regulate everything. And I think 99% of it was red tape that did not make the world a better place. And some of it, such as the CCAR regulations, was profound and did make the system safer and sounder. And I would want us to do those things first and not just the red tape.
David: Talk through the implications that you’re seeing for generative AI in life sciences and biotech.
Marty: Well, it’s epic, isn’t it? Right? So, I had an amazing moment just a couple months ago. I had the opportunity of being the fireside chat host for Jensen of NVIDIA at the JPMorgan Healthcare event, at a night that Recursion was sponsoring. And we really talked about everything he learned from chip design. So, Jensen, incredibly modest, will say, well, he was just part of that generation of chip designers who were the first to use software to design chips from scratch. And it was really the only way he knew how to design them. And he likes to say that NVIDIA is a software company, which it is, right? But that seems kind of counterintuitive. It’s supposed to be a hardware company.
And he talks about the layers and layers of simulations that go into his business. Those layers do not go all the way down to Schrödinger’s equation, and we can’t even do a good job on small molecules, right, solving Schrödinger’s equation for small molecules. But it does go very low, and it goes very high, to what algorithm is this chip running, and that’s all-software simulation. And he said in that chat that, at some point, he then has to press a button that says, “Take this chip and fabricate it,” and the pressing of that button costs $500,000,000.
And so, you really wanna have a lot of confidence in your simulations. Well, drugs have that flavor, very much so, except they cost a lot more than $500,000,000 by the time they get through phase 3. And so, it seems obvious to all of us that you ought to be able to do these kinds of simulations and find the drugs.
Now, the first step is gonna be just slightly improving the probability of success of a phase two or phase three trial, and that’s gonna be incredibly valuable because right now, so many of them fail, and they’re multibillion-dollar failures. But eventually, will we be able to just find the drug? Right? The needle-in-the-haystack nature of this problem is mind-blowing. Depending on the size of the carbon chain, but let’s just pick a size, there are about 10,000 trillion possible organic compounds, and there are 4,000 approved drugs globally.
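For scale, a quick back-of-the-envelope on the figures just quoted, taken literally as stated in the conversation:

```python
# Roughly 10,000 trillion (1e16) candidate organic compounds versus
# about 4,000 approved drugs, per the figures above.
candidates = 10_000e12
approved = 4_000
print(f"about {candidates / approved:.1e} candidates per approved drug")  # ~2.5e12
```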
So that’s a lot of zeros. And if AIs can help us navigate that space, that’s gonna be huge. But I’m gonna bet that we will map biology in this way. It’s just that biology is so many orders of magnitude more complicated than the most complicated chip. And we don’t even know how many orders of magnitude and how many layers of abstraction are in there. But the question is, do we have enough data so that we can train the LLMs to infer the rest of biology, or do we need an awful lot more data? And I think everybody’s clear we need more data.
David: Well, Marty, thank you so much for your time. Always a pleasure. You’ve had such a fascinating career, and we really appreciate you spending time with us.
Marty: David, great talking with you. Be well.
“In the Vault” is a new audio podcast series by the CFI Fintech team, where we sit down with the most influential figures in financial services to explore key trends impacting the industry and the pressing innovations that will shape our future.