At the CFI American Dynamism Summit, Senator Todd Young (R-IN) spoke with CFI General Partner Martin Casado about the importance of open innovation and American leadership in AI, and why we need to support AI research at all levels — from the classroom to the war room.
Here is a transcript of their conversation:
Martin Casado: Great. Well, we have a lot of stuff to cover. We’re going to cover, in general, how policy intersects with AI, how we’re thinking about it, and how we keep the United States ahead. As part of that, you had this Senate AI Insight forum. I don’t think everybody knows what that is, so I thought maybe you could first just kind of describe what that was. Then we can talk about what the follow-up has been.
Sen. Young: Sure. Martin, thank you so much for having me. This is a great event. I understand your speakers shared some great information today. But for those who may not be familiar with how Congress typically does its business, when we’re really following the normal processes, we work through the committee process.
So if we had gone the traditional route in trying to address artificial intelligence issues, that would have meant that a number of committees of jurisdiction would have started to hold hearings on artificial intelligence in a very public way. Members of their staff would have prepared… hours before each given briefing, and we would have learned what we could from those experts who briefed us over the course of two or three hours. But oftentimes, members aren’t able to dive deeply into the essence of various policy issues when we take that approach.
So instead, Senator Schumer, working with three other senators, decided to deviate from that process. We held three briefings first and then nine AI Insight forums, as you mentioned. These were roundtable events, long tables full of talented innovators, entrepreneurs, policymakers, and others. No cameras were present, unlike in our typical hearings.
So there was a lot of candid conversation on dedicated topics. We held Insight forums on national security, alignment, innovation, and other topics. Members were invited to witness these guided conversations and take extensive notes, and coming out of that, we have a lot of work product to deal with, recommendations, policy concerns, and other things that will lead to actual committee hearings and legislation being proposed, as is traditionally done. So those were really constructive. Members gave a lot of positive feedback about it. We learned a lot in the process.
Martin: What are your key takeaways having sat through them?
Sen. Young: Well, two really. I would say the first is just a general sense that this isn’t a particularly partisan issue. I would say one could probably detect slight differences in approaches when it comes to regulation versus innovation as you consult with members on either side of the aisle.
But as it related to this topic, and this gets into the second issue, there was a general embrace by my colleagues, after they heard from some of the world’s best minds, that we needed to regulate, but that it would have to be a light-touch approach. We had to be very careful not to go overboard and constrain what is right now a leading-edge industry in which the United States is in the lead. We want to keep it that way.
Martin: Yeah, do you think we can expect some legislative framework to come out of this? Is there a time frame, or is this still kind of too early to…
Sen. Young: Well, I think we can. My hope is that we can expect all kinds of legislative efforts, not just this year, but in coming years. I don’t think we have to rush to do everything at once. We need to get clarity on a lot more of people’s concerns, but more importantly, what they perceive to be their opportunities as the technology evolves, as we learn more about the technology, as new versions are released. So sort of a wait-and-see approach.
I do, however, think that there will be some things we agree on now. We agree that there needs to be some resident expertise within the White House on an ongoing basis that other departments of government look to when it comes to issues surrounding artificial intelligence. I think we need to revamp our human resources approach so that we embed more expertise in every agency of government and we can dialogue with businesses, consumers, and others about things within a department’s field that touch on artificial intelligence.
And then there’ll be some things, as you heard about earlier today, that deal with national security where we’re already behind. We need to make some key investments in people and platforms. And so all of those things are among the things that I think in the coming months will be addressed through the committees of jurisdiction in a bipartisan way if politics doesn’t get in the way. And as it relates to most of this, I don’t think it will.
Martin: So this is a bit of a personal story. So I did my undergrad in Flagstaff, Arizona, small Arizona town. And at the university, which is not a notable university, there was a lot of investment from DoD in supercomputing and they’ve bought supercomputers and I worked on them as a researcher, and that led me to a job in Lawrence Livermore National Labs. And then at Lawrence Livermore National Labs, I worked in the weapons program. I worked on these huge supercomputers with a bunch of other people that came from similar paths. And I remember thinking at the time, isn’t it amazing that we live in this country where people understand there’s new technologies that are very powerful and they invest all the way down to Flagstaff, Arizona, to stay ahead and find the people that are very interested in doing that.
Having gone through that, and then also having gone through the intelligence community, which also embraced the internet and a lot of the network technologies, it feels like we’re in a bit of a doctrine shift now: when new technologies come, we are more afraid of them than eager to get ahead of them. And so, I just wanted to ask you, is my perception correct that there has been a bit of a doctrine shift in the United States, where we worry about the implications of technologies before we’re able to actually harness them and become leaders in them, or is that maybe just a perspective from the outside?
Sen. Young: I think it may be fair. I think this is one of the reasons these AI Insight forums were helpful to sort of keep certain colleagues honest that might have been ready to move forward aggressively with various regulatory actions. I can’t promise that won’t happen in certain discrete areas. But I think there’s a tendency to catastrophize different technology areas as opposed to different potential outcomes.
Historically, one of the many reasons we’ve been a dynamic country and created a favorable regulatory atmosphere for our technology-developing businesses is that we have laws that reflect our values, and we apply those laws to technologies only if they run afoul of certain prohibited behaviors that normally don’t even mention the technology, right? And if we can take, at least for the most part, a tech-agnostic approach as it relates to AI development and adoption, I think that would be more helpful. So no special carve-outs, no special, you know, benefits, except explicitly in those rare instances where there’s a compelling argument that we need to, right?
Martin: So I got a sense… You know, so I went to, you know, one of these AI meetings, and there was a lot of concern around AI. And I listened to the concern. I was like, “Wait, that’s not AI. That’s all computers.” I was like, “That’s not AI. That’s the internet,” as someone that’s been in these things for a very long time. And so I’ve gotten the sense that maybe there’s a response to the prior battle. People felt like we didn’t get it right with the internet, and therefore they’re trying to use that consternation now but for a different technology.
Sen. Young: Well, I think that’s absolutely happening. It’s happening with respect to data right now.
Martin: Well, let me tell you what the danger is. So having been through the rise of the internet very closely, there were things that were very unique to the internet, right, things like asymmetric attack, right? The more that you invested in the internet, the more vulnerable you were. And if you’re dealing with a terrorist threat, that’s a big deal because they didn’t have infrastructure to take down. We did. So that’s a very particular thing that impacted, like, you know, defense. You know, there’s other things like exponential growth, things can get out of control and it’s everywhere. There’s these huge implications. I will say they actually don’t exist with AI, but somehow people are imbuing AI with it.
And so I’m just wondering if, like, you know, there’s maybe a lack of literacy, or maybe people feel like they didn’t really get it right in the previous war and that’s being applied now. And if that’s the case, what can we do to educate or kind of get ahead of it?
Sen. Young: Well, this is helpful. You’re in the right town, right, but to the extent some of you have not been visiting with members of the Senate, members of the House, who will soon be considering… You know, we’ll be playing with live rounds, as we said in the Marine Corps, right? We’ll be considering actual bills, we’ll be developing bills, and we’ll be voting on them.
And there will still be some members who haven’t done extensive homework and carry superficial views of what constitutes artificial intelligence technology. And we need all of you to disabuse them of this being, you know, an extension of social media. This is very different.
And data, we’re going to have to think about data perhaps in a different way. So I do think that some are either confused. There are others who are just probably using the opportunity of this legislative effort, which is ahead of us, to try and pass bills that they had prepared two decades ago, right?
Martin: Yeah, yeah, exactly. Right. So it’s like now’s my time.
Sen. Young: I’ve done it in other contexts, but if we do it in this context, it could be quite damaging.
Martin: And how much has the economic reality hit the calculus when it comes to these discussions? Because, listen, none of us really know the future. I will tell you when economics change this much, right, when the marginal cost of creation goes to 0, this is gonna be another 10x expansion in value for sure. And historically, the U.S. captured that, right? That’s why, you know, the internet age was so great for us, right? And so, like, is that part of the calculus, or is the focus really on, like, how do we keep this stuff from hurting us?
Sen. Young: We tried to make this part of the calculus. In fact, we spent at least half of the time, perhaps we should have spent even more, on the AI Insight forums discussing the upsides. It’s not as though we have to deliver the upsides ourselves. The upsides are going to be a result of innovators and entrepreneurs and investors. We understand that. But I thought it was very important for us to focus on that to provide some balance for the staff members and my colleagues who were present. So constantly reminding people of that, I think, is going to be very important.
Listen, there are, I think, what you may have characterized in the past as some fantastical doomsday scenarios, and there are a lot of those. And there may even be a couple that are real and that we need to hedge against, right? And we need to take those seriously, but we cannot become so fixated on those that we lose the forest for the trees. And, you know, we can do this. We can do this. It’ll be complicated. It’ll be challenging. We won’t get it all right in the beginning. And we shouldn’t try and tackle everything in the beginning, which is a bit of a concern, but I think we’ll get it.
Martin: It’d be great for you to talk about, like, on the positive side, your takeaways from the discussions on how AI can benefit America. That’s not just strictly economic, like, you know, from an innovation or a tech or daily life or whatever.
Sen. Young: We can rapidly… And it’s very difficult for us to quantify some of these things. I could sound very specific and learned by just throwing out figures, within X years we’ll solve cancer, but no. We’re going to rapidly compress the treatment and drug development timeline, something I’ve heard from our pharmaceutical makers. We will come up with drought-resistant, climate-change-resistant crops to help feed the world, become more productive, and drive down the cost of food. We’ll develop the ability to have tailored, personalized tutoring services and mental health services, vastly expanding the workforce in those areas, leveraging technology so that we’ll have mental health providers in areas where we don’t.
We can clean up the environment in all sorts of creative ways, ways that our artificial intelligence technologies will…you know, they’ll illuminate some of the solutions to hard challenges. We’ll have ways of reading health records so that we can discern probabilities of getting certain types of infections, certain prophylactic measures we can take to extend, improve, and save people’s lives. Today, we don’t have the benefit of those sorts of AI insights. We can decrease the rate at which mistakes are made within a medical healthcare context. So I mean, it goes on and on. And this is what we should be talking about.
Frankly, I’m sitting on a commission right now pertaining to synthetic biology. And so much of advanced biology these days really involves the use of artificial intelligence technologies. And so I have charged the commission with educating me and members on specific things, innovations from material science to medicine, to environmental health that I can start touting because that’s going to be very, very powerful for my constituents and also for colleagues.
Martin: You know, as an outsider, it feels to me like the general discourse is getting more sane, and so I’ve actually gotten a lot of faith in the process.
Sen. Young: Well, I tend to make people feel that way.
Martin: With you, but I mean in general. I just meant the national discourse. I appreciate it. But the national discourse, it does feel like…
Sen. Young: I just thought you meant in this conversation.
Martin: Also this one, too. Yeah.
Sen. Young: Okay.
Martin: But no, I think people are becoming much more reasonable about it and so forth. And I just feel like you’re one of the people that’s really been advocating for AI research. I think it’s the right thing to do. Thank you very much for doing that. Is your job getting easier? Am I right? Or has it shifted noticeably since it started, say, two years ago?
Sen. Young: You know, I would say we were able to foresee the need for more research dollars as it relates to artificial intelligence. Senator Schumer and I helped get the CHIPS and Science Act signed into law. We made a massive allocation for additional federal research through places like the National Science Foundation and the Department of Energy. It wasn’t easy to get that included with the CHIPS piece, but arguably it will be even more consequential to our economic security and our national security in the longer run than the CHIPS piece itself.
So that authorization will allow us to now appropriate money for AI, if members of Congress can be persuaded. And so I feel like half of that hard work is done. And we’ve got a lot more work to do, but I think… What was the second part of your question? I’m sorry.
Martin: Well, I was just wondering if your job was getting easier, but I’m so glad you brought up the CHIPS Act. So I’d love for you to answer, like, you know, is your job getting easier? Because it seems to me that, like, there is a lot of rationality in the discourse. I would love to get back to the CHIPS Act because I think it’s probably the most significant piece of legislation in the last 50 years for innovation. So we’ll do that next.
Sen. Young: Actually, that’s where I was headed on the point. Thank you for centering me, again. But the CHIPS Act. Yeah, so the job’s getting easier because we passed the CHIPS Act. It’s not just the authorization of research; we also already made the argument, very recently, that something that falls outside of the direct DoD context, as most of the CHIPS and Science Act does, is indeed a national security investment. So we got the institutional muscle memory used to that notion. That is ahistorical, at least in modern history. My colleagues typically don’t think of an investment in the National Science Foundation as a national security investment, but it is.
And then we also were able to make the case to my colleagues that by making these sorts of critical investments in research and in next-generation technologies, as we did through the CHIPS and Science Act, it’s going to lead to a lot of economic growth. And we were able to personalize that argument state by state. So that’s the very same argument that we have for this situation, artificial intelligence.
Martin: So this is going to be repeating a little bit, and I apologize. I just think this is so important. Many of the D.C. insiders here understand the CHIPS Act and why it’s important. A lot of people here who come from the investment community, or who are founders or whatever, probably don’t understand the implications. I view it as literally one of the most significant pieces of legislation in 50 years on innovation. It’s a huge, huge, huge benefit. It was the right thing to do. And so if you could just take a couple minutes to describe, at the highest level, for those who are not insiders, what it is. I think it’s so important to get this message out.
Sen. Young: So the CHIPS and Science Act was, at once, a $53 billion investment in both research and incentives for the semiconductor industry. The incentives were to reshore some of our manufacturing capacity so that our supply chains would not be as vulnerable to interruption as they, of course, were during the global pandemic, and as they could be, God forbid, if there were a geopolitical effort to interrupt those supply chains, say, by the Chinese Communist Party taking aggressive actions toward Taiwan.
It was also a national security effort because, of course, we need microprocessors, and we need our own radiation-hardened, domestically manufactured microprocessors to go into nuclear weapons, our radar systems, and all manner of other things. So that was the microprocessor piece, and we are now in the process where some of that money is starting to flow. Over $200 billion of private capital has been invested, and we’re not even at $1 billion, not even close to it, of federal monies released. So it’s paying handsome dividends. The market is responding, and we’re becoming less risky in our supply chains. The idea was not to become independent of other countries.
And then there’s this whole other “and Science” piece, which, for the purposes of this conversation, made big investments in research, which I mentioned earlier, not just in artificial intelligence. That research can also flow to hypersonics, quantum computing, synthetic biology, autonomous systems, and other areas, far upstream, of course, but a bit more applied than the curiosity-driven research that most people tend to associate with the NSF.
Martin: Yeah, that’s great. So a number of investors are like, these free markets solve everything, this kind of almost libertarian bent. I am not that. You know, I worked at Livermore. I worked for DOE. I worked for two war efforts. Like, I’m a huge believer in the government. I’m a huge believer in national institutions and involvement. That said, there’s been an ongoing dialogue about what the right roles are in public-private partnerships for things like AI. Like, what is the right balance? And so I’d love your view on how we can work together on this, where the government stops, you know, where the free markets pick up. How do you think about that as we go forward?
Sen. Young: Well, I look to history. I look at what we did in the Space Race. I look at the innovations that have occurred through our DOE labs, like the one you worked at, like the one that developed fracking technology before the frackers claimed it as their own, right? I mean, so many innovations have come out of the toil of our researchers in our DOE labs, through our land-grant colleges, and the other constellation of research agencies. So we need to keep making those investments.
Over the years, the federal government started to pull back from research that wasn’t curiosity-driven, that wasn’t basic, theoretical research, in other words, from the more applied work. We need some more applied research. I think that’s one of the lessons we’ve learned in recent history.
Beyond that, we need clear rules of the road, clear regulations. Those regulations, and I use my words very carefully here because there are exceptions to everything, should typically be technology-agnostic, so that we don’t favor or constrain different types of technologies within the market. That clarity helps investors, business people, and consumers. And then we’re going to have to work with our partners and allies on the development of standards for some of these, what I’ll call, platform technologies. There aren’t many. Artificial intelligence and synthetic biology are the ones that really come to mind.
So there’s a diplomatic component. If we want our values embedded in follow-on generations of AI technologies, we’d better develop and design those technologies here, but we also need to embed the standards we have as they relate to privacy, consumer protection, and other things into those technologies, rather than leaving it to, I’ll pick on the Chinese Communist Party again.
Martin: Yeah, perfect. Okay, so we have the last minute. So I only thought it was fair because when I was at the Insight forum, you asked me this question. So I’m going to ask you two questions that you asked me. The first one, what would…
Sen. Young: It may be a hard question.
Martin: It’s not. What would you give the chances of P doom?
Sen. Young: The probability of P doom?
Martin: The probability of P doom.
Sen. Young: This is colored by my optimism, right? But low. You didn’t ask me to make it quantitative. Yeah, low, but not non-existent, which is how most of the, you know, doomsday scenarios are. They’re low probability, high cost, and we need to hedge against them. But let me say this: the first step in hedging against them is to really seek to understand them, to study them a lot better, so that before we take more costly action, constraining the opportunity ahead for humanity, we really know what we’re talking about.
Martin: I love that. One last very quick question, with my flair. Do you think that P doom, where P doom is the probability that we as humanity have a catastrophic event, do you think it’s greater with or without AI?
Sen. Young: I think in the short run… You’ll see I’m not pandering.
Martin: No, I love it. This is great.
Sen. Young: I think here’s what we’re probably going to have. We’re going to have, in the short run, some unsteadiness, right, because we’re trying to learn countermeasures. We’re trying to figure out how systems work. We’re trying to figure out all sorts of things. And then in the long run I think we can actually push the risk down.
Martin: A hundred percent. I agree. Awesome. Thank you so much. You were wonderful. Thank you so much, everybody.