Human nature fears the unknown, and as AI advances rapidly, concerns naturally follow: uncanny robocalls, data breaches, and floods of misinformation. But what does security actually look like in the era of large language models?
In this episode, we hear from security leaders at OpenAI, Anthropic, and Google DeepMind. Matt Knight, Head of Security at OpenAI; Jason Clinton, CISO at Anthropic; and Vijay Bolina, CISO at Google DeepMind, are joined by Joel de la Garza, operating partner at CFI and former chief security officer at Box and Citigroup. Together, they explore how large language models are reshaping security, from shifts in offensive and defensive strategies to misuse by nation-state actors, prompt engineering, and more. In this fast-changing environment, how do LLMs transform security dynamics? Let's find out.
The CFI Podcast discusses the most important ideas in technology with the people building it. Each episode aims to put listeners ahead of the curve, covering topics like AI, energy, genomics, space, and more.