In early 2024, the notion of high-fidelity, believable AI-generated video still seemed a distant prospect to many. Yet, a mere few weeks into the year, OpenAI unveiled Sora, its new state-of-the-art text-to-video model capable of producing videos up to 60 seconds long. The output shattered expectations, even for other builders and researchers within generative AI, sparking widespread speculation and awe.
How does Sora achieve such realism? Are explicit 3D modeling techniques or game engines at play?
In this episode of the CFI Podcast, CFI General Partner Anjney Midha connects with Stefano Ermon, Professor of Computer Science at Stanford and a key figure at the lab behind the diffusion models now used in Sora, ChatGPT, and Midjourney. Together, they delve into the challenges of video generation, the cutting-edge mechanics of Sora, and what it all could mean for the road ahead.
The CFI Podcast discusses the most important ideas in technology with the people building it. Each episode aims to put listeners ahead of the curve, covering topics such as AI, energy, genomics, space, and more.