In this episode — cross-posted from our 16 Minutes show feed — we cover all the buzz around GPT-3, the pre-trained machine learning model from OpenAI that’s optimized to perform a variety of natural language processing tasks.
It’s a commercial product, built on research; so what does this mean for both startups AND incumbents… and the future of “AI as a service”? And given that we’re seeing all kinds of (cherry-picked!) examples of output from OpenAI’s beta API being shared — how do we know how good it really is or isn’t? How do we tell the difference between something that “looks like” a toy and something that “is” a toy when it comes to new innovations?
And where are we, really, in terms of natural language processing and progress toward artificial general intelligence? Is it intelligent, does that matter, and how do we know (if not with a Turing Test)? Finally, what are the broader questions, considerations, and implications for jobs and more? Frank Chen explains what “it” actually is and isn’t, and more, in conversation with host Sonal Chokshi. The two help tease apart what’s hype and what’s real here… as is the theme of 16 Minutes.
The CFI Podcast discusses the most important ideas within technology with the people building it. Each episode aims to put listeners ahead of the curve, covering topics like AI, energy, genomics, space, and more.