In this episode of the AI + CFI podcast, CFI General Partner Anjney Midha shares his thoughts on how hardware for artificial intelligence might evolve in the years to come as we place more emphasis on AI inference workloads. Improvements in sensors, chips, models, and more could yield remarkably useful models that run locally, enabling wearable AI products that evolve the human experience.
Here are some highlights from the episode:
[5:04] “If you go all the way back to the first neural networks in 1958, that’s when we start to see reasoning happen through computers. . . And now we’re at this next phase of massive transformers. . . That’s one lineage, which is the lineage of reasoning and intelligence.
“In parallel, we’ve had this other lineage in computing, which is interfaces. And you started with the command line and keyboards. And then that gave way to the GUI with a mouse . . . And then that has led ultimately to mobile interfaces with touch as an input mechanism. . . The question is: Where are we going next?
“I think we have really good reasons to believe that the next interface will be an AI companion that’s some combination of text, voice, and vision that can understand the world. That’s almost a better predictor of where hardware is going, because the history of computing so far has shown that whichever one of those lineages is undergoing a moment of resonance with customers ends up dominating for the kinds of workloads that then get to scale for the next 10-15 years. And I think we’re in the middle of both a reasoning and an interface shift, and that’s what’s exciting right now.”
[18:55] “I think what we need to be paying attention to on the hardware side is: Are there really low-cost form factors that startups can innovate around because there’s an existing supply chain that an incumbent, like an Apple or a Google, has essentially subsidized at scale over the last decade? And that’s what’s so exciting about things like the Apple Vision Pro: When Apple gets into the game, it results in second- and third-order effects of new supply chains showing up for sensors like depth sensors and LIDARs, and for passthrough mixed reality displays, which then give startups the license to go experiment with new form factors.”
[35:13] “There were decades of research at many DARPA-, DOD-, and university-funded labs that took the neuroscience-first approach to inventing computers. That has proven to be mostly a distraction. It turns out that just predicting the next token or the next word a model should say is a remarkably useful way to attack intelligence and design computers, instead of . . . getting computers to learn like human beings. . .
“I will say what is now happening is that because transformers are so remarkably effective at what they do, most major industrial labs have doubled down on that architecture. It’s not clear whether that will result in multi-step reasoning of a kind that is essentially unconstrained. It’s not clear that the current architectures will get us all the way to the end goal.”
If you liked this episode, listen to our two inaugural episodes from April 12.
Artificial intelligence is changing everything from art to enterprise IT, and CFI is watching all of it with a close eye. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.