AI + CFI

Scoping the Enterprise LLM Market

Naveen Rao, Matt Bornstein, and Derrick Harris

Posted April 12, 2024

Naveen Rao has been building artificial intelligence technologies and companies for more than a decade. He founded Nervana Systems (acquired by Intel) and MosaicML (acquired by Databricks), and now serves as vice president of generative AI at Databricks. From chips to models, there are few people with a better pulse on how enterprises are using AI.

In this inaugural episode of the AI + CFI podcast, Rao joins CFI partner Matt Bornstein and CFI enterprise editor Derrick Harris to discuss the current state of large language model (LLM) adoption, as well as how LLMs will influence chip design and software refresh cycles. He also shares some of his personal story of watching AI technology — and awareness — grow from fringe movement to mainstream phenomenon.

Here are just a few of the many highlights from Rao:

[6:21] “I think the transformer being a standard paradigm is a good thing for hardware vendors, for sure, because it gives them an opportunity to actually come into the game. And that’s what we’re going to see this year. And that’s why I think this year is where we’ll actually see some competition [for Nvidia].

“Is it [good] for the industry? I think it’s a bit of an over-rotation on the architecture — for now — but that’s just how these things go. We’ve got something that works [and] we keep chasing it. I think whatever [is next], it’ll have to be some sort of a modification of that paradigm to move forward.”

[8:26] “In Nervana, for instance, we were very focused on a particular set of primitives, like multilayer perceptrons and convolutional nets. But then we had things like ResNets, which are convolutional nets, but have a different flow of information. That presented some issues. We changed the way we potentially will do convolutions: Can we do them in the frequency domain instead of the time domain? That actually changes the motifs again.

“A lot of that kind of worked in favor of something like a GPU that was a little bit more flexible. But now that we have something that’s more stereotyped, like with the transformer, it gives us an opportunity to go build something a little bit less flexible, but more performant.”

[18:35] “Now, everywhere, we’re seeing undergrads who come out of schools like Stanford or Berkeley or whatever, who understand a lot about an LLM and how to tune it and make it do what they want. They know IFT, SFT, RLHF — they know all this stuff now, at least conceptually. So I think the talent is getting to a point where it’s proliferating into many enterprises. It’s just, you’re not going to see the density [as inside a large AI research lab]. You’re not going to see a hundred-person infra team in these lines of business; you’re going to see a five-person infra team. So they need tools that abstract stuff.”

[26:37] “[T]hat’s the paradigm that shifted in my mind . . . pure supervised learning required you to go and build a very high-quality data set that was completely supervised. That was hard and expensive, and you had to do a bunch of ML engineering. So that didn’t quite take off; it was just too hard.

“But now we can get this sort of smooth gradation of performance, where I say, ‘Well, I have this base model that’s pretty good — understands language, understands concepts. Then I can start to layer in the things that I do know. . . And if I don’t have a ton of information, that’s OK. I can still get to something which is useful.’”

[39:08] “[I] went back to get a Ph.D. in neuroscience for one reason: Can we actually bring intelligence to machines and do it in a way that’s economically feasible? And that last part is actually very important, because if something is not economically feasible, it won’t take off. It won’t proliferate. It won’t change the world.

“So I love building technologies into products, because when someone pays you for something, it means something very important. They saw something that adds value to them. They are solving a problem that they care about. They’re improving their business meaningfully — something. They’re willing to part ways with their money and give it to you for your product. That means something.”

If you liked this episode, you can also listen to the other episode we published this week: Making the Most of Open Source AI. It was recorded during a panel discussion with Jim Zemlin (Linux Foundation), Mitchell Baker (Mozilla), and Percy Liang (Stanford; Together AI), and was moderated by CFI General Partner Anjney Midha.

More About This Podcast

Artificial intelligence is changing everything from art to enterprise IT, and CFI is keeping a close eye on all of it. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.
