There are few terms in the world of artificial intelligence that provoke more of a reaction than a simple four-letter word: open. Whether it’s industry debates over business models and the actual definition of open, or governments actively discussing how to regulate models, seemingly everyone has an opinion on what it means for AI models to be open. The good, the bad, and the ugly.
But to be fair, there’s a reason for this. In a world where many developers have come to expect open source tools at every level of the stack, the idea of powerful models locked behind enterprise licenses and corporate ethics can be disconcerting — especially for a technology as game-changing as AI promises to be. More broadly, it’s a matter of who has the ability to innovate in the space, and to whose release schedules and guardrails AI builders are beholden.
In this episode of the AI + CFI podcast, we’re replaying a panel discussion from back in February, which focuses on the state — and future — of open source AI models. Led by CFI general partner Anjney Midha, the discussion featured three panelists (Jim Zemlin, Linux Foundation; Mitchell Baker, Mozilla; and Percy Liang, Stanford / Together AI) who’ve thought a lot about this topic and have battle scars from decades working in open source. They discuss how today’s AI moment compares with previous debates over open source, share their definitions of open, and offer advice on how the AI community can best ensure an open future.
Here are some highlights from the discussion (slightly edited):
[16:09] Mitchell Baker: “One of the things that the open source community managed before was to be able to live with each other across a spectrum of definitions. I mean, we fought, but that fight came out in the open source licenses. . . These are really knockdown fights, like, ‘This is my constitution and this is my community.’
“But there was a canonical definition, and people could be at different places on the spectrum. And so I think right now it would be very useful to have an open something, not open source, but open. And as a community to say, ‘OK, this is full open. What you’re talking about, this is some middle piece,’ and have some places along the spectrum and be able to be united about some piece of it.
“Now, there’s going to be differences and we’re going to probably fight among ourselves about whether maximalism is the one true way. But I think in the environment today, which is much more high pressure than early open source, [we must be] able to accommodate the nuances within a community and understand throughout that spectrum there’s something about open that we’re looking for.”
[19:15] Percy Liang: “I feel like some of the [regulation] decisions are . . . based on speculation and uncertainty, because we just don’t have evidence. For example, I think there’s a lot of concern about these models being used to generate disinformation or helping people build bioweapons and so on. . . And all of that is true, you can probably prompt Llama 2 to get it to tell you some things, but the question is, what do you do? Do you shut down Llama 2?
“I don’t think that particular case makes sense, because if you take a look at the whole ecosystem, well, there are other ways you could potentially get the information. Maybe it gets the information to you faster, and now you have to think about how much extra risk that involves. And then there’s also the dissemination of misinformation or the manufacturing of the bioweapon, and maybe regulation should be targeted more downstream, as opposed to upstream on the actual raw model.
“So these are discussions that need to be had about the appropriate reaction. But, yes, of course these technologies can be misused. I think that’s sort of a given, but how you respond to that is something that actually needs nuance and scrutiny.”
[21:13] Jim Zemlin: “Open source communities are very good at a lot of things. One thing they are not good at is explaining themselves collectively to policymakers in simple terms. Having a more unified and clear voice is something that I think is pretty difficult. . . I think there’s an opportunity there to educate policymakers on the nuances of the technology stack, where things are more or less open — is there too much concentration at the hardware level, at the building blocks level, at the data level?
“[T]here’s also an opportunity [to sidestep the] classic ‘when tech makes a problem, the answer to it is always more tech’ [response]. In this case, I think there’s a little work we could do to head off some of the most immediate concerns that are happening in real time that regulators would look at.”
If you liked this episode, you can also listen to the other episode we published this week: Scoping the Enterprise LLM Market. It features guest Naveen Rao, a two-time AI founder who’s presently vice president of generative AI at Databricks.
Jim Zemlin is executive director of the Linux Foundation.
Mitchell Baker is executive chair of Mozilla Corp.
Percy Liang is an associate professor at Stanford and a cofounder of Together AI.
Anjney Midha is a general partner at Andreessen Horowitz, where he invests in AI, infrastructure, and open source technology.
Derrick Harris is an editor at CFI, managing the content workflow across the Infra and American Dynamism teams.
Artificial intelligence is changing everything from art to enterprise IT, and CFI is keeping a close eye on all of it. This podcast features discussions with leading AI engineers, founders, and experts, as well as our general partners, about where the technology and industry are heading.