There’s much more to generative AI than heroic avatars and tongue-in-cheek art. The potential applications are widespread, from drug discovery and therapy to writing, game development, education, ecommerce, and more. Last week, five CFI partners met on Slack to debate some big questions: What tasks can AI really tackle and how do we measure complexity? What’s more important, the quality of the output or the emotional resonance? Is AI capable of producing literary masterworks, or is it more of a “magic trick” for spinning up passable fan fiction? And is AI the solution to loneliness?
Connie: Looking at the app store in the U.S. today, 4 of the top 6 free apps are generative AI apps. So many developers are productizing the technology. I think it shows that we’re in the earliest innings for apps, that it’s not winner-take-all, and that consumers are very hungry for products that help them express their creativity. Twitter avatars show how fast adoption can spread.
Matt: Avatars have been a great consumer test case so far. But now we need to develop a clear sense of what AI is good at (fault-tolerant, low-cognitive-complexity tasks, etc.) and what it does poorly. That will be key to figuring out future use cases.
Jack: There are some use cases that are well suited for AI (“writing prose”) and others that are not (“reversing words”), but overall I think tasks that are complex to grasp, like deep math or code, are some of the last dominos to fall for Large Language Models.
Jack: Content is pretty robust to error, as there is usually a human choosing to use it or doing post-production. The games team created a chart for key content use-cases:
Vijay: Yeah, with DALL-E, we can make five images and pick the best.
Jon: Right. That’s why we’re seeing game developers across the industry incorporate generative AI in their art production pipelines. They’re mostly starting with 2D concept art for characters and environments, with the hope of extending to production-ready 3D assets one day. It’s a way of saving time and speeding the concepting process for human artists.
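To make the “make five and pick the best” workflow concrete, here is a minimal sketch assuming the pre-1.0 `openai` Python package’s DALL·E image endpoint; the prompt, image size, and file names are illustrative only, not anything the group discussed.

```python
# Sketch: generate several candidate images for one prompt and let a human pick.
# Assumes the pre-1.0 `openai` package and an OPENAI_API_KEY in the environment.
import os
import urllib.request

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "concept art of a moss-covered stone door in an ancient forest ruin"

# Ask DALL-E for five candidates in a single call.
response = openai.Image.create(prompt=prompt, n=5, size="512x512")

# Save every candidate locally; a human (or a later ranking step) picks the best one.
for i, item in enumerate(response["data"]):
    urllib.request.urlretrieve(item["url"], f"candidate_{i}.png")
    print(f"saved candidate_{i}.png")
```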
Matt: Do you all think complexity is objectively measurable? It seems like right now we define it retrospectively, based on what computers can do.
Jon: Another potential measure for complexity is how many humans need to be involved in creating something. So if you look at text, one person can write a novel or poem, whereas you need an entire team of humans to make a movie, and even more with games.
Matt: This is the “how many people does it take to screw in a lightbulb” metric.
Vijay: Well, not all humans are equal…but maybe it’s some combination of both.
Manhattan Project = many people x high IQ
Same for Apollo
Less so for a lemonade stand
Jon: I like adding in some IQ metric because it reflects how AI is assisting human creation. So it’s not a “replacement of bodies” as much as it’s helping current creators be more effective with their time and resources. Whereas previously a person might have written a single page in a weekend, now that person can write an entire short story draft, leading to faster iterations, shorter cycle times, and hopefully a better product.
Matt: Part of the issue is that AI models are really good imposters.
Vijay: So what would need to be added for AGI? What would AI do that we can’t? And would it have to be measured differently?
Matt: I’m not sure yet. We have a sense of computational complexity for most CS problems, which measures how algorithms scale. But the best-known algorithm also becomes a kind of shorthand for the problem itself. It’s a great question: what is actually tractable for AI, but not for humans?
Vijay: Maybe that is more “resolution” than “complexity.” You could watch The Godfather at 240p and it would still be great.
Matt: The Toy Story example is interesting. It suggests that generating images/3D meshes is not super complex. We now have models that can do it. (Of course, a year ago, we would have come up with some rationale for why generating images is hard, too.) But the narrative, message, characters, etc. are an abstract thought process that requires relating to the viewer in a specific, coherent way.
Jon: I agree with @Connie. The key thing is how AI drives better products for developers and consumers. What’s the next “Toy Story” that is uniquely enabled by generative AI that couldn’t be done previously?
Jack: In games, I look at Spellbrush’s Arrowmancer or Latitude’s AI Dungeon as examples that are uniquely enabled by AI.
Jon: Where AI shows promise is in breaking this triangle. Previously, you could develop a good game quickly, but it would cost a lot of money, or a good game on a budget, but it would take a long time. Red Dead Redemption 2 famously took over $500M and 8 years to make, much of which was the art/production budget.
Vijay: I think that basically works if the cost of AI -> 0 (relative to people)
Jon: I think so! Instead of having one masterpiece come out every eight years, we could have smaller teams ship games faster, at a quality bar and scope that’s close.
Jack: I also think there will be entirely new genres of games built! Think of the innovation that Minecraft’s infinite world allowed. We’re starting to see next-level multi-user dungeons akin to AI Dungeon, or Dungeon Master tools that give DMs unlimited content at a moment’s notice.
Jon: Yes, exactly! When it’s easier for consumers to create user-generated content, that drives more creativity and new types of content. Just look at all the weird new video genres that TikTok created, and YouTube before that. We’ve been looking for the “YouTube for Games” for a while. Maybe it actually starts with generative AI.
Matt: This goes back to Vijay’s point around fault tolerance. An indie animation can be totally weird in some ways, whereas 3D models for a game have to be precise.
Jon: Actually, I think fan fiction and mods are some of the most creative work out there. Some of the very best new games have come from mods: DOTA -> League of Legends, H1Z1 -> PUBG, etc.
Matt: So maybe there’s an opportunity in games to use AI in structured ways? i.e. a generic mod framework of some kind.
Jon: Yeah, I love the idea of a game that incorporates gen AI into its modding tools, like an AI-powered level editor.
Jack: What’s fun about an AI-powered level editor is that generative AI is useful across all aspects. A door is an asset with a script to make it open/close, and it has an associated sound, each of which can be informed by what’s around it and the game state (almost like 3D outpainting).
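To make the door example concrete, here is a hypothetical sketch of what a context-aware asset spec in an AI-assisted level editor could look like. The class names, fields, and generator stub are assumptions for illustration only; none of this is a real tool or API.

```python
# Hypothetical sketch of a context-aware "door" asset in an AI-assisted level editor.
# Every name, field, and stub below is illustrative; none of this is a real tool or API.
from dataclasses import dataclass


@dataclass
class SceneContext:
    biome: str                 # e.g. "ancient forest ruin"
    nearby_objects: list[str]  # what surrounds the door
    game_state: dict           # e.g. {"quest": "find_the_key"}


@dataclass
class DoorAsset:
    mesh_prompt: str        # prompt for an image/3D generative model
    open_close_script: str  # behavior code, e.g. drafted by a code model
    sound_prompt: str       # prompt for an audio model


def generate_door(ctx: SceneContext) -> DoorAsset:
    """Condition each component on the surrounding scene ('almost like 3D outpainting')."""
    setting = f"{ctx.biome}, near {', '.join(ctx.nearby_objects)}"
    return DoorAsset(
        mesh_prompt=f"weathered wooden door fitting a {setting}",
        open_close_script=f"# opens only if the player holds the key for quest {ctx.game_state.get('quest')}",
        sound_prompt=f"creaking hinge with reverb appropriate for a {ctx.biome}",
    )


door = generate_door(SceneContext("ancient forest ruin", ["mossy statue", "torch"], {"quest": "find_the_key"}))
print(door.mesh_prompt)
```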
Vijay: Here’s an example of me playing around with fan fiction:
Connie: There are limitations, but that’s darn impressive, @Vijay!
Vijay: The latent space is impressive in some ways, but not as amazing as it seems. It’s clear there are patterns. It’s a lot like a magic trick. When you don’t know how it works, it’s amazing, but after you’ve seen the trick a few times, you see it’s more a lie than magic.
Vijay: Yeah, there are archetypes in stories. I played around with ChatGPT on this: in comparing The Godfather, Entourage, and Curb, there were obvious character archetypes (Michael, Sonny, Fredo, Tom) that got mirrored (Vince, Drama, Turtle, E) over and over again, from show to show. The latent space for characters is clear. And something analogous for plot.
Jon: I heard from a writer once that every great story is 80% old, 20% new. Take familiar stories or archetypes that everyone loves and just add your own 20% unique spin on it.
Vijay: And frankly, how many plots are there? Rocky/Star Wars/etc. are kind of the same movie. Die Hard/Die Hard on a boat/Die Hard on a plane
Jon: ChatGPT could probably predict the next few Marvel blockbusters.
Matt: Now this feels like progress on defining complexity. Size of the (deduplicated) search space.
Vijay: Yeah, the information-theory definition of complexity would be the number of bits, i.e. log_2 of the size of the latent space, and that’s probably about right here.
Matt: That would be interesting to compare across models and problem domains.
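As a toy illustration of “size of the deduplicated search space, measured in bits,” the sketch below prices out a formulaic plot as a handful of discrete choices. Every count is invented purely for illustration and is not an estimate of any real model’s latent space.

```python
# Toy calculation: complexity as log2 of the size of a deduplicated search space.
# All counts below are invented purely for illustration.
import math


def bits(num_distinct_choices: int) -> float:
    """Information needed to pin down one option out of N equally likely ones."""
    return math.log2(num_distinct_choices)


# Pretend a formulaic plot is: one of Polti's 36 dramatic situations,
# one of 12 protagonist archetypes, and one of 20 setting swaps ("Die Hard on a plane").
plot_bits = bits(36) + bits(12) + bits(20)
print(f"'formulaic plot' search space: ~{plot_bits:.1f} bits")  # about 13 bits

# For contrast, a raw 512x512 RGB image counted pixel by pixel (an upper bound, not a latent size).
image_bits = 512 * 512 * 3 * 8
print(f"raw 512x512 RGB image: {image_bits:,} bits")
```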
Matt: I’ll take that bet. I say AI friends won’t become much more popular than they are now.
Connie: Sometimes a best friend’s role is to tell you what you don’t want to hear…
Jack: People today are spending more time alone.
Matt: The solution to loneliness can’t possibly be to download another app on your phone and talk to the computer.
Jon: The vision is that AI friends become incorporated into your daily life. They’re just another contact whom you can text, call, or even bring into the real world through AR filters.
Connie: There’s a top trending dating app in China right now where you can feed the app some text messages and it spits out really good replies.
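A minimal sketch of that kind of reply-suggestion feature, assuming the pre-1.0 `openai` chat completions endpoint; the model, prompt, and message history are illustrative, and this is not the actual app’s implementation.

```python
# Sketch: suggest a reply given a few recent messages.
# Assumes the pre-1.0 `openai` package; the model, prompt, and history are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

history = [
    "Them: long day, my boss rescheduled everything twice",
    "Them: anyway, how was that hike you mentioned?",
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Suggest one warm, natural text-message reply."},
        {"role": "user", "content": "\n".join(history)},
    ],
)

print(response["choices"][0]["message"]["content"])
```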
Jack: One of the best applications of AI friends is that they lower the barrier to entry for social interaction for the socially anxious. Essentially you get “social reps.”
Matt: I can totally see the benefits. It just feels like we’re sick patients: when one drug (social media) creates a side effect, we take another one to fix it.
Matt: That’s fascinating. Think there are some common tropes in psychoanalysis? FreudBot?
Jack: Even the early versions of chatbots, like ELIZA, were built to mimic psychotherapists. They’re designed to mirror your words and ask you important questions.
Vijay: Yeah, the question is whether people would care that it’s a computer. So far, with even far simpler tech, people seem OK; they just need connection.
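For reference, the “far simpler tech” really was simple. An ELIZA-style exchange, which mostly reflects the user’s words back as a question, can be sketched in a few lines; the patterns below are a tiny illustrative subset, not Weizenbaum’s original script.

```python
# Tiny ELIZA-style sketch: reflect the user's words back as a question.
# The patterns here are a small illustrative subset, not the original 1966 script.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}


def reflect(text: str) -> str:
    """Swap first-person words for second-person ones ('my day' -> 'your day')."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())


def respond(user_input: str) -> str:
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Tell me more about that."


print(respond("I feel like nobody listens to me"))
# -> Why do you feel like nobody listens to you?
```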
Jon: I imagine another benefit of AI therapy is that your therapist is available 24/7. No need to wait a week to book a session.
Vijay: Yeah, availability is key. Also asynchronous communication is huge for many people.
Matt: Affordability, too
Jon: Right, the same benefits apply to coaching and education. An AI coach could be available whenever you needed it.
Connie: I think AI personalities will show up in customer service and support, tutoring, nursing, coaching, and other things where 24/7 coverage and automated follow-ups are important. I definitely think the market will pull us here. In areas where there are labor shortages, someone will develop something with AI that can help.
Jack: I predict there will be a massive class divide between those who utilize AI tools and those who do not. Gen Alpha will be best suited for it, given their plasticity. Being able to use AI will be the most important skill of the decade, and it will require a complete restructuring in how most people think.
* * *
Connie Chan is a General Partner at Andreessen Horowitz where she focuses on investing in consumer technology.
Matt Bornstein is a partner on the enterprise team at Andreessen Horowitz, where he focuses on new data systems and technologies underpinning artificial intelligence.
Vijay Pande is the founding general partner of the Bio + Health team at Andreessen Horowitz, focused on the cross-section of biology and computer science.
Jonathan Lai is a general partner and founding investor of CFI GAMES at Andreessen Horowitz, where he focuses on investing at the intersection of games and consumer, social, web3, infrastructure, and fintech.
Jack Soslow is a partner on the CFI GAMES team at Andreessen Horowitz, where he focuses on games infrastructure, AI, and AR/VR.