Raising Health

Using AI to Take Bio Farther with Jakob Uszkoreit

Vijay Pande, Jakob Uszkoreit, and Olivia Webb

Posted January 11, 2023

Note: This episode was recorded as part of the Bio Eats World podcast, now known as Raising Health.

In this episode, Vijay Pande speaks with Jakob Uszkoreit, the cofounder and CEO of Inceptive. Together, they discuss all things AI.

We’re publishing the transcript in full below, in case you want to read along.

***

Olivia Webb: Hello, and welcome to Bio Eats World, a podcast at the intersection of bio, healthcare, and tech. I’m Olivia Webb, the editorial lead for Bio + Health at CFI. In this episode, we talked with Jakob Uszkoreit, formerly of Google Brain and the cofounder of Inceptive. Jakob is also one of the authors of the seminal AI research paper Attention is All You Need, which we’ll link in the show notes. Jakob sat down with Vijay Pande, founding partner of CFI Bio + Health, to talk about all things AI: from his time at Google Brain, to how humans and computers process language, to Inceptive’s belief in the promise of RNA, and how Jakob believes we’re entering inflection point territory with AI.

It’s an episode you don’t want to miss—but it’s also a graduate level discussion on AI, so we’ll be publishing a transcript alongside the episode. Let’s get started.

Applicable algorithms

Vijay Pande: So Jakob, thank you so much for being on Bio Eats World. It’s great to have you.

Jakob Uszkoreit: Great to be here. Thank you for having me.

Vijay Pande: Especially since you have such a fascinating story as a computer scientist and entrepreneur and founder, I’d love for you to walk us through your career journey, starting wherever you want, but what got you to Google Brain is probably a nice place to start.

Jakob Uszkoreit: I remember to some extent really, uh, encountering this problem of machine learning, maybe in the broadest sense, [and] language understanding, somewhat more specifically, as an issue that runs in the family. So my dad is a computer scientist and computational linguist and, you know, growing up things like Turing machines weren’t necessarily entirely foreign concepts fairly early on.

Vijay Pande: Yeah, it sounds like it might have been dinner table conversation, in fact.

Jakob Uszkoreit: They were dinner table conversations. And so especially finite automata, and how they actually relate to vending machines, were, you know, common topics. The older I got, the more I wanted to ensure that I actually ended up doing something different. And so I ended up looking quite a bit into pure math and related areas there. [I] really focused quite a bit on optimization, on optimization algorithms, algorithms overall, and, more broadly, complexity theory, before realizing that maybe that wasn’t the most practical thing and the most applicable thing, which, you know, kind of has become a bit of a red thread throughout my career. And then I literally stumbled upon a Google internship back in 2005.

I was given a few different options [as to] what kind of research projects to join, [and] among them were different computer vision efforts, but also the machine translation project that basically became Google Translate. Right around that time, or just a little bit prior to that, [Translate] launched its first product that was really powered by systems developed internally at Google. And, in a certain sense much to my dismay, it turned out that Google Translate at the time had by far the most interesting large-scale algorithms problems.

At the time, it was really interesting to see, because what convinced me to then abort my PhD and actually come back to Google after that internship was that it became evident in my time there that, if you wanted to work on something in machine learning that was not only intellectually and scientifically interesting, exciting, challenging, and stimulating, but that also had really high hopes of moving the needle right away in industry and in products, there were, around that time, not very many places in the world where you could do it. And they certainly were not academic labs at the time; they were very much places like Google. And Google, there and then, was actually very much at the forefront of this. And so, you know, at the time I thought it was amazing to run my first large-scale clustering algorithms on a thousand machines, and it was just absolutely impossible to do so elsewhere.

Vijay Pande: When you talk to our senior colleagues, there’s much romanticism of the Bell Labs heyday, and I’ve always wondered whether Google Brain may be one of the closer variants today. What was the environment like?

Jakob Uszkoreit: So I feel actually that between that time and when Google Brain really got started, which is about five years later, there was a significant shift. Before Brain got started, Translate was much more driven by products that truly made a difference than I believe Bell Labs was. And we had a good number of Bell Labs alumni among us, of course, but it was much more motivated by direct applicability.

Which to me was actually really amazing to witness: how machine translation turned from something that was, quite literally, good for laughs at a party. If they asked you, where do you work? And you said, Google. And then they said, what do you do there? And they were impressed at first. And then you said, oh, I work on Google Translate. And then they laughed and asked, will this ever work? I don’t think so. But then, at the same time, I would say that wave of machine learning, the pre-deep-learning renaissance wave of machine learning, started to plateau. You know, deep learning was something I’d done previously at school, and I liked it, but it was not something that you could really apply in those days.

Vijay Pande: Yeah, especially because you didn’t have the scale in academia to do the calculations you’d need to do.

Jakob Uszkoreit: Certainly not in academia, but even at Google. Even though at the time, in Translate, the most interesting distinguishing feature was, I would say, that we really believed in the absolute power of data at the end of the day.

So we were trying not to make more complicated, more sophisticated algorithms, but instead to simplify and scale them as much as possible and then enable them to train on more and more data. But we just hit a ceiling there, with the simplifications you had to make in order to scale them to what was, at the time, Google’s scale; that was really our aim. But then, and that was kind of one of these pendulum movements swinging back out of academia, a bunch of folks with a bunch of GPUs brought deep learning back, in a certain sense, with a vengeance. And suddenly the environment adapted, because it was unclear what the direct path into production at scale would be.

And so the entire environment shifted from being more application and product oriented, into something that at least felt for quite a few years, much more academic. It’s still a little different than academic labs because we could afford way more GPUs, but much more in line, in a certain sense, with this idea of, [being] driven by publications, driven by leaps rather than steps. [It] turned into a very, very productive—and really amazing—but much more open-ended [environment].

Attention is all you need

Vijay Pande: Well, you know, speaking of publications, a natural place to think about is when you and the team published Attention is All You Need. And, you know, that’s been such a seminal paper for so much of generative AI since that was when the transformer algorithm was first laid out.

Jakob Uszkoreit: Two years prior to publishing that paper, we realized [that] what was then state-of-the-art for problems like machine translation, or [what] was emerging as state-of-the-art, namely LSTM- or RNN-based Seq2Seq, as a training paradigm and a setup overall, but also as a network architecture, had incredible issues even on the most modern GPUs at the time when it came to scaling in terms of data.

For example, the very first neural machine translation system that Google launched, GNMT, was actually, to my knowledge, never really trained on all the training data that we had available, that we had previously mined for the phrase-based statistical systems. And that was because the algorithms just didn’t scale well in terms of the amount of data. So, long story short, we were looking, at the time, not at machine translation, but at problems where, internally at Google, we had even larger amounts of training data available. These were problems that came out of search, where you have basically another three or four orders of magnitude more data. You know, there are now not billions of words anymore, but easily trillions. And suddenly we encountered this pattern where simple feedforward networks, even though they made ridiculous simplifying assumptions, such as, it’s just a bag of words, or it’s just a bag of bigrams, and you kind of average them and you send them through a big MLP, actually outperformed RNNs and LSTMs, at least when trained on more data.

[And they were] n-times faster, easily 10, 20 times faster, to train. And so you could train them on way more data. In some cases, [they were] a hundred times faster to train. And so we kept consistently actually ending up with models that were simpler and that couldn’t express or capture certain phenomena that we know are definitely common in language.
And yet, you know, bottom line, they were cheaper to train and [they] performed better.
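[For readers following along, here is a minimal, illustrative sketch of the kind of model Jakob describes: embed the tokens, throw away word order by averaging, and push the result through a feedforward network. It assumes PyTorch and a toy vocabulary; the class name and sizes are hypothetical, and this is not any of Google’s actual systems.]

```python
# Minimal sketch (illustrative only): an averaged bag-of-words model fed into an MLP.
import torch
import torch.nn as nn

class BagOfWordsClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, sequence_length) of integer word ids
        embedded = self.embed(token_ids)   # (batch, seq, embed_dim)
        averaged = embedded.mean(dim=1)    # word order is thrown away here
        return self.mlp(averaged)          # logits

# Any reordering of the same tokens produces the same averaged vector,
# which is exactly the simplifying assumption being discussed.
model = BagOfWordsClassifier(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (4, 12))  # a batch of 4 toy "sentences"
print(model(tokens).shape)                  # torch.Size([4, 2])
```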

Vijay Pande: Let’s just give an example for people who aren’t familiar. So, for a bag of words, if I said, show me all the restaurants nearby except for Italian, it’ll show you all the Italian restaurants, right?

Jakob Uszkoreit: Exactly. In fact, what you said can probably be reordered, to show me all Italian restaurants except nearby. It’s just a soup of words and you can reorder it into something that definitely means something different.

Vijay Pande: Yes.

Jakob Uszkoreit: And then you approximate getting at the structure and getting at the more global phenomena by putting in bigrams. So basically groups of two consecutive words and things like that. But it’s clear that, certainly in languages like German, where you can basically put the verb into the very end of a sentence…

Vijay Pande: And it changes the whole meaning, right?

Jakob Uszkoreit: Changes all meaning, exactly, yes. No matter what the size of your n-grams—or your little word groups—are, you will ultimately not succeed. And it became clear to us that there has to be a different way that doesn’t require the RNN’s recurrence in length, or recurrence in sequence of, say words or pixels, but that actually processes inputs and outputs in a more parallel way and really ultimately cater[s] to the strengths of modern accelerator hardware.
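[A tiny illustration of the limitation being discussed, not taken from the conversation itself: two sentences built from exactly the same words have identical bag-of-words representations even though they mean different things, and fixed-size n-grams only push the problem out a little. The sentences below are stand-ins, standard library only.]

```python
from collections import Counter

def bag_of_words(sentence: str) -> Counter:
    return Counter(sentence.lower().split())

def bag_of_bigrams(sentence: str) -> Counter:
    words = sentence.lower().split()
    return Counter(zip(words, words[1:]))

a = "the dog bit the man"
b = "the man bit the dog"  # same words, very different meaning

print(bag_of_words(a) == bag_of_words(b))      # True: identical word counts
print(bag_of_bigrams(a) == bag_of_bigrams(b))  # False here, but long-range
# dependencies (e.g. a German verb pushed to the very end of a sentence)
# still escape any fixed n-gram window, no matter how large n is.
```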

Vijay Pande: Think about it: a bag of words is words in random order. An LSTM, or long short-term memory network, maybe gives you some sort of [ability to] look [into the] past a bit, right? But the transformer does something radically different. How does the transformer take that to the next level?

Jakob Uszkoreit: There are always two ways of looking at this. One is through the lens of efficiency, but the other way that’s maybe a bit more intuitive is to look at it in terms of, you know, how much context can you maintain. And like you said, LSTMs, or recurrent neural networks in general, they move through their inputs step-by-step, broadly speaking, and while they, in theory, are able to maintain arbitrarily long context windows into inputs—the past—what happens in practice is that it’s actually very difficult for them to identify events, say words or pixels, that are very distant in the past that really affect the meaning at the end of the day. They tend to focus on things that are in the vicinity.

The transformer, on the other hand, basically just turns that on its head and says, no, at every step what we’re doing is not moving through the input. At every step, we’re looking at the entirety of the input or output, and we’re basically incrementally revising representations of every word or every pixel or every patch or every frame of a video, as we basically move, not in input space, but in representation space.

Vijay Pande: Yes.

Jakob Uszkoreit: And that idea had some drawbacks in terms of how you would fit it onto modern hardware, but compared to recurrent neural networks it primarily had advantages, because now you were not actually bound to sequentially compute representations, say, word for word. What you were bound by is, really, how good should they be? How many layers of this kind of parallel processing of all positions, where all pairs of words or all pairs of image patches can interact right away, can you use? How many revisions of these representations can I actually “afford”?
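[To make this concrete, here is a minimal sketch of single-head self-attention, the mechanism at the core of the transformer Jakob is describing: every position attends to every other position in parallel, and stacking layers “revises” the representations. It assumes PyTorch, reuses one set of random weights for brevity, and omits multi-head attention, the feedforward sublayers, residual connections, and layer normalization of the actual architecture.]

```python
# Illustrative toy: one round of single-head self-attention.
import torch
import torch.nn.functional as F

def self_attention(x: torch.Tensor, wq, wk, wv) -> torch.Tensor:
    # x: (sequence_length, model_dim) representations of every token
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / (k.shape[-1] ** 0.5)  # every pair of positions interacts at once
    weights = F.softmax(scores, dim=-1)      # how much each token attends to every other
    return weights @ v                       # revised representation of every token

d = 64
x = torch.randn(10, d)                       # 10 tokens, each a d-dimensional vector
wq, wk, wv = (torch.randn(d, d) for _ in range(3))

# Stacking such layers is the "how many revisions can I afford?" question;
# a real transformer uses fresh parameters per layer plus residuals and layer norm.
for _ in range(4):
    x = self_attention(x, wq, wk, wv)
print(x.shape)                               # torch.Size([10, 64])
```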

Vijay Pande: What’s really interesting too is that obviously the inspiration is natural language, but that there are many structures that you’d want to input where you don’t want to just study it sequentially, like a DNA sequence—and we’ll get into biology soon enough—that you want to have a model of the whole thing.

It’s kind of funny with language. When I’m speaking or when I’m listening to you, I am processing each word, but eventually I have to not just tokenize the words into individual meanings, but I have to sort of develop this representation. Yes? I wish we could do it the way transformers do. And maybe the trick is that LSTMs are closer to the way we humans do it, and transformers are maybe just the way we should do it, or the way I wish we could do it.

Jakob Uszkoreit: Superficially, I think that is true, although at the end of the day—introspective arguments like these are subtle and tricky.

So I guess many of us know this phenomenon where you are shouting or yelling with someone, trying to communicate something across a busy street. And so you hear something they say, and it’s not a short sequence of words, and at first you basically don’t understand anything. But then, about half a second later, you suddenly understand the entire sentence. It actually hints at the fact that, while we are forced to write and utter language in a sequential manner, just because of the arrow of time, it is not so clear that our deeper understanding really runs in that sequential manner.

Building a team

Vijay Pande: If anyone studies even just the Attention is All You Need paper or how a transformer works, there are a lot of parts to it. And it seems like it’s probably now gone past the point where one person could effectively do that work by themselves in any short period of time.

Jakob Uszkoreit: Absolutely.

Vijay Pande: So now you really need a team of people to do these types of things. What’s the sociology of that? How does something like that come about?

Jakob Uszkoreit: This particular case, I personally feel, is a really wonderful example of something that fits a more, let’s say, industrial approach to scientific research, exceptionally well. Because you’re exactly right. This wasn’t the one big spark of imagination and of creativity that sets it all off.

It was really a whole bunch of contributions that were all necessary, ultimately. Having an environment, a library—which later also was open sourced, by the name of Tensor2Tensor—that actually included implementations. And not just any implementations, but exceptionally good implementations, fast implementations of all sorts of deep learning tricks.
But then also all the way to these attention mechanisms that came out of previous publications—like the decomposable attention model [that was] published before—but then were actually combined with improvements and innovations, inventions around optimizers. You won’t find people, I think, who truly are among the world’s leading experts in all of these simultaneously and who are really also similarly passionate about all of these aspects.

Vijay Pande: And especially there’s the initial idea, there’s the implementation of it, there’s the scaling of it. To reach that type of scale anywhere other than at a large company right now is probably not feasible, just because of the cost.

Jakob Uszkoreit: I would think actually maybe the large company aspect is not quite that crucial.

Vijay Pande: Yeah?

Jakob Uszkoreit: The company aspect is one that I would value higher. The large company certainly doesn’t hurt if you need thousands and thousands of TPUs or GPUs or what have you. Deep pockets never hurt for this kind of stuff. But at the same time, I believe the incentive structure around this kind of explorative research in industry is just much better suited for these kinds of projects. And I think that’s actually something we’re seeing, looking at generative AI projects across the board.

Vijay Pande: Yeah. And to your point, it could be a startup.

Jakob Uszkoreit: It could definitely be a startup. And I think we are seeing now that using accelerator hardware is becoming at least more affordable. And there are startups that are very much competing when it comes to generative AI targeted at image generation or text generation.

Jumping to life sciences

Vijay Pande: I’d love to transition into what you’re doing now. You’re the CEO of Inceptive, a company that applies AI to RNA biology for RNA therapeutics. How did you transition into the life sciences? Superficially, going from talking about language models around the dinner [table] and then around the Google cafeteria…it seems like quite a jump to the next generation of therapeutics. How did that all come about?

Jakob Uszkoreit: I couldn’t agree more. It’s an amazing learning experience, from my end. For quite a while now, biology has struck me as the kind of problem where it doesn’t seem inconceivable that there are bounds to how far we can go in terms of, say, drug development and direct design with traditional biology as the backbone of how we go about designing, or discovering methods to design, the drugs of the future.

It seems that deep learning, in particular at scale, is, for a bunch of reasons, potentially a really apt tool here. And one of those reasons is actually something that is often not necessarily billed as an advantage, which is the fact that it’s this big black box that you can just throw at something. And it’s not quite true that you can just throw it; you do have to know how to throw it.

Vijay Pande: And it’s not exactly black either. We can argue about that later.

Jakob Uszkoreit: Yes, exactly. Exactly. But, at the end of the day, coming back to the analogy to language, we’ve never managed to fully understand and conceptualize language, in that sense, to the extent that you could claim, oh, I will now go and tell you this theory behind language, and afterwards you will be able to implement an algorithm that “understands” it. We’ve never gotten to that point. Instead, we had to stop, take a step back, and, in my opinion, to some extent admit to ourselves that that might not have been the most pragmatic approach. Instead, we should try approaches that don’t require that level of conceptual understanding. And I think the same might be true for parts of biology.

Using AI to take bio farther

Vijay Pande: It’s interesting, we’ve talked about things like this before. You think about the last century, [which was] very much the century of physics and calculus. There’s a certain mentality there, where you can have a very elegant simplification of things: a single equation, like Einstein’s field equations, that describes so much, and it’s a very simple equation in a very complex language. You’ve talked about how that Feynman approach, almost like the sociology of physics, may not apply here in biology, right?

Jakob Uszkoreit: It may not apply, at least for two reasons I can see at this point. Number one is that there are too many players involved. And while it’s true that maybe we can just reduce it all to Schrödinger’s equation and just solve it, that happens to be not only computationally intractable, but we would also have to know about all these different players, and we currently do not. Not even close. So that’s one aspect.

And then the second one is basically the intractability computationally, where the reduction, in a certain sense, has gone so far that, while it brings it all back to one single thing, it doesn’t help us because our computational approaches to basically use those fundamentals in order to make predictions are just too slow to make those predictions for systems large enough to really matter to life.

Vijay Pande: Yeah. So it’s not an n-body equation, and yet there’s still a sense of formalism, maybe a more data-driven formalism or a more Bayesian formalism. How does that feed into what you would want to do? How does that feed into applying AI and other types of new algorithms?

Jakob Uszkoreit: I think there are a couple of different aspects. At the end of the day, one of the big takeaways, in my opinion, from what we’re currently seeing in generative AI is that we no longer have to train on data that is not only perfectly clean but also drawn precisely from the domain and from the kinds of tasks that you would later like to tackle. Instead, it might actually be more beneficial, or even the only way we’ve so far found, to try to train on everything you can find that’s even remotely related, and then use the information effectively gleaned from those data to end up with so-called foundation models, which you can then fine-tune to all sorts of specific tasks using much smaller, much more tractable amounts of cleaner data.
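[A hedged sketch of the pretrain-then-fine-tune recipe described here: a large pretrained encoder is frozen and a small task head is fit on a much smaller, cleaner labeled set. The encoder, sizes, and data below are random stand-ins, not Inceptive’s models or data; PyTorch is assumed.]

```python
# Illustrative only: fine-tune a small head on top of a frozen "foundation" encoder.
import torch
import torch.nn as nn

pretrained_encoder = nn.Sequential(        # stand-in for a large pretrained model
    nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256)
)
for p in pretrained_encoder.parameters():  # freeze the foundation model
    p.requires_grad = False

task_head = nn.Linear(256, 1)              # small task-specific head
optimizer = torch.optim.Adam(task_head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# A small, clean, task-specific dataset (synthetic here).
features = torch.randn(64, 512)
labels = torch.randint(0, 2, (64, 1)).float()

for step in range(100):                    # train only the head
    with torch.no_grad():
        representations = pretrained_encoder(features)
    loss = loss_fn(task_head(representations), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```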

I think we slightly underestimate what we have to know about the phenomena at large. In order to build a very good large language model, you have to understand that there is this thing called the internet and that it has a lot of text in it. You have to understand quite a bit, actually, about how to find that text, what isn’t text, and so forth, in order to then basically distill from it the training data that you then use.

I believe there will be very directly analogous challenges around biology. The big question is: what are experiments that we can scale such that we can observe life at sufficient scale with just about enough fidelity—but much less specificity while keeping in mind the problems that you’re trying to solve eventually—such that we can basically take from that the data that we need in order to start building these foundation models, that we can then use, fine-tuned and specifically engineered, to really approach the problems that we want to tackle.

The data generation part is certainly one of them. Architectures, effectively having models and network architectures that mimic what we do know about, say, the physics underneath, will still remain an incredibly powerful way of actually saving computation and also of reducing the still enormous appetite for data that these models will have to have to a feasible level. And so one thing that I believe is actually interesting to note is that a lot of the current work applying models, say transformers, that have been found to scale pretty well in other modalities and domains (language, vision, image generation, etc.) to biology basically ignores the fact that we know there is such a thing as time, and that the laws of physics, at least to the best of our knowledge, don’t seem to just change over time.

The process of a protein folding, ignoring the fact that there are tons and tons of players, chaperones and whatnot, is actually, in a certain sense, a fairly arbitrarily separated problem from the remainder of protein kinetics. It’s just as much kinetics as the remainder of the kinetics, or the remainder of the life of that protein, of that molecule. So why do we try to train models specifically for one and, potentially at least, ignore data that we might have about the other? In this case, maybe more specifically: do some of the protein structure prediction models that we have today already learn something about kinetics implicitly, because they slowly start to embrace, you know, the existence of time?

Developing new architectures

Vijay Pande: One of the interesting things, thinking about where you stand right now, is that, with a few rare exceptions, most applications of deep neural networks or other types of AI in biology feel like they’re taking something invented somewhere else and carrying it over. Like we’ll use convolutional neural nets for images. Maybe for small molecules…in my lab at Stanford, we used graph neural networks and several convolutional neural networks. But to really develop an algorithm explicitly for the biological problem is pretty rare. And I’ve always assumed it was because it’s just hard to have the skillsets of a team strong in the biology domain and in the computer science domain. But I’m curious to get your take. Or is it just rare to develop new architectures in the first place?

Jakob Uszkoreit: Well, I think, at the end of the day, what we’re seeing is that the new architectures, while motivated by specific problems, if they truly make a difference, then they tend to also be applicable elsewhere. That doesn’t, on the other hand, mean that, on the way there, choosing carefully what the motivating applications and domains are wouldn’t make a huge difference. And I think it certainly does.

I feel one of the key challenges here is really that we’re not yet in a regime in biology where we have droves and droves of data, even though, compared to what we used to have a while ago, it’s amazing. But we’re not in that regime yet where that’s just sitting around on the equivalent of the web, and we can filter it a little bit, download it, and be done with it. But instead, I think we have to create it to a reasonably large extent. And that will not be done by deep learning experts, at least not by most of them.

And I believe that has to happen in lockstep with really understanding the peculiarities of said data, right? The kinds of noise that you encounter there. The fact that these data are actually created in very large-scale, pooled, high-throughput experiments, but still experiments that are run on different days by different experimenters, and so on and so forth. And where the folks with more of a deep learning background work closely enough with folks with a biology background, and learn enough about what we know about the underlying phenomena, [they will] basically be inspired to try interesting new approaches.

Vijay Pande: Well, I loved when you talked about just the example of the Attention is All You Need paper, about how you wanted to get this diverse group of people whose passions were, you know, fairly orthogonal from each other. And in a sense, when you’re doing this in biology and especially for what you’re doing at Inceptive, you also have to put all this work into generating the data. And generating the data really means, to be very explicit, running biological experiments at scale. The input part itself is very expensive and very technical, and as you said, has so many ways of going wrong. But it sounds like you’re building upon the culture that you’ve done before and now it’s just more experts with different passions coordinating in an analogous way.

Jakob Uszkoreit: You really need [the right] people [for] that. This is, as far as I can tell, the most promising avenue: [it is to] not aim for, in a certain sense, a pipeline model, where certain data are created in the lab, given the best of our knowledge about the underlying aspects of life, and then existing deep learning approaches are run on them and tweaked. But instead to really have folks who, in a certain sense, might be among the first people working in a discipline that currently doesn’t really have a great name yet.

Maybe the least common denominator is curiosity that extends beyond what you know, what you’ve learned before and what you’ve maybe spent most of your time doing. We find that just like in very many other areas, what we are really after is a set of people with very diverse backgrounds, but who share curiosity.

Where is AI going?

Vijay Pande: Where do you think AI is right now for those harder problems, for drug design, healthcare, and so on? What has to be done? When will it get there?

Jakob Uszkoreit: I would expect, and it’s always very dangerous to make predictions about the future, but I would be very surprised if, within the next three years, we didn’t actually start to see an [inflection] point when it comes to the real-world effects of machine learning, of large-scale deep learning, in drug development and drug design. Where exactly they’ll be first is hard to say, of course, but I believe that a lot of them will happen around RNA, RNA therapeutics, and vaccines. That will certainly not be the only area affected by this, but I definitely think we’re headed into inflection point territory.

Vijay Pande: You made an interesting point. What is different about RNA? Because I think it’s particularly interesting, not just that you went from Google Brain into biology, but you went into RNA specifically. What attracts you to RNA, especially maybe from an AI or ML point of view?

Jakob Uszkoreit: One thing that’s interesting about RNA is the combination between, as we have seen, very broad applicability, although it’s still narrow in the sense of a single indication, but just looking at this wave of approval processes that is starting and has started, it’s pretty clear that the applicability is very, very broad, coupled with, and this is a bit ambiguous, a structurally simple problem. And it’s structurally simple not in the sense that RNA structure prediction is simple, but in the sense that it’s a biopolymer with four different bases. We’re not talking about 20-plus amino acids. And it’s something that can be produced fairly effectively.

There are some challenges there, but synthesis is something that can scale and is scaling rapidly, and these things come together to really enable this rapid feedback loop that, I guess, is often alluded to but very rarely, at least from what I know, actually implemented and implementable at the end of the day.

Vijay Pande: Yeah, arguably probably it’s a more rapid feedback loop, especially for the way you go after it.

Jakob Uszkoreit: Yes. And given that I believe we need to create the lion’s share of the data for training the models that we’re training, we at Inceptive are really investing in creating such data at scale. And I would say at comparatively fairly massive scale, given that RNA seems to be by far the best combination when it comes to structural simplicity but also the scalability of synthesis and of this experimentation. There’s huge potential here that so far has been untapped.

Vijay Pande: Yeah, and I think especially potentially the ability to have these rapid cycles, both sort of preclinical and therefore getting to the clinic faster and being in the clinic [for a shorter period of time].

Jakob Uszkoreit: Absolutely. That’s really what we’re hoping for. We’re also seeing maybe early hints indicating that that might be the case, which we’re, of course, really, really excited about.

Vijay Pande: Thinking about the last 10 years, it’s been amazing, you know, 2012 to now. What do you think the next 10 years look like? Where do you think we are 10 years from now with AI? Either broadly, or especially for bio?

Jakob Uszkoreit: I think if it’s really true that we are entering this inflection point territory, when we look back 10 years from now, it’ll seem like a revolution at least as large and as expansive as the one that we think we’ve seen in the last 10 years. At the very least. Now I think there will be a crucial difference, and that is that it’s not so clear exactly how broadly the revolution that we have been witnessing in the last 10 years affects everybody’s lives. There are certain areas, search engines or assisted writing, etc., where it’s evident, but it’s not clear how broadly applicable this revolution is. I believe it is very much so, but we don’t see it yet. I think the revolution that we’re going to see specifically around bio over the next 10 years, or that we’re going to be looking back at 10 years from now, will really differ in terms of its profound impact on all of our lives.

Even just setting aside drug design and discovery applications, there are such amazing applications in and around scientific discovery, where you could now imagine that, with a web interface, you can basically have molecules designed that, in certain organisms, are with very high likelihood going to answer certain questions, producing more reliable readouts than, you know, what you previously could get at. So even just leaving out the entire complexity of how this will ultimately affect patients and everyone, it is pretty clear, I think, that these tools will just rapidly accelerate fields like biology.

Vijay Pande: That seems like a great place to end it. Thank you so much, Jakob, for joining Bio Eats World.

Jakob Uszkoreit: Thank you so much for having me.

Olivia Webb: Thank you for joining Bio Eats World. Bio Eats World is hosted and produced by me, Olivia Webb, with the help of the Bio + Health team at CFI and edited by Phil Hegseth. Bio Eats World is part of the CFI podcast network.

If you have questions about the episode or want to suggest topics for a future episode, please email bioeatsworld@a16z.com. Last but not least, if you’re enjoying Bio Eats World, please leave us a rating and review wherever you listen to podcasts.

Please note that the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any CFI fund. For more details, please see CFI.com/disclosures.

***

 

More About This Podcast

Biology and the state of healthcare are undergoing radical shifts.
Raising Health delves into dialogues with scientists, technologists, founders, builders, leaders, and visionaries as they explore how AI, engineering, and technology elevate health to new heights and create a system of enduring health for all.
