CFI Podcast

Brains, Bodies, Minds … and Techno-Religions

Yuval Harari, Kyle Russell, and Sonal Chokshi

Posted February 23, 2017

Evolution and technology have allowed our human species to manipulate the physical environment around us — reshaping fields into cities, redirecting rivers to irrigate farms, domesticating wild animals into captive food sources, conquering disease. But now, we’re turning that “innovative gaze” inwards: which means the main products of the 21st century will be bodies, brains, and minds. Or so argues Yuval Harari, author of the bestselling book Sapiens: A Brief History of Humankind and of the new book Homo Deus: A Brief History of Tomorrow, in this episode of the CFI Podcast.

What happens when our body parts no longer have to be physically co-located? When Big Brother — whether government or corporation — not only knows everything about us, but can make better decisions for us than we could for ourselves? That’s ridiculous, you say. Sure… until you stop to think about how such decisions already, actually happen. Or realize that an AI-based doctor and teacher will have way more information than their human counterparts because of what can be captured, through biometric sensors, from inside (not just observed outside) us.

So what happens then when illusions collide with reality? As it is, religion itself is “a virtual reality game that provides people with meaning by imposing imaginary rules on an objective reality”. Is Data-ism the new religion? From education, automation, war, energy, and jobs to universal basic income, inequality, human longevity, and climate change, Harari (with CFI’s Sonal Chokshi and Kyle Russell) reflects on what’s possible, probable, pressing — and mere decades, not centuries, away — when man becomes god… or merges with machines.

Show Notes

How humanity is focused on changing itself rather than the external world [0:42]

The illusion of the self and the problem of tribalism [6:32]

How collecting personal data could lead to hyper-personalization [12:16], and even “religions” based on technology [20:53]

The future of work and UBI [25:44], and what humans will be good at in the future [32:17]

Political questions [36:27] and a look to the future [39:51]

Transcript

Sonal: Hi, everyone. Welcome to the “CFI Podcast.” I am Sonal, and we’re very honored today to have as our special guest Yuval Harari, who teaches in the Department of History at the Hebrew University of Jerusalem and specializes in macrohistory and the relationship between history and biology. He’s the author of “Sapiens,” which is a mindbogglingly good book, and now has a new book just out, “Homo Deus.” Did I pronounce that properly?

Yuval: I use the Latin pronunciation, which is homo de-oos.

Sonal: De-oos. Okay.

Yuval: But you can say homo dee-uhs.

Kyle: I say the really bad, like, non-accent dey-uhs.

Yuval: Dey-uhs is great. Yeah.

Inward human evolution

Sonal: That, by the way, was Kyle’s voice, who is also joining us on this podcast. He’s on the deal and investing team and covers a lot of the technology like drones, AI, and a bunch of other stuff. So just to get things started, we talk a lot about innovation and technology, and I’ve always wondered what’s the simplest definition of technology and innovation. And reading your book, “Sapiens” in particular and then “Homo Deus,” the thing that really struck me is that technology is the greatest accelerator humankind — in fact, the evolution of all the species on earth — has ever seen, because it allowed us to essentially bypass evolutionary adaptations, where we could become seafarers without having to grow gills like a fish, for example. And so that is an incredibly powerful idea, but that’s non-directional. Given that the first phase of your work was essentially about the organic history of our species, and your new book shifts to a more inorganic version, I’d like to hear what drove that shift.

Yuval: Well, I think that so far, for thousands of years, humans have been focusing on changing the world outside us, and now we are shifting our focus to changing the world inside us. We have learned how to control forests, and rivers, and other animals, and whatever, but we had very little control over what’s happening inside us, over the body, over the brain, over the mind. We could stop the course of a river, but we could not stop the body from getting old. If a mosquito annoyed us, we could kill the mosquito. But if a thought annoys us, we don’t know what to do about it. Now, we are turning our innovative gaze inwards. I think the main product of the 21st century will be bodies, and brains, and minds. We are learning how to produce them. And as part of that, we may also — for the first time, not only in history but in the entire evolution of life — learn how to produce non-organic life forms.

Sonal: That’s amazing.

Yuval: So after four billion years of evolution of organic life forms, we are really on the verge of creating the first inorganic life forms. And if this happens, it’s the greatest revolution in the history of life since the very beginning of life.

Sonal: What do you mean by inorganic life forms? Because in your book, you draw a distinction between biological, cyborg, and non-organic. Are we just gonna be, like, living in a network? Is that our identity, then? Is that who we are? Like, what do you see?

Yuval: It could be something that exists only in cyberspace. I mean, you hear a lot of talk about uploading consciousness into computers, or creating consciousness in computers. It could be life forms in the outside world, but which are not based on organic compounds. It can go any of these ways, but the essential thing is, it’s no longer limited by organic biochemistry.

Sonal: Evolutionary psychologists, biologists talk a lot about our hands and the formation of our hands as tools. One thing that’s happened to me, anecdotally, is as I use my mobile phone more and more, my hand muscles have literally atrophied to some extent. I know this because I started taking notes again instead of on my phone to be polite in meetings, and my handwriting is literally — I used to win awards for handwriting, and now it’s like chicken scratch.

Yuval: But it’s much more extreme, because for four billion years, all parts of an organism had to be literally in the same place for the organism to function.

Sonal: Oh, right. Like, physically — like, in a single entity.

Yuval: Physically connected. I mean, if you have an elephant, the legs of the elephant must be connected to the body of the elephant. If you detach the legs from the elephant, it dies or it can’t walk. Now, with inorganic life, there is absolutely no reason why all parts of the life form must be at the same place at the same time.

Sonal: That’s mind-blowing.

Yuval: It can be dispersed over space. This is something that for four billion years was unthinkable, and it’s just around the corner.

Sonal: We’re essentially already uploading ourselves into the cloud, online social networks, in the World Wide Web. That’s actually replacing writing as a major artifact. That’s our new collective history. One of the consequences of that is it changes the dynamics of what becomes real and not real, and it reminds me of this famous story from Ray Bradbury called “The Veldt,” where two kids enter a virtual world, and it ends in killing. And you ask a similar question in the book. You give the anecdote of Jorge Luis Borges’ short story “A Problem,” and the story of Don Quixote. It sort of is this blending of delusion and reality.

Yuval: The question is what happens when our illusions collide with reality. And with humans and human history, you see more and more that our fictions and illusions are more powerful, becoming more and more powerful.

Sonal: <inaudible> say fake news. This is a big debate that’s playing out right now in the United States.

Yuval: Well, you know, it’s fake news when we — with all this idea of the age of post-truth, I would like to know, when was the age of truth?

Sonal: That’s my question. I totally agree with you.

Yuval: Was it the 1980s? Was it the 1930s, the 19th century?

Sonal: It never existed, right?

Yuval: I mean, as far back in history as you go, what kept humans together in society is belief in shared illusions and shared fictions.

Sonal: Imagined realities or imagined orders.

Yuval: Yes, imagined realities, like when you swear the U.S. President into office, he swears on a copy of the Bible. And even when people testify in court, “I swear to tell the truth, the whole truth, and nothing but the truth,” they swear on the Bible, which is so full of fictions, and myths, and errors. It’s like you could swear on Harry Potter just the same.

Sonal: Some people do.

Yuval: Some people do, that’s true. For thousands of years, human societies have been built on shared fictions and shared illusions, and there is nothing new about that. It’s just that with technology, our fictions and illusions actually become more powerful than ever before.

Sonal: Invisible to, I think, one another.

The illusion of the self and tribalism

Kyle: One of the illusions that you talk about being broken down by the advancements in science and technology is the illusion that we’re all individuals. Free markets and capitalism are built on the idea that there’s a bunch of products that appeal to you as an individual, and companies try to put those individuals into buckets and market to them. But it turns out that scientific breakthroughs show there isn’t just this one individual “you” that accumulates through all of your experiences. Your brain is just kind of spitting out a lot of things. Maybe it’s deterministic, maybe it’s random, maybe it’s probabilistic, but you don’t necessarily have control over that. And if you don’t have control over the desires — over the random thoughts your brain is spitting out — how much of any of that is actually you? And so, what are the implications of that?

Yuval: I think what we are seeing is the potential breakup of the self, of the individual. The very word individual means, literally, something that cannot be divided.

Sonal: Indivisible.

Yuval: Indivisible. And it goes back to the idea that, yes, I have all kinds of external influences, and my neighbors, and my parents, and so forth. But deep down, there is a single indivisible self which is my authentic identity. And the way to make decisions in life is, just forget about all these external disturbances and try to listen to yourself. Try to connect to yourself. And the idea is, you just need to do whatever this inner voice tells you to do. But science now tells us that when you look inside, you don’t find any single authentic self. You find a cacophony of different conflicting voices, none of which is your true self. There is just no such thing. And even in the 20th century, the big fear for individualism was that the individual would be crushed from outside. Now, the threat comes from the opposite direction. The individual will break up from inside, and then the entire structure of individualism, and democracy, and the free market — it all collapses with the individual. It all collapses with the self.

Sonal: Or, just one alternative possibility, because this is actually what struck me most when reading “Sapiens,” and then reading “Homo Deus” afterward — is that the big theme of “Sapiens” was this great unification of humankind, and being able to collect people into empires, nation-states, outside of these, sort of, hunter-gatherer tribes. And now when I look at what’s happening because of this mass coordination online, you’re now seeing this return to tribalism in some ways, I would argue.

Kyle: Well, that’s, like, what the value of shared illusions is, whether it’s religion, or the idea that we’ve got this free market system but some safety net to keep it all functioning and keep anyone from being exploited. The point of having that shared ideology or that shared illusion is, you get to pretend that we all care about the same thing, that we’re all coordinated towards the same goals.

Sonal: Right. Now, though, because of the internet, you can actually identify what that same thing is at a very micro-targeted niche level in a way that was unprecedented. It’s no longer about where you were born or, to your point, where you’re physically located. It could now be your political beliefs. It could be your belief about, you know, if you’re a fan of Harry Potter — are you a Slytherin or a Gryffindor? Like, it could be any of those things, and people collect into new tribes. And I find this fascinating because you do see sort of this return to the past, not in a pastoral way, but you’re seeing this coming full circle.

Like, you know, the Industrial Revolution created adolescence. Are we gonna go back to a world where you don’t need adolescence again? You needed banking credit. Are we gonna go back to a world where, because of online algorithms and new information sources, you don’t need that version of a credit score — where you can go back to this trusted personal manager who essentially knows what he needs to know in order to invest in you as a risk? So I always wonder in this context if this is another thing to think about, not just at an individual level, but sort of a return to tribalism, especially lately.

Yuval: The present stage of a new nationalism or tribalism — I think it’s just a phase. It’s a backlash against globalization. And the main problem — it doesn’t have any solutions to the deep problems of the 21st century. All the major problems of humankind in the 21st century are global in nature. It’s climate change and global warming, it’s global inequality, and, above all, it’s technological disruption. The implication of the rise of AI and bioengineering and so forth — you cannot solve any of these problems on the national level. The nation is simply the wrong framework for that. And, therefore, I don’t think that nationalism really has relevant answers to the problems we now face.

Sonal: I agree with you.

Yuval: So I don’t think that nationalism is our future. I think looking further to the future, what we will see with regard to the individual is that, at a certain point, external entities, whether it’s corporations or whether it’s governments — they will have enough data, especially biometric data, and enough computing power to be able to understand me better than I understand myself. Very soon, Facebook or the Chinese government will be able to do that. And once you have an external entity, an algorithm, that knows me better than I know myself, this is the real turning point. This is the point when individualism, as we’ve known it, doesn’t make any sense — when democracy and the free market become completely obsolete. And we need fundamentally different models for running the world and for understanding what’s happening.

Biometric data and personalization

Kyle: Right. For several hundred years now, the market, as a mechanism for revealing what our opinions or our desires really are, has been the most efficient one. We could best allocate production towards things that people find valuable because they’re voting with their dollars. But if you can accurately say, based on this person’s heart rate, what they’re paying attention to, how they react to particular inputs — whether it’s an advertisement, or some new way of interacting with things based on new technologies like VR — you could know the closest thing to the underlying motivation or desire, even better than the person themselves maybe would. But on the other side of it, there’s an example you give — and this goes back to the topic of free will and individualism — lab rats that have electrodes hooked up to the reward centers of their brains. You have them navigate a maze, or climb ladders, and go down little chutes by basically stimulating their reward center, and that influences the rat’s desire. It doesn’t feel like it’s being coerced into doing that activity. It’s like…

Yuval: Yeah, the rat doesn’t know.

Kyle: “Oh, wow. I’m really into the idea of climbing this ladder right now. This is awesome.”

Sonal: The rat race.

Kyle: So, what’s interesting is, markets, as efficient as they are, like — part of how they worked was this idea of marketing to instill desires. Car ads giving you this vision of being on the open road and free, and wind blowing in your hair, and then, at some point, the desire pops up at a time when you could act on it. You buy a car. Whereas the future state that you describe is, imagine you had a headset that was like a miniaturized fMRI that can detect exactly where the parts of your brain would need to be stimulated to make you really want to play the piano right now, so that you’ll be motivated intrinsically to learn it. You could basically sell the idea of being into this. And so, being able to read your desires — but also being able to shape your desires — what do you think the interaction of those two look like?

Yuval: We don’t know. I mean, the basic tendency is to think in 20th-century terms, that they’ll try to manipulate us. And this is certainly a danger but, intellectually, it’s the less interesting option — that, okay, they’ll use it to advertise in a different way, to shape our desires without even our knowing it, which they’ve been trying to do for decades. They’ll have better tools [for] shaping our desires. The deeper and more interesting question is, what if Big Brother can really do a better job than the individual in understanding what you want and what you need? Because many people discover during their life that they don’t really know what they want, and they often make terrible decisions in the most important decisions of their lives — what to study, where to work, whom to date, whom to marry. What happens if you have an external entity that makes these decisions for you better than you can?

It starts with very simple things, like choosing which book to buy. If the Amazon algorithm really picks books that you are very happy with, then you’ll gradually shift the authority to choose the books to Amazon. And this may happen with more and more fields in your life. And the really interesting question is not if they try to manipulate you. The really interesting question: what if it works?

Sonal: Oh, that’s such an interesting question.

Yuval: What does it mean to be a human being, when all the decisions in your life are taken by somebody else who really knows who you are? It’s like being a baby forever.

Sonal: It’s already working, on some level, because you might have a million other movies out there, but you really don’t care because you only care about what’s in the Netflix catalog because you’re looking for convenience of being able to binge-watch and get it on-demand in the moment. So, it’s already reshaping that cultural landscape. I mean, it’s already happening, [to] some extent.

Yuval: I think the big breakthrough will come with biometric data. Most of these algorithms, whether it’s Amazon, or Netflix, or whatever, work mainly on the basis of data external to my body. They follow me around in space, see where I go, which places I visit. They see my likes and dislikes on Facebook, what I buy, and so forth. But the real breakthrough will come when they start receiving more and more data from biometric sensors on or inside my body.

Sonal: Right, like quantified self, wearables.

Yuval: Yeah. Say I read a book, and Amazon knows exactly what the impact of every sentence I’m reading is on my heartbeat, on my blood pressure, on my brain activity. This is really where you can see how an external system can make better decisions for you than you can make for yourself.

Kyle: Yeah, today, these systems are basically reflecting ourselves back at us. If you look at products — because of cookies, when you go elsewhere on the web it’s like, “Oh, I see that thing again.” Like, it’s just being reflected back at me. Same thing with your Netflix queue. I gave certain star ratings to certain things. It’s reflecting that same pattern back at me with recommendations.

Something that’s interesting to me is the idea of mapping concepts in a feature space using deep learning, and then basically projecting them in different forms. So, the idea of tracking what your eyes are looking at, what’s keeping your attention, what makes your heart rate go up, what makes your pupils dilate while you’re reading a book — you can imagine what you’re reading being formatted and communicated in different ways, because they know this different way will reach you better and you’ll be more receptive to it. And so it might not necessarily be what feels coercive to us — a system plugging an electrode into your brain and saying, “Now you’re gonna care about reading history.” It’s gonna say, “Here’s the optimal way to present history to this specific individual.”

Yuval: This is especially being explored in new educational methods — an AI teacher that studies you while it is teaching, and adapts to your particular strengths and to your particular weaknesses. It also breaks down all the traditional limitations and barriers of modern education. Modern education takes place in school, and you have this division: there is school and there is real life outside school. Now, consider [if] you have a single AI mentor that follows you around everywhere…

Sonal: Your whole life.

Yuval: …24 hours a day, connected to biometric sensors on your body, and there is no longer any division between school and life. There is no history teacher and mathematics teacher. You have the same teacher for both. And you don’t have to be part of a group, like, the 30 other kids in the class.

Kyle: Basically, an AI assistant where it’s constantly in Socratic debate with you.

Yuval: Yes.

Kyle: Kids are inclined already to say, like, “Okay, but why? Okay, but how? Okay, but why?” And they keep digging kind of deeper until you as a parent or teacher are just like, “Because it is, okay?” Whereas an AI system, assuming it’s mapped out, like, the entire canon of human philosophy and knowledge, could basically just keep going. Even if it doesn’t go all the way to that extent, you could have a huge increase in the productivity of, you know, education, just by providing those kinds of tools to kids.

Sonal: Mass personalization. I mean, I come from the world of developmental psychology and education, and the Holy Grail has always been this idea of mass personalization, to be able to customize. But I want to make two points. One, I agree with this idea. Vygotsky had this idea of a constructivist way of learning: you’re constructing, you’re learning your world, and that’s how you learn these concepts in a very fundamental way. And it’s really ironic, because educators have been trying to fake that in the school setting for years — with Montessori methods and all these others — Reggio Emilia — because of this false artificial divide between real life and school. The flip side, however, and I don’t think we can ignore this, is that there is a social element to why school matters — a socialization component that has arguably nothing to do with education — where there is shared learning and collaboration and the interaction of students. And so, I wonder what this means for that.

Yuval: You can have it outside school as well.

Sonal: You’re saying there’s no distinction between school anymore. It doesn’t matter.

Yuval: It doesn’t have to be limited — that all your friends are the same age as you. There is no reason why the group with which you socialize in school, everybody has to be the same age.

Sonal: Well, that actually is another way that technology brings you back to the past, because if you think of “Little House on the Prairie,” the schoolhouse was essentially all the grades in a single school because of physical location. But you’re arguing that those boundaries, the idea of a schoolhouse, essentially melts away.

Kyle: That feels like an inevitable transition anyway, whether it’s corporations or education. It’s this idea of, “take in this large set of inputs, crank out some modified set of outputs that fulfills some need.”

A new technology-based ethics

Sonal: Well, the question that I have for you guys, and especially given “Sapiens” and the theme of “Homo Deus,” is what do humans have to believe in order to make this reality continue happening? Do they not have any agency in any of this? Because it sounds like we’re almost talking about, you know, these uploaded brains in a vat. Is there any sense of coordination, consciously? Is there a new religion? I used to watch “Star Wars” as a kid. I remember thinking to myself, because I grew up Hindu — and you learn a lot about all these Hindu gods and goddesses. I remember thinking, this reminds me a lot of hearing about the Mahabharata and all these other things happening. Anyway, I would argue that science fiction is like religion for a lot of people, but what do people have to believe in this new world? What is their religion? Is there one? I mean, you make the argument about — data is a new religion, but that sounds, to me, more of something that’s there versus something that people are choosing, like, creating new myths and gods around actively.

Yuval: I think we are seeing, and we will see more and more, the rise of techno-religions — religions based on technology that make all the old promises of traditional religions. They promise justice, and happiness, and even immortality — but here on Earth, with the help of technology, not in paradise after death. There has already been one very important techno-religion in history, which is socialism.

Sonal: Oh, I never thought of that that way.

Yuval: Which came in the 19th century with the Industrial Revolution. And what Marx and Lenin basically said was, “We will create paradise on Earth with the help of technology” — steam engines and electricity, and so forth. When Lenin was once asked to define communism in a single sentence, the answer he gave was: communism is power to the workers’ councils, plus electrification of the whole country. You cannot establish a communist regime without industrialization. It’s based on the technology of the Industrial Revolution — electricity and steam engines, and so forth. And the idea is, we’ll use this technology to create paradise on Earth. It didn’t really work very well. So, now, I think we will see the second wave of techno-religions. Now we have genetics, and now we have big data, and, above all, we have algorithms. They’re our salvation. Paradise will come from the algorithms.

Kyle: You talk about, in the book, the idea that the more you commit or sacrifice on behalf of your ideology or religion, the more you buy into it — because you have this sunk cost. The idea of sacrificing a goat or a cow to a god made you buy into it more, because I can’t have spent the last eight seasons sacrificing goats and have it been for nothing. So, looking forward, then: we’re hitting some kind of productivity cap as normal humans — autonomous machines and systems are going to beat us, so we have to sacrifice our own humanity to increase our own productivity and augment ourselves. You can also see the emergence of some kind of powerful ideology. Like, the religion of the 21st century onward is, we are the gods?

Yuval: This is actually an old idea. Humanism, which goes back to the 18th century, even the 17th century, says humans are the gods. Humans are the source of all meaning and authority — everything you previously expected from the gods, like giving legitimacy to political systems or making decisions in ethics. Humanism comes and says the highest source of authority in politics is the voter. The highest source of authority in economics is the customer. The highest source of authority in ethics is your own feelings. Humans are the gods.

Now we are entering a post-humanist era. Authority is shifting away from humans. If, in the last 300 years, we saw authority descending from the clouds to Earth, to humans, now authority is shifting back to the clouds — but not to God; to the Google cloud, to the Microsoft cloud. The basic idea of this new religion or new ideology is, again, that given enough data and enough computing power, an algorithm can understand me better than I understand myself, and make decisions for me. In the end, religion is about authority. The basic question of religion is: where does authority come from? And the answer of the 21st century: authority doesn’t come from humans, authority comes from data and from data processing. There is also an underlying new ontology. What is the world? What is reality? In the end, reality is just a flow of data. Physics, biology, economics — it’s all just a flow of data.

Sonal: It’s all a type of algorithm.

Kyle: We are just computers interpreting some fraction of reality.

The future of work and UBI

Sonal: They’re all algorithms. That’s the connective tissue of everything, from biology, to computers, to everything. I have a quick question for you here. What does this mean for the future of the firm — work? I would love to hear your thoughts on the universal basic income debate that’s playing out around the world right now, because that’s essentially people opting out of the rat race, in some arguments.

Yuval: I think we need new economic models in place for the moment when AI, and robots, and so forth may push more and more humans out of the job market. We might see the creation of a new class of people who are not just unemployed but unemployable. At present, the best idea people have managed to come up with is universal basic income. The problem there is that we don’t really know what universal means, and we don’t really know what basic means.

Sonal: Right, and where the income comes from, but that’s another sidebar.

Yuval: No, let’s say you tax and use the proceeds to give people universal basic income. Then the question is, what is universal? Would we see the day when the U.S. government taxes the profits of Google in the U.S. and uses them to pay people in Bangladesh or Mexico who lost their jobs? This is the first issue with “universal,” because the economy is now global, and a disruption of the economy, say, by the rise of AI will require a global solution. And most people who think about universal basic income think in national terms — universal, for them, means U.S. citizens. The other problem is, what is basic? Basic human needs keep changing all the time. We are beyond the point when basic needs meant food and shelter.

Kyle: And the problem is that humans are biased towards looking at examples based on who they know. It’s hard to see that level of UBI pulling it off. It feels like people’s expectations would be much higher depending on where they are and what life they’ve already lived.

Yuval: The basic problem is that people’s expectations keep changing. Usually, they grow. As conditions improve, expectations increase. And, therefore, what you see is that even though the conditions over the last centuries of most humans have improved quite dramatically, people are not becoming more satisfied, because their expectations also increase. And this is going to continue in the 21st century.

Sonal: Yeah, I have a question here, because in “Sapiens,” you said something that I thought was very profound when I read it, which is that the agricultural revolution was actually one of the greatest frauds ever perpetrated on ourselves. And so if you think about this shift, from Agricultural Revolution to Industrial Revolution to now, essentially, Information Revolution — what’s the fraud that we’re perpetrating on ourselves now? Where does meaning come from? Because I think the thing that people often forget to address when they talk about the universal basic income and, you know, future of work debate is this idea of meaning. And does that even matter at the individual level?

Kyle: Restless people tend to pick up the pitchforks.

Sonal: Right, exactly. Exactly, because it also goes to your points — and this is a universal theme that we have to address on some level — of further entrenching inequalities. That’s an important thing to think about.

Yuval: There are two different problems. I mean, first, you have inequality. And once more and more people no longer work, they depend on, say, universal basic income, then they have no way of closing the gaps. They depend on charity, on whatever the government is able or willing to give them, and you just don’t see any way in which they can close the gap.

Sonal: That’s if they’re dependent on it, because it can also be something that’s supplementary to something else you do.

Yuval: I’m thinking in terms of what happens if, again, AI pushes more and more humans out of the job market, so they rely on universal basic income. And it provides whatever it provides, but if they want more, they just have no way of getting more. So this, kind of, entrenches inequality. And if you add to that biotechnology and bioengineering, you get for the first time in history the potential of translating economic inequality into biological inequality.

Sonal: Yes.

Yuval: If you look back at history, let’s say, the Hindu caste system — people imagined that the Brahmins are superior, they are smarter, they are more creative, they are more ethical. But, at least as far as scientists today are concerned, this wasn’t true. It was all imagination.

Sonal: Right. It was not true at all.

Yuval: It was not true. It wasn’t true that the son of the Brahmin or the son of the king was biologically more capable, smarter, more creative, whatever, than the son or daughter of a simple peasant. However, in the 21st century, it might be possible for the first time to translate economic inequality into real biological inequality. And once this starts, it becomes almost impossible to close the gap. So this is one problem of a rise in inequality. Another problem is the question of meaning — that even if you can provide people with food, and shelter, and medicine, and so forth, how will they find meaning in life? For many people, their work, their jobs provide them with meaning. “I’m doing something important in life.”

Sonal: A mission. “I believe in this.”

Yuval: Yeah. So, one of the answers, some experts say, is that people will just play games most of the day. They’ll spend more and more time in virtual realities that will provide them with more meaning and more excitement and emotional engagement than anything in the real reality outside.

Kyle: Everyone just lives in their perfectly-optimized-for-them Holodeck.

Yuval: Exactly.

Sonal: Because you’re freed from the constraints of the physical realities.

Yuval: Yeah, and you get your meaning from the game, from the virtual reality game. And in a way, you can say, oh, this is nothing new. It’s been happening for thousands of years. It’s simply been called religion. I mean, religion is a virtual reality game that provides people with meaning by imposing imaginary rules on an objective reality. You play this game [where] you have to collect points. If I eat non-kosher food, I lose points. And if, by the time I die, I’ve gathered enough points, then I go up to the next level.

Sonal: I mean, in Hinduism, karma is essentially this great game of collecting and subtracting points across multiple lifetimes.

Yuval: Exactly.

Kyle: So, really quickly — this goes back to the automation, kind of, question and potential future. If you look back at, kind of, the Industrial Revolution, humans acted as mechanical actors — imbuing something with value by working on it with their hands or bodies, as in agriculture. That became less important as animals, and then machines, were able to do that same task much more efficiently. Now, humans are valuable because they are knowledgeable operators of those machines. As part of the Industrial Revolution, the shift to services led to this idea that we’re not just investing in capital, we’re investing in human capital. We’re making people smarter so that they’re better at their jobs. Now, with AI systems, suddenly, again, you can just kind of buy knowledge capital as this thing that can be dropped in. Okay. An argument I hear…

Sonal: AI as a service, even.

Kyle: Right, how humans remain valuable is, well, we’re still social animals. We still are better than any machine at interpreting how other people are thinking about this and, you know, assuaging fears, or whatever it is — where the power of empathy is what humans will bring to the table. An interesting point you make is about how humans actually accomplish such a task — say, a doctor giving bad news about a cancer diagnosis. They are looking at the physical way that a person is moving their facial muscles, how their tone changes, how their voice cracks as they feel a certain emotion.

And if you look, that’s actually just pattern recognition, which is exactly what deep learning is good at. And so, is that even an advantage humans are gonna have, or are computers gonna be much better at looking not just at those exact same features that humans can, but also, like, zooming in on the eyes and looking at dilated pupils, and guessing at heart rate by looking at someone’s wrist or chest? What are humans going to be good at? What should people be investing in for, you know, the future to come?

Sonal: Yeah, what happens when human capital becomes commodified?

Yuval: We don’t really have an answer. Yes, many people, when they reach that point, they say, “Okay. We’ll invest in social skills, in empathy, in recognizing emotions.” The emotions are like the last…

Sonal: Emotional intelligence.

Yuval: The last frontier. But the thing is that emotions are not some spiritual phenomenon that God gave homo sapiens to write poetry.

Sonal: Another electrochemical, just like everything else.

Yuval: Emotions are a biochemical phenomenon. They are biological patterns, just like cancer. When your doctor wants to know how you feel, he or she basically recognizes patterns in two kinds of data, as you mentioned. It’s what you say, and, actually, the tone of your voice — even more important than the content of what you’re saying. And, secondly, your body language and your facial expression. When my doctor looks at me at the clinic, she doesn’t know what the level of my blood pressure is at the moment. She doesn’t know which parts of my brain are activated right now.

But an AI, potentially, will be able to know that in real-time using biometric sensors. It will have much better sources of data coming from within your body. So its ability to diagnose emotions will be better than the ability of most, if not all, humans. So what will humans do? We don’t know. Nobody really has an idea, a good idea, of what the job market will look like in 30 or 40 years. We’ll have some new jobs. Maybe not enough to compensate for all the losses, but there will be new jobs. The problem is, we don’t know what they are. Because the pace of change is accelerating, it’s very likely that you will have to reinvent yourself again and again and again during your lifetime if you want to stay in the game.

Sonal: Right, when you don’t have premature death anymore, and you live your full life, or you even have extended longevity through technology, you can reinvent yourself, like, 10 times until you’re 100.

Yuval: The basic idea for thousands of years was that human life is divided into two periods. In the first period of life, you mostly learn. You learn skills, you gain knowledge. And then, in the second part of your life, you mostly work, and you make use of what you learned earlier. This is now — it’s going to break down. By the time you’re 50, what you learned as a teenager is mostly irrelevant.

Sonal: It’s already true right now.

Questions for the future

Kyle: So, now, you know, again, thinking about autonomy — you know, we’re already seeing the shift towards smaller militaries with really advanced equipment and fighter jets. And we’re gonna see robots on the battlefield. As humans become less valuable economic actors, as they become less necessary to fight for power at, kind of, that scale, how does that factor into, you know, the extension or lack thereof of, you know, political agency?

Yuval: Most people today have absolutely no military value. In the 20th century, the most advanced armies relied on recruiting millions of common soldiers to fight in the trenches. Now they rely increasingly on small numbers of highly professional soldiers, super-warriors, all the special forces and so forth.

Sonal: Surgically targeted.

Yuval: And they rely increasingly on sophisticated and autonomous technology, like drones, and robots, and cyber warfare. So you just don’t need people militarily as before, which means not only that they are in danger of losing their political power, but also that the government will have a far smaller incentive to invest in their health, and education, and welfare. Maybe the biggest project and achievement of most states in the 20th century was to build these massive systems of education, and health, and welfare.

Sonal: Safety nets.

Yuval: And you see this not only in democracies but also in totalitarian regimes. But if you don’t need them as soldiers or workers, then the incentive to build hospitals, and schools, and so forth diminishes. In a country like, I don’t know, Sweden, I think the traditions of the welfare state and the social democracy will be strong enough that the Swedish state will continue to invest in the education and health of most of the people there, even if there is no military or economic necessity. But if you think about large developing countries, it’s much, much more complicated. If the government doesn’t need tens of millions of Nigerians to serve as soldiers and workers, maybe it will not have enough incentive to invest in their health and education. And this is very, very bad news for most of the human race, which lives in places like Nigeria and not in places like Sweden.

Kyle: And so what’s the best course of action to follow if that’s the case? Is it, make sure that the most inclusive institutions possible are in place before that transition happens or…

Yuval: We don’t have enough time. I think that we are not talking in terms of centuries. We are talking in terms of decades until the transition takes place, especially in the civilian economy. In the military, it already happened. We are there. In the civilian economy, maybe we have 20 years, 30 years, 40 years. Nobody really knows. It’s a very short time. If we don’t have a workable model by the time the transition is in high gear, then it’s going to be an extremely difficult situation for the majority of people in the world, and the social and political implications are going to destabilize the whole world, including the first world.

Sonal: You talked in your book, your new book, a lot about how there are three types of resources — raw materials and energy, but people have ignored a third type, which is knowledge. And my question, from just an economic perspective, is how does this tie into how we think about growth? Especially given what you just talked about — this need to enlarge the pie in order to avoid war and violence.

Yuval: It’s often thought that there is a limit to the potential growth of the economy, because there is a limit to the amount of energy and raw material we have access to. But this is, I think, the wrong approach. We have a third kind of asset, which is knowledge. And the more knowledge you have, the more energy and raw materials you also have, because you discover new sources of energy and new sources of raw materials. I don’t think that we are going to bump into a limit in terms of, “Oh, there is not enough oil in the world. There is not enough coal in the world.” This is not the problem. The problem is probably going to come from the direction of climate change and ecological degradation, which is something very different. People tend to confuse the two problems — not enough raw materials and the problem of climate change — but they are very different.

Sonal: I actually wanted to probe about this, because in “Sapiens,” one of the things that you talked about was how we’ve had waves of climate change throughout the entire history of our planet. And I’m no climate change denier by any means, but I can’t help but ask the question, you know — whether we are the cause or it’s a cyclical effect — what it means for what the next cycle of change will be. Because the one thing that came through loud and clear was how every wave of climate change has brought about a corresponding change in human evolution.

Yuval: Well, there certainly have been many periods of climate change before, but it does seem that this time it’s different, that this time it is caused, to at least a certain degree, by human action and not by some natural forces, like plate tectonics or ice ages or things like that. And the potential impact for human civilization and for most other organisms on the planet is catastrophic. So, you know, it could be both natural causes and human causes at the same time. It doesn’t make it any better. It just makes it worse.

Sonal: The effects are the effects, right. In your book, you have this beautiful quote, which I thought was a really striking articulation: “Modernity is a deal. All of us sign up to this deal on the day we are born, and it regulates our lives until the day we die. Very few of us can ever rescind or transcend this deal. It shapes our food, our jobs, and our dreams, and it decides where we dwell, whom we love, and how we pass away.” And I wanna know if you have any parting thoughts for people whose lives are being shifted by some of this technological change?

Yuval: Since the main theme has been technology and the future of technology and its impact on society and politics, I think that my closing thought is that technology is never deterministic. You can build very different kinds of political and social systems with the same kind of technology. You could use, you know, the technologies of the Industrial Revolution — the trains, and electricity, and radio — you could use them to build a communist dictatorship, or a fascist regime, or a liberal democracy. The trains did not tell you what to do with them. In the same way, in the 21st century, we’ll have artificial intelligence and bioengineering, and so forth, but they don’t determine a single outcome. We can use them to build very different kinds of societies. We can’t just stop technological progress. It won’t happen.

Sonal: It’s inevitable.

Yuval: But we still have a lot of influence over the direction it is taking. So if there are some future scenarios that you don’t like, you can still do something about it.

Sonal: Yeah. Well, thank you so much for joining the “CFI dcast.” If people have not already read “Sapiens,” they must read that, and especially the new book, “Homo Deus: A Brief History of Tomorrow.”

Kyle: Thanks for coming in. We really appreciate your time.

Yuval: Thank you.