Best Case Scenarios

In this conversation, David Eagleman discusses the future of brain science over the next 25 years, focusing on the role of AI in enhancing human capabilities, revolutionizing dementia diagnosis, and the ongoing quest to understand consciousness. He emphasizes the need for advanced technologies to measure brain activity and explores the potential of pharmaceuticals and nanorobotics for cognitive enhancement. Ethical implications surrounding the reversal of death and memory uploading are also examined, alongside the impact of psychedelics and AI on scientific discovery and creativity. The conversation concludes with reflections on social media's role in polarization and the future of emotional manipulation through technology.

What is Best Case Scenarios?

We present the best case scenario over the next 25 years across different domains of life.

Kevin Kelly (00:01.644)
Welcome to the best case scenarios podcast. I'm Kevin Kelly. I'm here joined with Dan Pink, and we are going to solicit the most optimistic best case scenarios that we can find for the next 25 years. Our guest this session is David Eagleman, who's an adjunct professor of neurobiology and neuroscience at Stanford, author of many books. And we're going to explore the best case scenario in brain science.

So with that, we'd like to welcome David. Thanks for being here.

David Eagleman (00:37.506)
Great to be here with you guys.

Kevin Kelly (00:39.276)
We'd like to start with this: if you could give us the most optimistic scenario you can imagine, if everything in all your research and all your friends' research worked out, what would we have in 25 years that might be a surprise for people?

David Eagleman (01:00.024)
Okay, well, so I knew you were going to ask me this question. So I actually have a few of them, if that's okay. One of them is I'm very interested in how AI is going to build better humans, as in how it's going to make us stronger and better at what we're doing. You may know there was a case recently where a Swiss research team released some AI bots on a Reddit channel that is for the purpose of debating.

So people take a point and they debate with other people. And the idea is to see if you can get someone else to change their mind. It turns out these AI bots changed people's minds at about a rate of six times what humans could do. And so this became a real scary story in the news. And everyone said, wow, there are these manipulative AI bots that can change people's minds. But it turns out they didn't change people's minds by...

David Eagleman (01:56.193)
deception or lying or things like that. They did it simply by giving better arguments, calmly, empathetically. And it strikes me that what we can do is learn from AI in exactly the way that all chess players now train by playing against AI, and it makes them better. Or with the game of Go, which, you know, was bested eight years ago by computers.

All Go players now train with AI, and not only do they train with it, they learn moves that no one had ever thought of in the 2,500-year history of the game of Go: perfectly good moves. You could make them, but no one had made those. So I think that AI is going to really strengthen us and just open up our minds to a broader set of moves in whatever we're doing. So that's my very optimistic view about that.

Kevin Kelly (02:56.79)
Okay, so that's a kind of an education. It's a kind of ongoing learning, and it's a cultural thing too, right? Because you need to keep on going, keep doing it. What about something in terms of the actual biological aspects of brain science? Do you have anything you think would be happening there?

David Eagleman (03:21.72)
I do. I have a couple of things that strike me as, in 25 years, things are gonna be very different than they are now. One of them has to do with dementia. And I think we're gonna have much, much less of that in 25 years than we do now. And the reason is that people,

Daniel Pink (03:38.205)
Hmm.

David Eagleman (03:49.689)
will, as they age, sometimes slip into mild cognitive impairment, where they're having a harder time remembering things and doing things cognitively, in terms of perception and decision making and memory. But that is underdiagnosed about 99% of the time. And it's because, you know, someone only goes to the doctor maybe once a year, and you go to the doctor, the doctor says, how are you doing? You say, I'm doing fine. And people have a million ways of denying things

going wrong. They think, look, I just got a bad night's sleep or whatever. I think what we're going to quickly move to is having in-home monitoring, both by something like your Alexa or Siri or whatever, or companies that are making in-home robotics or, you know, AI in the walls of the house or whatever, such that you're constantly in conversation with your AI, and it

every day can tell how you're doing. It can do vocal biomarker analysis, and it can do conversational analysis, and it can see what's going on with your memory and with your cognition. And so I think that's going to be diagnosed much more. At the same time, of course, we're getting better and better at understanding the kinds of pharmaceutical interventions that can be done when you have mild cognitive impairment. As it stands right now, once somebody has dementia, full dementia,

It's too late. There's really nothing we can do at that point. So the key is catching it early and doing things at that time. And by the way, one of the things about dementia is it has much to do

with whether you're keeping your brain active. And it turns out that, you know, a lot of people, when they retire, end up just watching television and not doing much. The really important part is to keep your brain challenged. And I'm very optimistic that with AI we'll have all kinds of ways of doing that.

Kevin Kelly (06:03.18)
Okay.

Daniel Pink (06:03.388)
So David, go back to this AI system that you were mentioning before, which, as an analogy, is sort of like a bathroom scale for your cognitive capacities. Is that something you imagine as a check-in, like, hey, let me check things out? Or is the vocal AI system so woven into what we're doing that we're constantly monitored, and it can be a kind of early warning system if it detects something that's askew?

David Eagleman (06:38.424)
So this, it turns out, is my latest company. It's called Jojo Health, and I just launched it with a partner of mine. It's a robot that sits there in your house, and it detects when you've walked in the room, and it detects if you're in the middle of another conversation. If you're not, it starts talking to you, and it does all kinds of great things in the domain of companionship. It really very quickly becomes

somebody that you want to talk with and confide in and so on. There's a lot of research on companion robots, but what this one, what Jojo, is doing is all kinds of testing under the hood. So Jojo not only is doing this conversational analysis all the time, but also occasionally says, hey, let's play a game. I'm going to give you five words, and then, you know, later I'll give you some other words, and you tell me if each was on the original list or not. Or

several other types of tests that can be done like that, but it's all worked into the conversation. It's just like having a friend who happens to be a neuroscientist living in your house. And so every day there's new data points coming in on this. That's the idea.

Daniel Pink (07:49.541)
And presumably we can turn off Jojo and say, Jojo, I'm not in the mood today to deal with this. I don't need a memory exercise here. And this is for people, this is not only for people who are living alone, this is for people who are living in families or with a partner or something like that as well, right?

David Eagleman (07:55.733)
Yes. Yes. No, you can never turn Jojo off. Yeah.


David Eagleman (08:11.193)
Um, yes, but I think the first market is going to be people living alone, because those are the people who need it the most. If you're living with a partner, these things get detected much more readily. Yeah.

Daniel Pink (08:15.995)
Yeah.

Kevin Kelly (08:21.048)
Right,

Daniel Pink (08:22.64)
Yeah. So what happens, then, when we have this sort of early detection system for cognitive decline? What happens once we cross a threshold, where Jojo or whatever says, hey, wait a second, Daniel, you've got a big problem here? What happens?


David Eagleman (08:39.553)
Yes, exactly. When you set it up, you set up your emergency contacts. Those could be your adult children; it could be your physician, whatever. And they get a text saying, hey, you know what? These cognition scores have really gone down over the last month or two. So the key is, once mild cognitive impairment is diagnosed, then there's all kinds of things you can do in terms of pharmaceuticals, and in terms also of cognitive exercise, as in, okay,

Kevin Kelly (08:53.186)
Right, and what can we do about it? What will we be able to do about it? Are there remedies?

Daniel Pink (08:57.531)
Yeah.

David Eagleman (09:08.481)
now there's really a problem here, we're going to make sure that we're doing cognitive games every day and so on. The whole key is detecting it, and then you can do stuff about it: change of diet, change of other things.

Kevin Kelly (09:20.174)
So, right, so the claim is that the human brain is the most complex thing in the universe. And...

David Eagleman (09:29.689)
That we have discovered on our planet is all we can really say, but yes.

Kevin Kelly (09:33.568)
Right. And so in 25 years, how far along in your best case scenario are we in terms of understanding it and how it works and what we can do about it?

Daniel Pink (09:34.555)
Yeah.

David Eagleman (09:47.001)
Yeah, that's such a good question because I went to my first Society for Neuroscience meeting when I was a first year graduate student and I walked in and there were 20,000 people there and I thought, gosh, this whole thing's gonna be solved by the time I got out of graduate school. And it shocks me sometimes to realize how little we've figured out. These are very big, deep problems. Now, this is not to say there hasn't been an enormous amount of progress, but what we've mostly gotten is

Daniel Pink (10:02.482)
Hahaha.

David Eagleman (10:16.515)
data, data dumps of things. Now, the good news is, with AI here right now, that's going to help us chew through all this data and hopefully do stuff. Like in every field, the steps will be sudden. So we'll say, gosh, we really don't understand this, and then suddenly we will understand it. And I don't know if that's going to happen in the next 25 years. I certainly hope so; it would be very disappointing if it doesn't. Like I said, I'm very optimistic that AI is going to help us get there. But some of the very deepest questions, like

consciousness, as in why does it feel like something to be alive? That's a deeper problem in the sense that we have no idea how to say, okay, look, you put these pieces and parts together and here's the algorithm and you do a triple integral and carry the two and that equals the smell of cinnamon or the taste of lemon or the beauty of a sunset. We have no idea how to bridge the objective and the subjective and I don't even see how AI is gonna help us with that.

So yeah, in 25 years we'll certainly have a lot more progress. Whether the deep questions will be solved, I don't know.

Kevin Kelly (11:23.918)
And if I could bestow upon you and all the scientists working on this a magic wand that would solve one impediment, what would you use it for? What's the gating pain point?

David Eagleman (11:36.505)
I'll tell ya.

David Eagleman (11:41.901)
That's an easy one. It's that we have 86 billion neurons, and each one of these is sending off signals. It's got these little electrical spikes going off tens or hundreds of times each second. And we have no way at the moment of measuring all these things at once, of actually getting all 86 billion guys, how they're chattering and what they're doing, and correlating that with behavior and then soon with, you know, thought and so on.

We just can't measure anything that big. So the technologies that we have are all missing the sweet spot. We have things like functional magnetic resonance imaging, where you stick someone in a brain scanner and you see these big blobs that represent where the blood flow is going. But each one of those voxels, those three-dimensional pixels, is representing millions and millions of neurons, and we can't tell anything fast. Other things that give us fast data, like

recording from a neuron with a little tiny electrode, that just tells us about that one neuron out of 86 billion of them. And so obviously people are doing things where they're recording from hundreds of neurons, and increasingly maybe thousands, but in the big picture, that's nothing. So that's the magic wand that we need: a technology that actually allows us to see the activity in the brain at the right time scale and at the right spatial scale, as in all of the neurons at once.

Kevin Kelly (13:07.054)
And is there any current research in developing that technology that would give you hope that maybe it might exist in 25 years?

David Eagleman (13:16.825)
No. I mean, the answer is there's tons of research. I and all my colleagues have been thinking about this and trying things out, and people are trying all kinds of stuff: radar, near-infrared spectroscopy, and so on. There's all kinds of things that people are trying, but I don't see anything that's right on the horizon. So the hope is that something comes out of left field that none of us were expecting.

Daniel Pink (13:38.32)
What about some of the interventions for cognitive decline?

to arrest cognitive decline or even to sort of enhance our thinking capacity. I'm wondering whether we'll get to a point in 25 years where doing crossword puzzles and memory games is going to seem primitive and anemic when we could have other, more robust interventions. So, for instance, do you see on the horizon a pharmaceutical intervention, maybe analogous to GLP-1s, for cognitive enhancement?

David Eagleman (14:13.859)
Yeah, that is what we are all devoutly hoping for. And it certainly seems possible. There are two issues. One is that, well, the good news is that labs all over the world are studying this and looking for this. With more education planet-wide, and therefore more people going into scientific research, we've really got the brainpower applied to this, but

Obviously science goes off on incorrect tangents sometimes, and that's what happened with Alzheimer's. People for a long time thought, if we could just address the plaques and the tangles, we've got this nailed. And it turns out that wasn't the right thing; plaques and tangles are almost certainly what's coming out the exhaust pipe, as opposed to the problem itself. And then the other thing is, the way we design pharmaceuticals is, we're going to bind to this one

Daniel Pink (14:46.991)
Hmm.

Daniel Pink (14:50.757)
Yeah.

David Eagleman (15:09.251)
protein, like this one receptor in the membrane or something, and hopefully that's going to take care of everything. But almost everything is more complex than that. In some sense, it's quite extraordinary that we have gotten as far as we have by just targeting particular receptors. I think maybe in 25 years, what we'll be looking for is, hey, here's a really sophisticated way where we've applied AI to the problem, and we see that we need to do these seven things

in this temporal order and then you get something coming out.

Kevin Kelly (15:41.164)
Yeah. And so there's pharmaceuticals. What about other kinds of technological interventions with the brain? What are your best case scenarios for that direction?

David Eagleman (16:01.143)
Yeah. Really, there aren't that many choices, with the exception of invasive technologies where you put electrodes in there. You can do that in two ways. You can either drill a hole in the skull and insert electrodes that way, or you can put electrodes up through the blood vessels that go into the brain, and you're talking to the neurons and listening to the neurons on the other side of the blood vessels. So those are the two ways to get stuff in there. At the moment,

Daniel Pink (16:09.669)
Hmm.

David Eagleman (16:30.853)
Neither of those covers very much territory. It's very, very limited. Here's my best case scenario. I think the future, and I don't know if this will happen in 25 years, I think the future is in nanorobotics, which is you swallow a capsule with, let's say, 100 billion little tiny nanorobots in it, and these are essentially proteins. They're the size of proteins, but they're designed

Daniel Pink (16:39.547)
Okay.


David Eagleman (16:58.009)
to swim around. They swim through the bloodstream. They have the right FedEx label so that they can cross the blood-brain barrier. Then they have the right labeling so that they can cross through the blood vessels into the neurons and actually impregnate the neurons. And then they take up residence, spread out all over the brain in your neurons. And each one of these little guys can send off a little signal, probably send and receive little signals.

And you probably would have to set this up with a mesh network, and maybe wear some sort of baseball cap on the outside or whatever to pick up on these signals. But the point is, you could then solve this thing that I was talking about a minute ago, which is knowing, for all the neurons, exactly when they're firing and who's doing what. That way you could really do it. So I think that's the intervention that we're not that close to, but that has to be where it goes.

Daniel Pink (17:53.411)
And the consequences of that would be to deal with neurodegenerative disorders or for cognitive decline or...

David Eagleman (17:59.609)
A technology like that would allow us to crack the major questions in neuroscience and by the way, take care of all these therapeutic issues as well. Yeah, exactly. Exactly.

Kevin Kelly (18:10.262)
It would be read and write in that sense. You could not just get a signal out, but you could send some signals in.

Daniel Pink (18:13.388)
Hmm

David Eagleman (18:19.476)
Exactly.

Daniel Pink (18:19.576)
Right. Presumably these capsules full of nanorobots would be bespoke, right? My capsule wouldn't be the same as your capsule. Or would it?

David Eagleman (18:30.457)
I think it could be the same. All it is, is you just have a bunch of robots, and each one needs its own identity and a way of sending off signals. But the truth is, and I've been thinking about this for many years, you can have a protein; proteins have these little things that hang off, called moieties, that spin around. And I think you could actually put a little charge on one and get each to send off a radio frequency by spinning that charge. And then you make them slightly different lengths, so that each one has its own code, its own identity, that way.

Kevin Kelly (19:02.936)
Yeah, I mean, infecting every single one of our neurons with something in a mesh network, what could go wrong with that?

Daniel Pink (19:11.694)
Ha ha ha ha ha.

David Eagleman (19:14.131)
Yeah, there's a lot that can go right with it. And because we're talking about optimistic futures, that's what I'm putting my money on.

Kevin Kelly (19:18.912)
Exactly. So it's very ambitious. I love that idea. And technologically, what else do you see in the future for brains? You said you had a whole bunch of these. What's another one?

David Eagleman (19:37.273)
Well, the other main one that I have is, I'm just very interested, I mean, this seems a little crazy, but I'm very interested in the possible reversal of death. So I have some colleagues at Yale who have...

Kevin Kelly (19:50.638)
Whoa. This is not the same thing as longevity. You're actually saying they died. Okay. Wow.

David Eagleman (19:56.034)
No, I'm talking about reversal. I'm talking about reversal of death. So my colleagues at Yale have done this experiment. I mean, they published this in Science four or five years ago now, where they took a pig brain that was dead, and they were able, by perfusing it with the right sorts of things, to get the activity back and get it running again. And they can do this with hearts, with brains. And this is already five-year-old technology, essentially.

What is death? It's this very complicated system of this-leads-to-this-leads-to-this, and somehow the whole system grinds to a halt. But the question is, if you kind of, you know, blow on the embers, can you get the whole thing running again? I would have thought this was the most crazy thing that I'd ever heard of until these papers came out. And then on my podcast, Inner Cosmos, I interviewed one of the guys, and we talked about it for an hour.

It's quite extraordinary. So that's one.

Daniel Pink (20:56.42)
So, hold on, just to stop here for a second. Reversal of death seems like a pretty big deal. I mean, with all due respect, David, it seems like a bigger deal than Jojo. If we're able to, if we are able to...


David Eagleman (21:03.275)
It's in...

Daniel Pink (21:11.468)
if we are able to reverse death. But this raises some intriguing questions that go directly to a lot of the stuff that you have written, which is, do we have any notion when those pigs have been revived, let's say, or you've blown on those embers and the flame has built anew, does that pig still have the same memories?

David Eagleman (21:31.358)
It certainly should. Now, there are all kinds of ethical things here, and so they haven't been able to really test for something like that yet. But memory, you know, is burned into the circuitry. And we know, by the way, that you can stop a brain and restart it, and the memories are still there. For example, when somebody falls to the bottom of an icy lake and freezes to death, and then miraculously they're able to recover the person, all their memories are still there.

Daniel Pink (21:44.836)
Right.

Daniel Pink (21:53.55)
Yeah, interesting, yeah.

Yeah.

David Eagleman (22:00.442)
So that tells us that you can freeze something and bring it back and it's fine. And so, yeah, you're right, the reversal of death is a giant one, and I shouldn't blow past it that fast. But I actually had it lumped with two other things that I jotted down, which are cryogenics and uploading, which are all to the same point, which is maybe we don't have to die anymore. I don't know if it'll be 25 years, maybe it'll be 75 years, but I think for my great-grandchildren,


David Eagleman (22:28.025)
The whole concept of death is going to be a very different sort of thing. So cryogenics, as you guys know, is putting people in a deep freeze, like negative 79 degrees, in liquid nitrogen. And the idea is, we don't know how to thaw people yet, but it's a throw to the future, where you say, okay, look, I assume in 200 years they'll know how to unfreeze people and keep the tissue salvaged and so on. The problem is, we have no idea what the world will be in 200 years.

Daniel Pink (22:36.751)
Yeah.

David Eagleman (22:57.719)
Maybe they'll unfreeze you and they'll all be cannibals or something, or maybe the world will be gone from World War V or whatever. But it's an interesting gamble for the future.

Daniel Pink (23:09.434)
Do you have a sense of how many people are actually doing that?

David Eagleman (23:12.697)
There are hundreds. There's a company called Alcor in Arizona that has these big liquid nitrogen tanks and they have hundreds of people so far that are in those tanks. You can either pay for just your head to get frozen or your whole body. And when you

Kevin Kelly (23:29.002)
Which one would you go with the head or the whole body?

Daniel Pink (23:31.618)
Hahaha

David Eagleman (23:31.922)
I would do whole body, because the head is clearly, I mean, the densest representation of you, but you do have all this communication with the outlying areas, and so you might be a slightly different person without that. So I'd do whole body. But what you do is you sign up for this, and then you have like a special tag. And when you're in the hospital dying,

Daniel Pink (23:34.446)
Yeah.

David Eagleman (23:55.596)
Alcor gets a call and they come, you know, flying in, and they grab your body, and they perfuse you with antifreeze, essentially. And then they stick you in this big tank. Yeah. And the third one I wanted to mention in that category is uploading: uploading the brain to silicon, where it can last indefinitely, uploading your consciousness. Now, obviously, there are a couple of issues here, which is,

Kevin Kelly (24:04.962)
Yeah. Go ahead.

David Eagleman (24:21.603)
do we know for sure that if you replicate all the pieces and parts and their algorithm, that will be you, and that will be conscious? We don't know that for sure. That said, I would say the main view in neuroscience is that probably we are just our pieces and parts and what's happening with them. And the reason we think that is because, you know, if you put alcohol in your body, that changes your consciousness; you put in drugs, and you're, you know, talking to

Daniel Pink (24:30.179)
Hmm.

David Eagleman (24:49.313)
silver leprechauns or whatever; if you hit your head, you go unconscious, stuff like this. So it certainly seems like it is the stuff in the skull, but who knows? You guys know there are all these theories about consciousness maybe being this other thing that's outside of us, and the brain has to have its integrity for that to be received. Who knows? But if it is the case, what's called materialism, that

we are our material, then you should be able to replicate that. You could do it out of anything. You could do it out of beer cans and tennis balls or whatever, as long as it's running the same algorithms as your 86 billion neurons, then it should be you. And if that's the case, then we should be able to upload that. Right now, the brain is of such complexity compared to what we can do with computers.

that it's impossible, but I don't think it will be in 25 years.

Kevin Kelly (25:48.204)
Interesting. Well, because you would need to be able to read it out. I mean, as you said earlier, we're not even close to being able to get the status of all the current neurons. And so you would need to be able to do that in order to upload it.

David Eagleman (26:04.676)
Maybe, maybe. Or it is possible that all you would need is the structure. In other words, if you got a three-dimensional scan of it in great detail, that might be sufficient, and then you just get the whole thing running, because a neuron is just a cell, just a specialized cell with these branches coming off of it, where it receives signals and sends signals. And if you could figure out, this is connected to that, and that is connected to that, and so on, you know, each neuron has about 10,000 connections with its neighbors, figure the whole thing out, that might be sufficient. Now, there's an interesting... go ahead.

Daniel Pink (26:39.661)
And I was just wondering, after the upload, what happens? So you upload your consciousness, all right? Then what happens?


David Eagleman (26:48.739)
This is exactly what I was about to say, which is, there's something very interesting philosophically here: is the scan of your brain destructive or non-destructive? If it's non-destructive, and Daniel, I scan your brain and then we upload that file into the computer and we start it up, you're looking at it and you say, well, okay, I see there's this little guy running around in there, but that's not me. It's just a copy of me.

And when you die, when you're on your deathbed, you'll feel like, well, I'm not really living forever; there's just some other guy that thinks he's me living forever. But if we scan you and we kill you a millisecond before we start the computer, then it's like a transfer. You would wake up in the computer and say, I've just jumped over here. Yeah, so there's all kinds of very weird philosophical issues there.

Daniel Pink (27:38.414)
Yeah.

Kevin Kelly (27:43.586)
Yeah. So if...

David Eagleman (27:45.803)
If we kill you one millisecond after we start the computer, that's murder, of course.


Kevin Kelly (27:52.782)
If we take a trend over the past couple of decades, it seems like there are more and more psychedelics becoming available and more and more tolerance for them society-wide. In 25 years, do you see that as a potential area of more: even more psychedelics, even more potent ones, maybe even more known about them, maybe more use on a regular basis or therapeutically? What's that looking like in the best case scenario?

David Eagleman (28:30.041)
I'll just give you my knee-jerk reaction there. Because there's been this renaissance of studying psychedelics scientifically, there's been all kinds of excitement in the field: this is going to cure depression, anxiety, Parkinson's, cancer, whatever. And the fact is, it won't. It'll be helpful for some things, and probably not for most of the things that are being claimed right now.

That's what science is needed for: just sort of taking the wrinkles out of this. With time, a bunch of studies will be run, and eventually, with large enough randomized controlled trials, we'll figure out what they're good for and not. So I don't know that I'm that hopeful that it's going to be the big cure for something. It certainly seems to be helpful in various things when combined with therapy, for example, for taking care of post-traumatic stress disorder and so on.

Kevin Kelly (29:21.336)
I wasn't thinking of them as cures. I was thinking of them as an enhancement. Not the therapeutic curing of illnesses, but the enhancement of insight and enlightenment or whatever.

David Eagleman (29:29.006)
Interesting.

David Eagleman (29:32.653)
Yeah.

David Eagleman (29:37.304)
Yeah, okay, so I'll tell you the way I think about this. My interpretation of psychedelics is, I think, a little different from the way most of my colleagues think about this. When I look at what happens with psychedelics, you know, they're just molecules of particular shapes that bind to particular receptors in the brain, and suddenly your consciousness is very different. What that tells me is that consciousness is very, very fragile, and I think Mother Nature goes through an enormous amount of work

to keep it just right, to keep it just in a particular range such that we can build societies and have conversations and interact with one another and do things that are useful. And that's what's made our species successful. But what psychedelics teach us is it's so easy to knock that off the track one way or the other. Now, in answer to your question, I think that yes, we're gonna find all kinds of synthetic ways to knock stuff off the track.

that's in between "oops, you just went unconscious" and "here's your normal consciousness." We're gonna find all kinds of ways to have weird consciousnesses. Obviously humans have spent their entire history finding these things, ingesting plants and chemicals of all sorts to say, ooh, here's one, here's one where I can get off the track. So yes, I think we'll have plenty more synthetic versions of that, but all it really shows us is you can knock it off the track.

Daniel Pink (30:49.74)
Mm-hmm.

Daniel Pink (31:06.572)
David, another, well go ahead, go ahead, Kevin.

Kevin Kelly (31:07.977)
Well, no, go ahead. Is this a question about psychedelics? OK, so go ahead with a follow-up. I do agree that I think this is a very finely tuned sweet spot in emergent phenomena. But the best case scenario would say, that's true, but it may be that...

Daniel Pink (31:11.776)
No, no, keep on psychedelic, yeah.

Kevin Kelly (31:36.098)
We're not at the end of the evolution of it. It actually might be possible to have a better version of whatever it is that we call consciousness. And it might be that chemicals are one of the ways that you could make an improved version of it. So it's whether or not you think that we are at the ultimate evolution of that, that it cannot be improved.

David Eagleman (32:05.849)
Here's my general sense as a biologist: Mother Nature is smarter than we are and has generally optimized everything. That said, you know, I drink caffeine every day, and I've never smoked cigarettes, but people who smoke cigarettes say it makes them feel more alert and so on. So it may be that there are little tweaks we can do. And I take your point that

a lot of the stuff that we're yoked with, like the emotions we have, the anger and the jealousy and the this and the whatever, all the emotions we have, there are evolutionary stories about the psychology there. Maybe there is a real benefit now in our world to taking drugs that make us much more empathic and see things in a different way. For example, I do a lot of work in the neuroscience of in-groups and out-groups, of polarization, and we're so predisposed to this kind of stuff.

I do agree with you that maybe we can all have things where we say, hey, you know what? I'm seeing this from a higher level than humans evolved to see it from. And there's something really useful there. Yeah.

Daniel Pink (33:13.037)
Hmm.

Kevin Kelly (33:14.818)
Yeah, so damn.

Daniel Pink (33:16.792)
So, I want to get to the tropes in here in a second, Kevin. But let me ask one question first, just sort of looking the other direction. Go back to the David of 2000, 25 years ago. What's going on today, in 2025, that would have surprised the David of 2000?

David Eagleman (33:41.306)
Oh, AI. I mean, I've spent my whole career, everybody in neuroscience has always sort of snickered at AI. We've always thought it was cool, you know, little Hopfield networks and whatever; we thought it was neat, but we were like, it's nothing like the brain. And then all of a sudden, it's the biggest change of our whole careers, for all of us, to see the way that it can put stuff together. And by the way, it's in equal parts thrilling to me and also a little bit depressing, because I look at the way that we generate

words and language, and, you know, I watch my children generating language, and I think, God, there's almost a sense in which we see, okay, well, it's just a trick. You've got this huge network and you see which word has the highest probability to come next. And it makes us feel much more like robots in a sense, because now we can build it. Now we can replicate at least an aspect of it. It's not exactly the same as what the brain does, but

boy, it has surprised us so much. So even for the David of just five years ago, it's blown my mind.
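
[Editor's note: the "highest probability for the next word" mechanism David describes can be caricatured with a toy bigram model. This is purely illustrative; the tiny corpus, names, and counting scheme below are invented here, and a real language model learns these probabilities with a huge neural network over vast text, but the principle of emitting the most likely continuation is the same.]

```python
from collections import Counter, defaultdict

# Toy sketch of next-word prediction: tally which word follows which
# in a tiny corpus, then always emit the most frequent successor.
corpus = ("the brain builds the world and the brain "
          "predicts the world and the brain learns").split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1  # count each observed word pair

def predict_next(word):
    """Return the highest-probability next word observed after `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "brain" follows "the" most often in this corpus
```

Scaling the same idea from bigram counts to a deep network trained on the internet is, roughly, the jump that surprised the field.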

Daniel Pink (34:46.226)
Do you think the David of 25 years ago would have believed that we have AI with the capacity that it has today? If you had presented that to David of 2000 as the best case scenario, would he have said, you're full of it?

David Eagleman (35:00.249)
Yes, yeah. I mean, for example, you know, the Turing test, where you see, okay, am I speaking with a real human or is it AI? That's in the dust. I mean, we just blew past that, you know, whatever, three, four years ago. And that always seemed to me like such a distant fantasy. But not only the Turing test, other tests too. There's something called the Lovelace test, proposed in 2013: can AI be creative?

Daniel Pink (35:08.173)
Yeah.

Daniel Pink (35:24.632)
Mm-hmm.

David Eagleman (35:28.537)
And by the way, when this paper was written, it seemed so crazy that the example was: could you make an AI where you say, hey, tell me a story about a boy on Mars who saves the world with the help of a girl and a cat? And it seemed like, ha ha, AI could never make up a story like that. And that's been blown away. So by the way, I don't know if you guys noticed, but I recently proposed in the literature a new test for AI, for

how we would know if AI is really intelligent, as opposed to just a statistical parrot doing words. And my suggestion for that is: can it do scientific discovery? As in, can it actually say, look, I'm not just piecing facts together; I'm saying, what if there's something wrong with the framework we have? I'm gonna propose this new framework, and I'm actually gonna simulate that out and figure out that...

Oh yeah, that actually explains the data better. I mean, that's how real science gets done. Einstein thinks about, what if I were riding on a photon, and what would the world be like, and comes up with the special theory of relativity as a result. That's the kind of thing that I'm really interested in seeing in 25 years. Will we get AI to be able to do something like that?

Kevin Kelly (36:47.95)
So maybe we'll jump ahead with a long bet. The very first long bet that was made at the Long Now Foundation was between Mitch Kapor and Ray Kurzweil, about the Turing test, and 2029 was the bet. And five years ago, I thought there was no way that Ray's going to win. Ray was for it; Mitch was against it.

David Eagleman (37:05.879)
Wow.

Kevin Kelly (37:18.638)
However, in reading the bet and talking to Mitch recently, I said, do you think you're going to concede? He said, read the bet. The bet is that the Turing test has to take two hours. You get to interrogate it for two hours. And I don't think even today there's an AI

Daniel Pink (37:42.456)
Hmm.

Kevin Kelly (37:46.523)
that would fool me for two hours.

Daniel Pink (37:48.982)
I mean, Kevin, it's fooling some people who have basically boyfriends and girlfriends who they're spending more than two hours with.

Kevin Kelly (37:51.99)
I'm gonna go.

Yeah. So one way of changing the Turing test is to extend it. It's like, well, if I lived with this person for five years, would I still think that they're a real person? So yes, I think the scientific thing, but there is a time component that I had not thought about that I think is actually going to be very important.

Daniel Pink (37:58.347)
Interesting.

David Eagleman (38:24.505)
I certainly think by 2029 that will be taken care of.

Kevin Kelly (38:29.826)
That by 2029

Daniel Pink (38:32.343)
A two-hour Turing test. I agree with that completely. I would bet a hundred bucks on that.

David Eagleman (38:35.469)
Yeah.

Kevin Kelly (38:35.532)
Really? You would not be able to tell whether it was a human or an AI that you were.

Daniel Pink (38:41.301)
I'll take that bet right now. I mean, we've leapfrogged over into long bets, but maybe we can re-edit this. I'll take that bet. I'll double my money, I'll double the bet on that. A two-hour Turing test by 2029. I'm in.

David Eagleman (38:42.955)
I would too, I would too.

Kevin Kelly (38:46.318)
Yeah.

So, David, is there something that you feel sure about in 25 years that you would be willing to make a bet about it?

David Eagleman (38:56.632)
Yeah.

Kevin Kelly (39:09.91)
including a negative, something that you would bet would not happen in 25 years.

David Eagleman (39:18.913)
Okay, I think I might have a couple of these, but I'm trying to think of how to define it clearly. I'm sorry, I've got the hiccups. One thing that I'm optimistic about is I think by 25 years from now, most Nobel prizes will be won in large part because of the help of AI.

And maybe my long bet is that there will be a prize awarded directly to an AI that has figured something out by itself. And I don't know if the Nobel Prize committee will evolve to do that or whether they'll say, it's, you know, we'll give it to the people who programmed the AI originally. I'm not sure how that'll play out, but yeah.

Kevin Kelly (39:48.238)
Mm.

Daniel Pink (39:55.936)
Ooh.

Daniel Pink (40:10.646)
Hmm.

David Eagleman (40:14.924)
AI, I guess my bet, I'm trying to think how to define this really clearly, but the bet is that science will be driven largely by AI and we scientists will be sort of like the movie directors, but almost like an old movie director who people aren't really listening to that much and they're just sort of pointing the way. I don't know if that's...

Kevin Kelly (40:37.038)
So maybe I'm trying to recast this as a bettable, adjudicatable bet. Could you say that within 25 years, at least one Nobel Prize will be awarded to a group in which an AI is credited on the same level as an author or as a principal?

David Eagleman (41:04.685)
Yes. Although there's a sense in which that's already happened. You know, like with AlphaFold: Demis Hassabis and his team made AlphaFold, which figured out the structure of proteins. One could say that's already an AI co-author in a sense, but I guess...

Kevin Kelly (41:20.334)
But was it acknowledged in that way?

David Eagleman (41:23.593)
It wasn't; the AI was not named as a recipient, right. But I think, especially once we've got this Turing test passed, you'll have AIs that are like people: here's Prometheus, Prometheus just nailed another great thing in physics, and so we're going to give Prometheus the prize. I guess it depends.

Daniel Pink (41:32.481)
Could be.

Daniel Pink (41:35.969)
Yeah.

Daniel Pink (41:42.871)
I wonder if the first step in that direction isn't the Nobel Prize, which is a little bit more rarefied, but an AI wins a Grammy. An AI wins for best song. An AI wins an Emmy, or an Oscar for best screenplay.

David Eagleman (41:59.598)
Yeah, yeah, that's right.

Daniel Pink (42:01.367)
In 25 years, I might.

I'll bet 100 bucks in 25 years that AI will get one of the EGOTs, Emmy, Grammy, Oscar, Tony.

David Eagleman (42:14.307)
That's good. And you know what'll happen then: there will be a split, where we then have the humans-only version of it. Exactly like the game of chess, you know: AI beat humans hands down, but now humans still play with one another in human-only competitions. Yeah. But that's interesting. If you're going to spend, you know, 15 bucks on a movie, are you going to go to the AI movie, which is

Daniel Pink (42:17.367)
But that's 25 years. Yeah.

Daniel Pink (42:24.043)
Yeah.

Yeah.

David Eagleman (42:42.165)
extraordinarily better than any human screenwriter could do? Or is there still something about human screenwriters that we like and appreciate?

Kevin Kelly (42:48.93)
Yeah, OK. But I'd like to come back to your bet, David. We have Dan's bet. What about your bet in terms of the role of AI in science? If there's some way that we can make that into something bettable, falsifiable.

David Eagleman (42:52.089)
Thanks.

David Eagleman (43:08.311)
Yeah. I guess it's that an AI will win a Nobel Prize, will be named specifically as one of the co-recipients, in 25 years. I also have two maybe smaller long bets, but again, I'm not sure how to make them rigorous. I think in terms of lifespan... okay, let me try to make it rigorous: by 2050, average human lifespan will be...

Kevin Kelly (43:14.862)
Yeah, as a recipient, right. In 25 years. Okay.

David Eagleman (43:37.377)
Okay. Average human lifespan will be above 90 years in at least 10 countries. How about that? I'll make it really rigorous that way.

Kevin Kelly (43:43.566)
In 2050, average lifespan will be above 90 in certain countries.

David Eagleman (43:49.141)
In at least 10 of them. Yep. Yep.

Daniel Pink (43:49.289)
in at least 10 plus countries. And what's the baseline? What are we working against right now on that? Yeah.

Kevin Kelly (43:55.081)
What do we think it is right now?

David Eagleman (43:57.485)
It's slightly different for men and women, and I can't remember off the top of my head, but for men it's like 74, 76 or something, and for women it's a little bit higher. That's the average lifespan. And there's no country with a 90-plus average lifespan right now, right? Yeah.

Daniel Pink (44:11.229)
Average. Lifespan. Yeah. Okay.

Kevin Kelly (44:14.72)
Okay, wow, that's ambitious. Okay, well, so.

David Eagleman (44:20.249)
Yeah.

David Eagleman (44:24.429)
And I have one more. Well, I don't know if it's really a bet; I'll just mention it. So I just patented something that I'm now working to get implemented, which is a new algorithm for social media that reduces polarization. As we all know, algorithms currently increase polarization

by, you know, feeding you what you want. And if you separate people out by what kinds of things they like and click on, they become more extreme; this is a well-known phenomenon. So it turns out, I think there are ways to use social media algorithms to complexify relationships. As in, let's say the three of us have totally different views on some hot-button issue. But when we are connected on the social media algorithm,

we don't know that, because we all happen to love this kind of dog, and we love, you know, surfing, and we love whatever. And so all the things that we have in common get surfaced. And it's only much later, after we've passed some connection threshold, that we are ever exposed to the fact that we have different views on something. And by that point, we like each other. And so we say, wait, tell me about that, I didn't know you felt that way about it. And we listen to each other. So I guess it's that.

I guess the long bet would be that by 2050, social media companies, which I assume will still be there, let's say the dominant social media company, will be using a relationship-complexifying algorithm to reduce polarization in some measurable way. And I don't know if that's easily measurable.

Yeah, all right. We can bag that one.
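
[Editor's note: the mechanism Eagleman sketches, surface commonalities first and hold back hot-button differences until a connection threshold is crossed, can be caricatured in a few lines. This is a hypothetical sketch of the idea as he describes it, not his patented algorithm; the threshold value and all field names are invented for illustration.]

```python
# Sketch of a "relationship complexifying" feed rule: two users see only
# their shared interests until they've interacted enough, and only then
# are their differing hot-button views surfaced.
CONNECTION_THRESHOLD = 10  # illustrative: interactions needed before differences show

def visible_topics(user_a, user_b, interactions):
    """Return the set of topics the feed may surface between two users."""
    shared = user_a["interests"] & user_b["interests"]
    if interactions < CONNECTION_THRESHOLD:
        return shared  # early on: only what they have in common
    # Past the threshold: differing views can appear, now that rapport exists.
    return shared | (user_a["hot_buttons"] ^ user_b["hot_buttons"])

alice = {"interests": {"dogs", "surfing"}, "hot_buttons": {"policy_x"}}
bob   = {"interests": {"dogs", "chess"},   "hot_buttons": {"policy_y"}}

print(visible_topics(alice, bob, 3))   # -> {'dogs'}
print(visible_topics(alice, bob, 15))  # now includes the differing views
```

The design choice is simply to reorder exposure, not to hide disagreement forever, which matches the "by that point, we like each other" framing above.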

Kevin Kelly (46:17.089)
Okay.

Daniel Pink (46:17.877)
I don't know, I wouldn't bag it, because I'm on the other side of that bet. No, no, I'm just wondering whether there's going to be social media in 25 years, and also whether there's any kind of economic incentive for that sort of pro-social complexifying algorithm. But who knows? Should we move to tropes?

David Eagleman (46:21.561)
Great. Tell me, tell me.

David Eagleman (46:38.425)
As a side note, I think the economic incentive is just that for the company that does that, they get to advertise and show off, hey, we're actually making a better world. And so that's the incentive there. That's what I think.

Kevin Kelly (46:41.442)
Yeah.

Daniel Pink (46:54.038)
So what we also want to do here on Best Case Scenarios is offer up some common tropes on brain science from science fiction novels, films, things that are in the popular culture, and see whether you think they're real. So let's go to one: predictive policing. Essentially, knowing that someone's going to commit a crime and then stopping them before it happens. We've seen that movie. Exactly.

Kevin Kelly (47:23.426)
like in Minority Report.

David Eagleman (47:25.453)
Yeah, total, total fantasy. And the reason is, even if you knew everything about someone's brain and all the neurons and whatever, your brain changes every second because of the conversations you have and the people you run into and whether you, you know, crashed your car that morning and whatever. I mean, your brain is constantly evolving based on the world around you. And that's way too complex, even a thousand years from now, to predict the whole world. And therefore,

the idea that you would know if somebody is going to commit a crime? Total fantasy.

Daniel Pink (47:59.35)
Okay, we have another one, about instant knowledge import. So that instead of reading a Wikipedia page about the Peloponnesian War, I get a zap of some kind and now I know everything about the Peloponnesian War.

David Eagleman (48:14.371)
So here's the thing: your brain and my brain and Kevin's brain, they're all very different in their wiring, based on our thin trajectory of space and time that's led our brain to this point. So there'd be no way to say, hey, here's the Peloponnesian War in a little package that I'm gonna download. Because to download it into your brain, I'd have to cast it in a particular way; to download it into my brain, I'd be casting it in terms of, it's sort of like

what Hannibal did when he attacked Rome, and blah, blah; and to download it into Kevin's brain, it's like, okay, well, it's sort of like the way that this technology works, blah, blah, whatever. They're totally different analogies. And that's why we might have instant knowledge import in the distant, distant future, but it would only be after we can identify the exact specifics of an individual's brain.

Kevin Kelly (49:08.366)
I see. OK, so you need that map that we don't have that you were talking about earlier. If you had that, maybe you might be able to do an instant input.

David Eagleman (49:12.459)
Exactly.

David Eagleman (49:16.822)
Mm-hmm. That's right.

Daniel Pink (49:20.384)
What about telepathy, say via brain chips or something like that?

David Eagleman (49:24.555)
Oh yeah, via brain chips, that's easy. And in fact, people are already doing that. You can even do that with EEG, where, say, in a very simple video game, one person is sitting in one room seeing the screen and thinks, you should move the cursor up, and the other person gets a zap on their arm muscle that tells them to move the cursor up. So they can't see the screen, but they're playing the game. So telepathy via neurotechnology connecting brains, that's straightforward.

Kevin Kelly (49:58.644)
Well, okay, so it's straightforward. In 25 years, do you think there'll be a lot of it? Do you think that people, let's say avid gamers, will decide that winning the game is so important that they're willing to have invasive surgery and work with another team member? I don't know. What do you think the state would be in 25 years?

David Eagleman (50:22.915)
Here's what I think. I think we're going to discover all kinds of ways that telepathy in this way, via neurotech, is going to be useful. There'll be whole new games and things that emerge from this. And by the way, it doesn't necessarily need to be invasive, like getting open-head surgery. It could be EEG, or you can imagine the next versions of that in 25 years that are non-invasive, or nanorobots or whatever. So I think we'll discover all kinds of interesting things to do there. But generally,

I think we're not gonna want telepathy, and here's why: because you don't want to share, even with the people closest to you, your spouse, your best friends, you don't wanna share all your thinking and your thoughts. We have this whole frontal lobe that says, okay, I've got this thought, maybe I'll say this, maybe I'll think that.

Okay, but here's how I'm going to translate that into language. And what's actually going to come out of my mouth is just the appropriate thing to say, rather than all the things I'm thinking, like, you have some broccoli in your teeth, or, you know, you look tired today, or whatever. You certainly don't want to have telepathic sharing with the world. You want to be able to have those barriers when you want them.

Kevin Kelly (51:44.632)
Well, unless you're able to actually tame or discipline that, so that you're only sharing the kinds of things that you would normally share from the frontal lobe.

David Eagleman (51:57.258)
That's speech anyway; that is what we do with speech. So you might as well just stick with that.

Kevin Kelly (52:06.318)
Okay, maybe.

David Eagleman (52:10.519)
That's my two cents about neurotech telepathy.

Daniel Pink (52:12.148)
Okay, another one. What about some kind of intervention, chemical, electrical, an implant, to make you an instant genius? It's kind of like Adderall on Adderall.

David Eagleman (52:31.385)
I don't think that's possible, because genius is about putting together the experiences that you've had, the things you've seen: wait, if I twist this, and wait, there's this kind of gear that could go here, and wait, what if I put this lever over here, and blah, blah. That's what genius is really about, in any domain. And so the idea that a drug could do it, because remember, drugs are just some shape of molecule that binds receptors, that's not gonna do it.

And an implant, I also don't see how that would work in terms of inspiring insight. So I'm voting against that.

Daniel Pink (53:08.373)
Okay, here's one that I thought of. It's not a trope yet, but as I was going over these tropes and thinking this through, do you think there's a prospect for, I'm gonna call this memory leasing, all right? Where I could essentially lease you my memories. Or even better, I could lease LeBron James' memory of winning a championship and could simulate that in my head and feel

David Eagleman (53:35.522)
Okay.

Daniel Pink (53:38.397)
exhilaration that he felt when that happened.

David Eagleman (53:41.441)
Okay, great question. There was a movie in the 90s called Strange Days, which was predicated on this idea that you can record from someone's head and then you can put this thing on. I actually did one of my

Kevin Kelly (53:46.744)
yeah.

Yeah.

Kevin Kelly (53:52.952)
Well, because what they were doing was they were doing snuff films, basically. They were going out and committing crimes and people were enjoying the sensation of that crime without having committed the crime.

David Eagleman (54:06.733)
That's right. That's one of the things that happened with it, exactly. But the idea was that this is technology where you can record your memories and someone else could watch your memories generally. Yep. I did one of my podcasts...

Daniel Pink (54:07.495)
huh. Yeah.

Kevin Kelly (54:08.876)
Yeah, right, right, right, right.

Daniel Pink (54:16.726)
Not only watch, but actually experience. Like someone says, my God, I had such a great experience doing something, my memory of this is so vivid and wonderful; here, rent it on the iTunes Store.

David Eagleman (54:25.943)
Right. Nope, I got it. I got it.

Yep. The answer is: totally impossible, for the same reason that I mentioned about uploading knowledge, which is that to have a memory requires plugging in and knowing exactly what my brain is about, what your brain is about, and how this is like that. And when this woman said this to you, it was like when you were in college and you had this girlfriend who said this thing, and

Daniel Pink (54:33.725)
Really?

David Eagleman (54:57.753)
how her hair reminds you of the way that this other woman you know does her hair. Everything we experience is plugging into this enormous matrix of our memories, our own experience. So I don't think so. Yeah, I mean, you can obviously have something like television or movie, that's easy enough to do, but we already do that. And that's not quite a

Kevin Kelly (55:18.798)
So another Hollywood trope is Planet of the Apes. We've been talking about human brains, but what about animal brains? Enhancing, amplifying animal brains, making them smarter and smarter until we have Planet of the Apes.

Daniel Pink (55:29.076)
Hmm.

David Eagleman (55:30.616)
Yeah.

David Eagleman (55:36.634)
Right. I'm pretty enthusiastic about that possibility. We don't know exactly the path to take to get there, but you know, if you look at an ape's brain, it's basically the same as ours. And by the way, this is true of a dog brain, a horse brain, anything you look at; it's all the same stuff. You've got your hippocampus, your cerebellum, your cortex, whatever; the neurons are just neurons. There's nothing terribly different about what's going on. Somehow what Mother Nature discovered was this tweak

to make it so that we were able to do long-term thinking and put ideas together in a deeper way. Now, obviously we've got other advantages too, like opposable thumbs and language and having a larynx and stuff like that. That makes a giant difference in terms of passing the information along. But I think the algorithm to say, okay, look, Mr. Ape, you've seen this and you've seen that and you've seen that. How could you put those things together?

and hypothesize, hey, wait, what if I did blah, blah? I think if we can figure out what that algorithmic difference is, because it's not big, then we should be able to help our neighbors out.

Daniel Pink (56:42.888)
Hmm. Hmm.

Kevin Kelly (56:49.272)
So do you think it's a change in the brain structure or do you think it's, I mean, like the entire brain structure or do you think it's a particular organ that's missing?

David Eagleman (57:03.573)
It's not an organ. I think it's going to be something algorithmic. And what that means is, okay, look, if you have a little bit more of this receptor, and you have a slight tweak in the way this is wired to that or something, then you get these things running around where you have feedback loops and whatever in the right way. Obviously I'm speaking loosely, because we don't know what that is yet, but it's going to be something like that.

Kevin Kelly (57:30.038)
Yeah. Could it just simply be something that was trained, that it was a matter of...

David Eagleman (57:37.882)
No, because people have tried training apes, you know, Koko, and people have spent their whole lives training them, and it's not sufficient. But the point is, if I look at my Mac laptop and I look at my toaster, they're both made of the same stuff. They've got wires and transistors and whatever. And so if I could just take the toaster and figure out, right, what do I need? Oh, if I add this thing, and ooh, I add a screen and whatever, then you start getting somewhere. That's too extreme a difference, that example, but yeah.

Daniel Pink (58:06.419)
This is a trope, but it's also a trope with some prickles of reality to it. So I'll see Kevin's Planet of the Apes and raise him Dr. Dolittle: being able to talk to the animals. And there's been some very interesting stuff now about human beings' ability to use other kinds of signals that other animals are giving, whether it's the clicks of whales or other kinds of sounds, as a way to decipher what's in their head.

Do you think there is a prospect of cross-species communication? I mean, I guess there is in the same way that you can say stuff to your dog, but a more robust form of cross-species communication?

David Eagleman (58:41.975)
Yes, yes.

David Eagleman (58:48.405)
Right. Well, let me answer that in two ways. So one is several friends of mine are working on these projects of using AI to decode animal language. And I'm very enthusiastic about that. And I think there will be all kinds of progress there in the coming few years. That leaves the question of, if we can figure out whale communication,

Daniel Pink (59:00.52)
Yeah.

David Eagleman (59:14.669)
They still might, well, whales are probably pretty smart, but let's say we figured out communication between vervet monkeys. You know, maybe they're not smart enough. Maybe the communication is mostly, watch out, or hey, climb the tree, or hey, get down from the tree. And not much more than that. And so we still might need animal uplift to help them say something more interesting. But the other thing, the other thing I want to point out though is what you said is, hey, can we,

Daniel Pink (59:38.834)
I see.

David Eagleman (59:44.25)
actually tap into what they're experiencing or how they're speaking. And as you may know, I've been very enthusiastic for a decade and a half about sensory substitution and teaching ourselves new information in the brain by pushing it in that way. So, for example, most insects and a lot of mammals and birds pick up on the magnetic field of the earth. That's just something that they can detect readily. And it would be awesome if we just

Daniel Pink (59:55.858)
Yeah.

David Eagleman (01:00:13.785)
took that on as a typical thing. Many insects like honeybees will see in the ultraviolet range. That's straightforward. We could do that too. Rattlesnakes see in the infrared range and so on. So I think we'll have better conversations with other animals as we come to understand their world better.

Kevin Kelly (01:00:32.558)
I saw just, was it last week or this month, an announcement of contact lenses that expanded the spectrum for humans. Yeah.

Daniel Pink (01:00:41.937)
wow.

David Eagleman (01:00:44.345)
Cool, this I gotta see. I assume it's with ultraviolet, because it turns out our lens blocks ultraviolet, but our retina actually can see ultraviolet. And the reason this got discovered is because people get artificial lenses implanted when they have a problem; they need to get their lens taken out and an artificial one put in, but those don't block UV. And suddenly people realized, wait a minute, I'm picking up on things that other people aren't seeing.

Kevin Kelly (01:00:47.725)
Yeah.

Yeah. Yeah.

Right. Right.

Daniel Pink (01:01:10.014)
wow.

Kevin Kelly (01:01:12.183)
Right, Yeah. Let's see, any more tropes, Dan?

Daniel Pink (01:01:12.328)
Wow.

Daniel Pink (01:01:25.044)
One other trope. What about someone entering my dream, or my entering their dream? So I insert myself as a character in their dream.

Kevin Kelly (01:01:38.52)
I see.

David Eagleman (01:01:40.74)
Fascinating. Okay. Here's what we can do right now: measure what's happening in the visual cortex and actually reconstruct somebody's dream pretty well, as in, I think you were flying and there was an elephant that came into the room, because the primary visual cortex can kind of be read off like a television screen, a warped TV screen. Some groups have shown that in the last few years. But inserting yourself, again, would have to be...

You'd have to have what's called system identification. You'd have to know their brains so well that you can say, okay, now I want Daniel Pink to enter their dream. And you'd have to stimulate all the neurons that for them equal Daniel Pink and then have you do particular things.

Daniel Pink (01:02:16.02)
Hmm.

Daniel Pink (01:02:24.765)
Sounds difficult, yeah.

Kevin Kelly (01:02:27.288)
So here's another trope, and I'm going to expand on it. Another trope is being able to dial up emotions. And let me preface this by saying that you were talking about your own shock, within the last five years, at AI having some kind of reasoning and being able to do things that many, many people said would never be done. That computers could never be creative, and we already see that they can be a little creative. I think your bigger shock is coming

as we engineer emotions into the AIs. We give them an emotional quotient, and to the degree that people bond with them, not just broken people, but very well-adjusted people who have dogs and love their dogs, they will start to love these agents. So I think part of what will come out of this is that it will start to unbundle

some of the concepts of emotion that right now we don't really understand. We don't have good technical names for them. And as we unbundle them and try to put them into the machines, that forces us to be clearer, more precise, and more understanding about what they are. Can we then also use that knowledge to

enhance, soften, or manipulate human emotions in some way, for a benefit, for therapy, for whatever? So what about the emotional aspect of the brain?

David Eagleman (01:04:02.244)
I would say two things here. One is we're already there with AI impersonating emotion, so it's not clear to me that we would need to actually make the AI experience it. It already does a perfect job acting angry or happy or whatever.

Daniel Pink (01:04:09.235)
Hmm.

Kevin Kelly (01:04:13.838)
I just want to say: this whole distinction between whether it's real or not, I think we have to get rid of it. It's just impersonating thinking, is it thinking? It doesn't matter whether you impersonate the emotion or whether you're actually doing it. That's a separate question.

David Eagleman (01:04:26.039)
Agreed, I agree. And I was just pointing out that AI already does that. But your question is about whether we can do it to humans. What's interesting is that, in a sense, that's what storytelling is: we're dialing people's emotions up or down like a thermostat. Or any sort of political messaging, you know, you make people angry at some other group or whatever. And you're asking maybe the question beyond that, which is, okay, what if we have a neural implant?

Daniel Pink (01:04:45.66)
Yeah.

David Eagleman (01:04:54.489)
Could we actually crank things up or down? I think we certainly could. But it's a very interesting question, because you could turn down fear and make yourself brave, or does that make you reckless? You could turn up joy and make yourself ecstatic, or does that make you manic? I don't think we'd be very good stewards of dialing things around unless it was for very particular reasons, very particular scenarios.

Kevin Kelly (01:05:25.922)
You certainly can imagine military people deciding that for this next day, no fear.

David Eagleman (01:05:33.707)
Okay, fine, but just use robots. Like, I don't know why you would need to get an expensive human in there.

Daniel Pink (01:05:34.099)
Mm-hmm.

Yeah, I imagine there might be a way to put sort of like a governor on it, like: don't dial up my joy beyond this amount. You know?

David Eagleman (01:05:46.883)
How tragic. But why?

Kevin Kelly (01:05:47.0)
Yeah.

Daniel Pink (01:05:49.475)
Well, to avoid the mania. But we have things like that already. You're talking about any kind of storytelling or rhetoric that has the effect of dialing it up, but we also have psychopharmacological drugs that serve that purpose in some way. That is, they prevent you from spinning into anxiety or falling through the floor of depression, or they take your mania and

David Eagleman (01:05:53.017)
Got it.

Kevin Kelly (01:06:15.438)
rage. Yeah.

Daniel Pink (01:06:19.419)
pull it back.

Kevin Kelly (01:06:21.422)
Yeah. Yeah, it could be drugs rather than a device.

David Eagleman (01:06:22.435)
That's right.

Daniel Pink (01:06:25.577)
Yeah.

David Eagleman (01:06:26.147)
That's right. Although, last thing, I'll just mention that people are already doing this with devices. You put in what's called a deep brain stimulator to keep people out of the lows of depression, and that seems to be pretty successful.

Kevin Kelly (01:06:37.422)
Okay, so Dan, you wanna go to the last part?

Daniel Pink (01:06:42.131)
Yeah, so the final segment here is called Three Tips for the Road. David, we're going to ask you for some recommendations. When we think about the future of brain science, when we think about the best case scenario for brain science, what is one thing that people could read to help them understand this emerging world?

David Eagleman (01:07:16.761)
It's all moving so fast, I'm afraid I don't think there is something to read. Okay, I feel bad doing this, but I think I would recommend my podcast, Inner Cosmos, because that's where every week I'm pushing on it. Okay.

Kevin Kelly (01:07:24.098)
Or something to watch.

Daniel Pink (01:07:25.915)
Watch.

Daniel Pink (01:07:34.279)
Great, so let me rephrase the question. We're now moving on to Three Tips for the Road, where we get your recommendations. So let's start: David, what is one book to read, something to listen to, or something to watch to help us understand this emerging world more clearly?

David Eagleman (01:07:57.092)
Here's what I would say. There doesn't exist a book on this yet only because it's all moving so fast. So although I feel a little guilty doing this, I would recommend my podcast, Inner Cosmos, because each week I'm pushing on the envelope of the future, either with monologues or with guests talking about each of these issues. So that'd be my answer.

Daniel Pink (01:08:20.453)
So let's talk about an action now. Is there something that each of us can do, beyond reading and absorbing the information, to move into this best case scenario world for brain science? A behavior or a practice that we can adopt?

David Eagleman (01:08:37.719)
I think the important thing for all of us is keeping our brain as healthy as possible. And the way to do that is not only with diet and sleep and so on, but by challenging yourself every day, every hour, keeping yourself in the realm of frustrating but achievable. Until we get these great pharmaceutical or neurotech interventions, the best thing we know of to keep your brain healthy is to really challenge it. So that would be my piece of advice there.

Daniel Pink (01:09:14.461)
Great, and then finally, what is a question that you are asking yourself about this world or that we should be asking ourselves as a society about this emerging world?

David Eagleman (01:09:28.185)
I guess the thing I'm asking myself right now is, given the yin and yang of AI suddenly appearing on the scene, how can we make sure that we are making ourselves into better, stronger, smarter humans by leveraging it, as opposed to getting taken over by the wave? How can we surf the wave? That's what I'm asking myself. And happily, there are so many ways to do that.

But that's the thing on my mind. How do we make AI our partner rather than a rival?

Kevin Kelly (01:09:59.566)
And what are some of the ways that you yourself are navigating that balance?

David Eagleman (01:10:07.263)
Yeah, it's trying to maximize my own creativity using AI as a tool, rather than saying, well, forget it, it can draw better than I can draw, or it can animate something better than I can. Instead, trying to figure out: okay, great, now that we've got a tool that can do this, how can I make the best next thing?

Kevin Kelly (01:10:33.454)
That's fantastic. So David, thank you for this. You've inspired me and given me great optimism about the next 25 years in brain science. I think it's fabulous. And I think you've also outlined some things that we could aim for, which is the purpose of thinking about this: here are some technological goals that we can try to make happen. These little

David Eagleman (01:10:44.761)
Great.

Kevin Kelly (01:11:03.042)
diffused nanobots in our brain, that would be an incredible thing to achieve. And maybe somebody out there will be inspired to work on that. So thank you for taking the time and giving us your best case scenarios.

David Eagleman (01:11:15.779)
Such a pleasure. Thank you. Great to see you guys.

Kevin Kelly (01:11:20.312)
We'll sign off. Until next week.

David Eagleman (01:11:23.337)
Okay, great. By the way, Daniel, I wanted to ask you, is that a real library behind you? Is that yours?

Daniel Pink (01:11:24.869)
Alright.

Kevin Kelly (01:11:29.837)
That's it.

Daniel Pink (01:11:30.488)
Yeah, yeah, see. It's a...

David Eagleman (01:11:32.387)
That's awesome.

Kevin Kelly (01:11:35.618)
Yeah. He's written all those books. Those are all the books that he's written.

Daniel Pink (01:11:35.71)
This is just my garage.

Daniel Pink (01:11:40.208)
Yeah, this is just my... Yeah, it's a great backdrop, but, you know...

David Eagleman (01:11:44.761)
Ugh.

Yes, that's so lovely. And Kevin's got great bookshelves at his house too, of course. I love that. Do you guys have the same sense that I do, that whenever I visit the house of someone who's, let's say, under the age of 40, nobody has bookshelves? They don't even own bookshelves. They don't have anything.

Kevin Kelly (01:12:05.432)
Yeah.

Daniel Pink (01:12:06.034)
no.

Kevin Kelly (01:12:09.196)
Yeah, it's a shame. So I'm going to stop the recording. And, yeah, you're on Riverside, so you know about this.

David Eagleman (01:12:19.31)
I'll keep my tab open. Yeah. Yeah.