Chasing Leviathan

In this episode of Chasing Leviathan, PJ and Dr. Iain Thomson discuss his book Heidegger on Technology's Danger and Promise in the Age of AI. Together they explore the essence of technology, its historical implications, and the philosophical underpinnings that shape our understanding of technological advancements. The discussion delves into the relationship between technology, metaphysics, and education, emphasizing the importance of pedagogy in navigating the complexities of modern technological society. Thomson also examines the impact of AI on the workforce and the importance of education in cultivating a meaningful life amid technological advancement.

Make sure to check out Dr. Thomson's book: Heidegger on Technology's Danger and Promise in the Age of AI (Elements in the Philosophy of Martin Heidegger) 👉 https://www.amazon.com/dp/1009629433/ 

Check out our blog at www.candidgoatproductions.com

Who thinks that they can subdue Leviathan? Strength resides in its neck; dismay goes before it. When it rises up, the mighty are terrified. Nothing on earth is its equal. It is without fear. It looks down on all who are haughty; it is king over all who are proud. 

These words inspired PJ Wehry to create Chasing Leviathan. Chasing Leviathan was born out of two ideals: that truth is worth pursuing but will never be subjugated, and the discipline of listening is one of the most important habits anyone can develop. 

Every episode is a dialogue, a journey into the depths of a meaningful question explored through the lens of personal experience or professional expertise.


PJ Wehry (00:03.95)
Hello and welcome to Chasing Leviathan. I'm your host, PJ Wehry, and I'm here today with Dr. Iain Thomson, professor of philosophy at the University of New Mexico. And we're here to talk about his book, Heidegger on Technology's Danger and Promise in the Age of AI. Dr. Thomson, wonderful to have you on today.

Iain Thomson (00:22.096)
Thanks PJ, glad to be here.

PJ Wehry (00:25.176)
So Dr. Thomson, I mean, immediately AI is one of those algorithm buzzwords, but why this book? Why did you feel — you know, it's a nice, kind of pithy book, like the Cambridge Elements are — why'd you feel the need to write this book?

Iain Thomson (00:39.942)
Yeah, good question. So I've been working on Heidegger and technology since last century. It was a big part of my PhD dissertation back in the day. And so in 2005, I brought out a book called Heidegger on Ontotheology, and the subtitle is Technology and the Politics of Education. So I was already thinking a lot about the way technology is transforming our understanding and practice of education,

especially higher education, which is the field I work in. I also was a student of Hubert Dreyfus at Berkeley. And he was a pretty famous critic of AI. That's how he ended up at Berkeley. It's not like Berkeley decided they needed a Heidegger scholar. It's more they decided they needed the David who slew the Goliath of AI at MIT. So he had that kind of legacy of...

being a critic of AI, and all of us who came up as his students were immersed in that pretty deeply. So for me, it was really interesting to see the breakthroughs that we had with deep learning in AI and self-driving cars, which in a lot of ways seemed like things that should not have been able to happen given Dreyfus's critique. So I found that very provocative and wanted to go back

to Heidegger and kind of think through all of that. I mean, part of my stuff on Heidegger and education is that the teacher isn't a know-it-all. They're the person who helps people learn how to learn by learning in public. So by showing our own learning, in both senses, we help other people learn to learn. And learning about the latest developments in AI, but also in fields like

genetic engineering, was part of what was involved in writing this book. The first half of it's sort of an examination of what it is about technological developments that makes people anxious and worried — which is where Heidegger's own critique started from — and what they are today. He was thinking about atomic weapons, which are big here in New Mexico still, unfortunately. So there's some atomic weapons, and then interesting ways in which AI development echoes and reflects some of the

Iain Thomson (03:03.62)
historical developments we've seen in atomic weapons.

PJ Wehry (03:09.422)
Yeah. And forgive me. I know this is probably an elementary question, but especially with Heidegger, you know, elementary questions often become complex very quickly. When you say technology — and you're using Heidegger's definition of technology — can you talk about what he means by technology? Because especially at a popular level, it just means anything with electronics, but then you start examining that definition and it doesn't work. So tell us a little bit about, when you say technology, what do you mean?

And what does Heidegger mean?

Iain Thomson (03:38.982)
Yeah. As you said, the elementary questions are actually the difficult ones. Alexander Neumann used to make a joke: never agree to fight a guy named Tiny. So when somebody says, I just have a little question, I just have a small question — that's when you need to get worried. And that's true. So the most difficult part of the book —

PJ Wehry (03:51.274)
You're right! I like it.

PJ Wehry (03:57.73)
Yeah, no, no, hey, we only got, you know, it's only an hour, right?

Iain Thomson (04:08.422)
— I don't know about reading it, but to understand — is what Heidegger means by the essence of technology. So his view is that you can't understand technology by understanding machines and electronics. Instead, you have to understand machines and electronics by understanding the essence of technology. And what the essence of technology turns out to be is something that machine technology like ChatGPT and,

you know, genetic engineering — the various ways we come up with to intervene directly on the human genome, and the genomes of all organisms — these are just like... when you look at a mountain — I live right at the foothills of the Sandia Mountains — when you look at the mountains, there are distinctive rock formations that really say Sandia to you, or whatever the mountain is, and

the way in which a particular rock formation suggests the mountains as a whole — that's his view of how contemporary technologies reflect the essence of technology. So they're just kind of a paradigmatic or exemplary expression of something that helps us understand it. But they're not co-equal to it. It's deeper than them. It preceded them historically and enabled their development.

So what I show in the book — and it's about 10 pages of a 60-something-page book — is that there are sort of three different meanings of the essence of technology. One of them is that the essence of technology is nothing technological. And that's a kind of ground-clearing point: don't think I'm looking at a bunch of technological devices and inductively abstracting their essence from that. Because, A, that's circular —

you have to already know what technology is to know which technological devices you're looking at to abstract away their essence. And B, he thinks technology precedes technological devices. So the second understanding of the essence of technology turns out to be metaphysics. So he says the essence of technology is identical to the essence of modern metaphysics, which I show means he ends up distinguishing two

Iain Thomson (06:30.914)
epochs in the modern age. So there's early modern and late modern. The early modern is what he'll end up calling subjectivism. And that's the subject-object dichotomy and the understanding of all things as either thinking subjects or external objects, external to subjectivity. And then the view that all value and meaning comes from us: we subjects posit values onto things, or project or invest things with value.

Objects themselves are inherently valueless. The fact-value dichotomy follows from the subject-object dichotomy. And that's part of what he thinks is partly responsible for the nihilism of the contemporary world, the meaninglessness of things: we think the meaning of everything comes from us. And if that's true, then we can just take it back. So everything ends up becoming very fragile and has no weight on us.

PJ Wehry (07:22.231)
Right.

Iain Thomson (07:28.65)
And that's the first sense of the second essence of technology: early modern metaphysics. The second sense of the second essence of technology is late modern metaphysics. And that's that an important transition happens between Kant and Nietzsche. So in Heidegger's view, Kant is thinking Descartes' unthought. Descartes is the one who turns us into subjects by showing us that we have this certainty of our own

consciousness that we don't have of anything else, so that consciousness becomes the center of the universe, basically. But in Nietzsche you get a thinking of the unthought of Kant, which is this idea that God is dead, which Heidegger takes to mean that the physical and the metaphysical have been pulled apart. We finite beings can't reach to a beyond outside this finite world to ground

this world in. So the traditional metaphysical project of wanting to find the ultimate foundation of things — in fact, wanting to find two different ultimate foundations, an innermost core, that's the ontological, and an outermost perspective, that's the theological, and then link those together — in his view, that's what metaphysics has been since at least Plato, and I trace it back to Thales and his student Anaximander. So that's to say metaphysics is ontotheology. And in the

late modern, the Nietzschean age that we're in — and Heidegger's trying to think the unthought of Nietzsche to help us move beyond modernity into what he calls the other beginning, what is literally the postmodern — in the Nietzschean age we understand all things, he thinks, as eternally recurring will to power. So the essence of all things, the tiniest component out of which everything is made, is will to power. That means everything is nothing but force striving against force

to keep force in force for as long as possible. And then if you zoom out and ask, like, what shape does the universe that's driven by the engine of will to power take? The answer is eternal return — a river of becoming flowing endlessly in a circle. So this idea that we have these kind of ontological lenses on, that we understand what it means to be

Iain Thomson (09:44.482)
as eternally recurrent will to power, or as nothing but force striving against force to keep force in force as long as possible, is what he calls enframing. That's the technological understanding of being that turns everything into a resource to be optimized. That is, when we understand everything as just force striving against force, we tend to relate to everything as meaningless stuff just to be

put on standby, to be put into the standing reserve, to be optimized as efficiently as possible. So we're not just treating the external world that way, the way we did in the early modern world where we reduced everything to stockpiles of objects. We're even treating ourselves that way. And this was sort of coming out of his critique of genetic engineering and seeing what was happening with biology's ability to start transforming the human genome, and his worries about

whether we might end up accidentally or deliberately transforming the very nature of what it means to be human in a way that would kind of trap us in this age that we're in — something that's never happened historically, since historically every age has given rise to another. But let me just say — because I flagged it in my mind earlier, and I still say "in my mind," which is a very Cartesian thought, but anyway — I thought of it earlier, which is that

it's confusing to people to talk about things having multiple essences, because we tend to think of the essence, since Plato, as what permanently endures, right? Like the essence of a circle is the perfect form of a circle, and all the circles in the world are just imperfect instantiations of the form of a circle. But for Heidegger, we have to understand essence not in that way, as what permanently endures, but instead as a kind of verbal essencing, an unfolding and transforming through time.

So for him, essence — and this is to reject the thinking of essence from Plato on down, basically, and replace it with a much more historical understanding of essence, which I think Hegel is the one who paves the way for, this historicization of nature — I mean, Goethe before him — where what it means to be something, to understand it, you can't just look for its fixed genetic core or something. In Goethe, you've got to trace the life cycle of the plant through its

Iain Thomson (12:10.352)
emergence, blossoming, and decay, to understand what the plant is — not just the fruits, not an unchanging, fixed nature.

PJ Wehry (12:18.542)
I'm doing a series of my own, starting soon, on Charles Taylor's The Language Animal. And I'm going through the sources of it, because I don't think anyone wants to hear just my random opinion on Charles Taylor. I thought it'd be helpful to kind of go through the background of it. And I'm reading Herder and Humboldt. So when you talk about the historical — like the historical nature — yeah, I was like, ah, this is very familiar. Yeah.

Iain Thomson (12:38.458)
Herder... yeah.

Iain Thomson (12:44.742)
And they're both very important for Heidegger. Herder is sort of the — I mean, Charles Taylor calls him a Romantic expressivist. And it's the view that basically language expresses individuals' ways of understanding their reality. And then, when that catches on — when you come up with a way of putting something that other people find insightful or useful — they take it over, and...

so languages are built up in a way that reflects the insights of the people who use them into the nature of things, which means that different languages preserve quite different ways of being in the world. And that's — you know, this is part of what people often ask, like, why did Heidegger become a Nazi? Well, the Nazis were national socialists. So this idea of being a socialist but nationalist at the same time — meaning, like, not

thinking that we could get rid of the languages and have, like, Esperanto or something like that — something like positivist thought — but instead preserve the unique insights of German and French and English and all these other languages, was part of the Herderian Romantic inheritance. And, you know, it's great and it's terrible. I know it sounds right-wing, I think, in the contemporary context, where the left tends to be cosmopolitan. But here in New Mexico we're trying to preserve, like, Native American languages,

so that speakers of Diné, the Navajo language, for example — there are fewer and fewer of them with each generation — you get some AI issues about whether you can train AIs to preserve the language. These are not right-wing issues, though. They're fascinating. Part of the reason Diné has done comparatively well is the role of the Navajo code talkers in World War Two, giving us this uncrackable code,

because the Germans and the Japanese had no access to the language of the Navajo people. It split off from other languages long enough ago that they couldn't algorithm it and figure it out.

PJ Wehry (14:52.802)
Yeah. Yeah. I think I'm going to turn you up just a little bit here. The things I'm learning.

PJ Wehry (15:08.808)
And you were talking — it's such a tension I feel reading Herder and Humboldt. I know I'm saying it wrong; I've got to work on my German, right? Yeah. It's the tension between, like — on the one hand, there's this beautiful recognition of the beauty of each individual culture. And then all of a sudden, like, right alongside that will be the most classic 19th-century, like —

Iain Thomson (15:18.638)
Yeah,

Also very important.

PJ Wehry (15:35.298)
brutal colonizing thought. Like, okay, how do we separate these two? Because I know these are in tension, right? But —

Iain Thomson (15:43.078)
Historicity is real. I have to tell this to my students — some of my students. You know, like, things really change. Things have dramatically changed in a relatively short time. And that's pretty obvious now when you think about technologies, but still, people are less aware of it when you think about, like, gender norms and what it meant to be a man. There are a lot of things that have transformed dramatically in a relatively short period of time. And that has led to various

PJ Wehry (15:46.722)
Yeah, yes.

Iain Thomson (16:10.054)
reactions, right, where people get anxious about there not being a correct answer — we can come back to that. But yeah, Humboldt's super interesting too, because in that first book I talk a fair amount about him, because Heidegger's thinking about what it means to be in a university, what the meaning of a university is. And essentially the university was a German invention that was established as an alternative to the French model, and

they were trying to figure out what it would mean to teach in a way that integrated teaching with contemporary research. So, you know, before that they'd been separated — there were, like, research academies, and then places where you would learn the fixed doctrines about reality, which for a long time we thought we were close to finishing, like, you know, putting the final period on. And then, when we had some serious breakthroughs, we realized that we were nowhere close. And so this question of how you keep a university unified

as all these different sciences are branching out and getting ever further apart in their specializations — that was the big question that Heidegger was engaged in in 1933-34 as the rector of Freiburg University. How do you unify — what's the uni of the university? And I tell that story in the first book, this Heidegger on Ontotheology, which is a hard one, but it's, I think, the most influential book, because I get into this stuff about education that I've

been interested in my whole life, and that I'm working on especially in this text in the context of AI and cheating — things that I hear a lot about.

PJ Wehry (17:46.968)
Yeah.

Yeah, it's really interesting, continually, in my own study: the connection that if you have a real emphasis on the historical and on historicity in your thought, you're going to have to make a serious place for pedagogy, right? Like, the two end up intersecting a lot. That makes total sense to me.

Iain Thomson (18:13.988)
Yeah, say more about that. That's interesting.

PJ Wehry (18:17.39)
Sorry, I missed that last part.

Iain Thomson (18:20.206)
I said, say more about that. That's interesting. Yeah — yeah, I'm going to interview you for a while.

PJ Wehry (18:22.126)
Well, yeah — my background's more in, like, Gadamer and Ricoeur. So when you talk about — I still think, I've heard a lot of different critiques of Descartes, but the best one I've heard is the one that Gadamer used from Vico. And especially having jumped from two to five kids — it's like, the Cartesian system does not work with children.

I think that's the most condemning critique there is. Because if you can't give it to the next generation, then it doesn't actually work. So yeah, that's Vico — you know, 50 years after Descartes, or maybe a little more after Descartes is writing. Maybe I've got the numbers wrong, whatever — it's, you know, the 1500s and 1600s, and then Vico. You have him writing and saying, like,

Iain Thomson (19:01.667)
Interesting.

PJ Wehry (19:22.018)
you can't — you have to have ethics. You have to teach kids how to behave properly, and you have to teach them proper methods for study, before you break everything down into its tiniest particles and build it back up. Which, I mean, in some ways leads us back to the technology circle, the enframing circle, right? Like —

Iain Thomson (19:38.406)
But yeah, like, I would say — I mean, there were modern thinkers who tried to have a completely rationalistic system of ethics, you know, like Spinoza. There are people who try to kind of start from a semi-Cartesian axiomatic point and then generate out, and Kant is the most famous. But he's starting with the subject — he just calls that the rational agent — and then saying ethics is about

only acting on those maxims that every other rational agent can act on without contradiction, and politics is about protecting the sphere of every rational agent to pursue their interests insofar as they don't step on the toes of all the other rational agents trying to do the same thing. That's basically the foundations of Western liberal democracy. So these are very influential and important ideas. The Heideggerian critique — Gadamer is a student of Heidegger; I haven't thought much about that connection, but it's interesting — the Heideggerian critique

that Gadamer was learning from Heidegger was — and this is connected to the AI stuff — think about riding a bike. When you're riding a bike, when you're doing anything you're good at skillfully, you're not like a subject standing over against an external object, apart from the bike you're riding or the book you're reading. Whatever you're doing is an integral part of the self-world unity that you are.

And that intertwinement of self and world, that skillful embodiment — what Dasein is, what our being-here is, as an intelligible being — is not something that you can build up starting with Cartesian subjectivity. So part of Dreyfus's critique of good old-fashioned AI — that's what we were calling AI then — where they were trying to build AIs by just feeding in a system of rules to follow, was that

that will never get there, because expert human agents aren't following rules. Their learning is embodied responsiveness to complicated and ever-changing, open-ended situations for which — you know, this does intersect with some Kripke things — there's no rule that can tell you; you get an infinite regress of rules, basically, that you would need. So the AIs can win, you know, in the

Iain Thomson (22:01.146)
finite domains like chess and Go. And maybe — this gets interesting — maybe in things like a self-driving car in a specific setting, like a job site for a self-driving tractor, that kind of thing. But when something unexpected happens that they haven't been trained on, they're still — I was just listening to a Wired podcast this morning about one of their experts riding in the latest generation of AI cars and telling the story of, like,

being in the car — oh, it was much better than when they were first out, stopping all the time, you know, for the wrong reasons. But, like, a dog was running toward the road, and there's a fence there to keep it from being able to run into the road. But the AI couldn't recognize that the dog won't be able to run in front of me because there's a fence. So it slams on the brakes. So you're going along riding in the AI car and boom, dead stop. And she, the person doing the podcast, said, you know, if there had been a car behind us, they would have hit us

PJ Wehry (22:49.388)
Yeah.

Iain Thomson (23:00.614)
from behind. Whereas a human driver would probably slow down a little more gradually, knowing that other humans don't have instantaneous reflexes. And you've seen videos like this on the internet, where the self-driving car overreacts to the human obstacle and causes an accident. So it's still interesting to see — you know, they're getting better, but will they ever get better than 70%,

generously, of expert-level human agents? That's an interesting question.

PJ Wehry (23:35.95)
Well, I do think there is something there — but will they be better than the bottom 30% of human drivers? And you're like, right.

Iain Thomson (23:43.654)
Yes, and they already are. They already are. So that's why it's catching on. Yeah, because traffic is largely just a magnified effect of small errors — you know, the person who slows down on the turn on the freeway when there's no reason to slow down, and you get this amplifying echo that leads to a standstill on the freeway. But AIs don't do that, you know, so you don't get that. So they're more efficient.

PJ Wehry (24:10.36)
Right.

Iain Thomson (24:13.232)
But, you know, then there are the situations when some unexpected thing happens — an emergency vehicle, or whatever it is that they haven't yet been trained on. Or — there's a funny one — I remember seeing a video of two self-driving robot taxis where they were waiting for each other. One of them is trying to go around a car that's double-parked, and the other one's coming at it.

And one's sticking out, so the other can't go, and it can't go because the other's sticking out, and all the robot taxis are backing up behind them. And people start videoing it, like, what's it going to do? It's kind of a fascinating puzzle. Yeah. But, as you say — I mean, people always think their state has the worst drivers. I saw something that claimed that New Mexico genuinely does. And I'm coming from California, which had, like, fast but angry drivers who shoot each other when you

PJ Wehry (24:50.233)
yeah.

Iain Thomson (25:08.358)
don't use your turn signal — back in the '90s, when I was there. Here the problem was people getting onto the freeway going, like, 35 miles an hour. It was, like, overcautious, slow, bad drivers instead of, like, very fast 18-year-olds with no sense of their own mortality, developing their skills, actually — sometimes with disastrous consequences. But we do have those here.

PJ Wehry (25:21.112)
Yeah.

PJ Wehry (25:34.082)
You know, I haven't done a lot of driving out in California, but I will say — I just drove in Boston; I took my family up there to have the kids see snow for the first time. And I lived in Chicago for a while. And I will say, I think Florida — Orlando — is worse. And it's the combination of — you actually have a large immigrant population from Colombia and Brazil, and they have different driving laws. So it's like they behave differently than you

would expect, right? Like, you have some people driving like an American — good, yes, it's very much —

Iain Thomson (26:03.398)
This is the Charles Taylor, the Herder problem. There's a culture, right? Yeah, think of, like, Italy. What are lanes? Lanes are an inefficient way for cars to move.

PJ Wehry (26:12.94)
Yes. Well, right along with that, though — right along with that are the tourists. So, I mean, I would be driving down I-4, which I think, actually, just in the last couple of years, was the most dangerous highway in the United States. And I would be going home — it's a 10-minute drive, you know, if no one's being crazy — but it's where Disney is.

Like, almost every day, someone would cut across all five lanes of traffic to get to the exit. And you don't have that experience in a lot of places, where it's, all of a sudden, like, common for people to be like, I don't know where I am, I'm on vacation. And so you just mix it all together. And I'm curious — I've heard terrible things about LA, so I don't know, you know, I haven't been able to compare that.

Iain Thomson (26:54.948)
Yeah, that's interesting.

LA is terrible because it's bottleneck traffic. There are so many people trying to get places that there's, like, rush-hour traffic eight hours a day. So I think that's the problem: it's a car culture, and there are so many people. The right has demonized California, but I don't think anyone's voluntarily leaving California. It's actually a great place to live, but some of the downsides are traffic

PJ Wehry (27:10.798)
Yeah

Iain Thomson (27:28.038)
and expense, right? Yeah, yeah, exactly. LA's beautiful and interesting. There's tons of, like, creative, artistic stuff going on. My brother lives there, so I'm there for him. But it's also the one place I've crashed a car in my whole life. Yeah, yeah — but I fell asleep. So I was doing, like, an eight-hour drive.

PJ Wehry (27:28.738)
Yeah, well, a lot of people live there.

PJ Wehry (27:42.463)
okay.

PJ Wehry (27:47.118)
Okay, that might be a small strike.

PJ Wehry (27:54.666)
okay, yeah.

Iain Thomson (27:57.446)
I hadn't gotten coffee yet, and it was stupid. But yeah, that was another story. But I woke up and had just long enough to decide which car, of the ones that were dead-stopped as I was going 60, to smash into. And it turned out I rear-ended a priest, which is probably karmic justice. But he was super nice about it. Yeah, yeah, me too. I'm glad it went the way it did.

PJ Wehry (28:17.068)
Well, we're glad you're safe. Yeah.

PJ Wehry (28:23.01)
Yeah. So I did want to ask, returning, I think, to the — you talked about the "nothing technological," you know, just kind of removing that circular definition. And then we had the metaphysical one, and then we had the will to power one, if I heard you correctly on that. And then, for the will to power one — is that connected to Heidegger's concept of the at-hand?

Iain Thomson (28:44.836)
Yeah, we never got to the third one.

PJ Wehry (28:53.09)
That idea that it's always at our disposal instead of, like, existing in itself.

Iain Thomson (28:59.942)
They're connected in a complicated way. So you're thinking of the Vorhandenheit and the Zuhandenheit. So there's the equipmental, as Macquarrie and Robinson translated it — "present-at-hand" and "ready-to-hand," basically, is the Macquarrie and Robinson. One of them is the skillful, engaged mode of coping with things that you're good at — like using hammers when you're hammering, pens when you're writing with them, cars when you're driving them

in a skillful enough fashion. And the other is the disengaged, thematizing, objectifying relation you have to stuff that you're not in that skillful, engaged mode with — and that's the present-at-hand. And in his view, Descartes gives us an ontology of the present-at-hand, the subject-object divide, and the representational commerce between subjects and objects. But you can't build the ready-to-hand, the Zuhanden,

out of the present-at-hand, the Vorhanden. You can't get at what it's like to ride a bicycle by staring at a bicycle as an object with properties. You can't build that back up out of it. But you can go the other way, and get from riding a bike skillfully to what it's like to stare at a bike with properties. So that's the Being and Time story. The first half of Being and Time is this critique of Cartesian dualism and the ontology of modernity that's embodied in that.

Later Heidegger is dramatically different from early Heidegger, precisely because he historicizes ontology. That's the history of being. He says — rather than claiming that the equipmental mode of relating to things, the ready-to-hand or the Zuhanden, is what philosophy has always forgotten. This is the view that Hippen tells in his Pulmination book. He was my teacher in my first year in grad school.

And it's the story that Tom Sheehan tells — that's what Kevin's getting at. As if the forgottenness of being in later Heidegger were like the forgottenness of the ready-to-hand. That's not Heidegger's view. Heidegger's later view is that the forgetting of being means that the entire metaphysical tradition has understood what it means to be in an epochal succession of at least three different ways:

Iain Thomson (31:22.116)
there's the pre-Socratic, the ancient, the medieval, and the modern, basically, these different ages. And the ontology presented in Being and Time, Division One, the ontology of the ready-to-hand, itself needs to be situated within the history of being. If you do that, if you ask how later Heidegger would situate this understanding of things as most fully showing their being when we're using them skillfully,

how would he understand that later in the history of being? The answer, I think, is that he would understand it as the penultimate step on the way to nihilism, because it's in a way worse than subjects and objects: now stuff is nothing but what we use it for. You can complicate the story, but he says things in Being and Time like the forest is nothing but timber, and the mountain is a stockpile of rock, all that stuff.

He's got a very un-environmentalist view there. I mean, Heidegger is a famously influential figure for environmental ethics, but that's later Heidegger, not early Heidegger. So later Heidegger's view is that when we forget being, we're forgetting something like the condition of possibility, the Ur-phenomenon, the source that all the different metaphysical understandings of being were made possible by, but that none of them exhaustively captured in their

metaphysical frameworks. So it's complicated, but the very last stage in the history of being, the one we're in, is the late modern. And that's the Nietzschean one that understands being, or isness, what it means to be anything at all, as nothing but an eternally recurring will to power, and thus reduces all things to Bestand, meaningless resources standing by to be efficiently optimized. In that view, that's the technological, what he calls the technological understanding of being.

He does, as I mentioned. But never fully explained, never explained at all, I think, is a third understanding of the essence of technology, which for him is the most crucial and the key to developing a free relation to technology, so we can use it without being used by it. He's not a Unabomber guy who thinks we should give up technology and live in the woods, go back to outhouses. His view is that we have to see the historical danger of technology, the drift of

Iain Thomson (33:49.382)
the current that's pushing us in. It's not neutral. The idea that technology is neutral is one of the things he critiques. Guns aren't neutral. They don't make good paperweights; they're not good toys for your kids. They're very good at shooting people and things, right? They're made for purposes. Anyway, technology itself is not neutral. As a dramatic expression of this technological understanding of being, it pushes us toward

PJ Wehry (33:58.829)
No.

Iain Thomson (34:17.242)
this reduction of all things to meaningless resources to be optimized. And insofar as we understand everything we deal with, and ourselves increasingly, as nothing but meaningless things to be optimized, we find ourselves living lives that don't matter to us. That's nihilism. Things become less and less meaningful; we're increasingly dissatisfied with life. And I think we see that all around us, the weaponized nihilism of school shootings,

this expression of despair at the state of the world. We can trace a lot of stuff back to this. I'm a firm believer that nihilism is the problem. I know on the internet today there are a lot of people who want to call themselves nihilists and think it's cool or something. But nihilism in its original sense, in Nietzsche and Heidegger (they mean different things by it),

PJ Wehry (35:04.334)
Like, yeah.

Iain Thomson (35:14.918)
in both cases it's that life doesn't matter, it stops mattering. And you might think, well, that's great, that means I have this incredible lightness, I can do anything I want. Okay, but anything you want stops mattering to you after a while, and nothing in your life has any importance or significance. That's the problem. So this technological understanding of being, he then thinks,

for us to get out of it, we have to understand it, understand the drift, the historical direction it's pushing us in. And that gives us a kind of prophetic crystal ball to be able to understand where technology's moving. I mean, that's part of the reason we still read this old Nazi, you know, fifty years later: his critique of technology was very prescient. It powerfully anticipated developments that we're only seeing now, that he couldn't possibly have

literally predicted, but was getting at by seeing how metaphysics isn't just some abstruse concern of philosophers in their ivory tower; it's the history that we are. That's what I call ontological holism: if you transform our understanding of what isness is, what it means to be, you set off a transformation of everything.

That's what metaphysics does: it stabilizes an age's understanding of what it means to be. So when a new metaphysics takes shape and begins transforming its age, you can predict to some degree what direction things are going to move in. And that's what Heidegger's doing. He's reading Nietzsche like a crystal ball of the direction things are moving.

I think the deepest essence of technology is stuff we can get into if you want, and it's important, but that's what Gadamer himself and, you know, others close to him could never understand: this idea that enframing is the photographic negative, that where the danger is, the saving power also grows. That's the crucial gestalt switch we have to make

PJ Wehry (37:02.435)
Yeah, I

Iain Thomson (37:26.694)
in order to be saved: the very thing that endangers us can save us if we come to understand it more deeply. That's his view, and I try to explain it in the book. My previous book was Heidegger, Art, and Postmodernity, which articulated his understanding of art and showed how that was crucial to his positive vision of how we might understand being in a postmodern way, how we get out of and beyond the late modern

nihilism of technological enframing and what that means. A lot of people think he's this quietistic figure who's waiting for another God to come and save us. I think that's all confused. He thinks the postmodern understanding arrived a couple centuries ago with Van Gogh and Hölderlin and all these figures. Once we understand it, we can ourselves begin to live more meaningful lives, lives more full of meaning, basically by cultivating and developing the practices that encourage rather than

dissipate that meaning, practices that can help us build better lives and worlds, a future more meaningful to us.

PJ Wehry (38:34.658)
Yeah, if I could just make a quick comment; I've said it on here before. I'm in central Florida, so if you bring up guns and talk about the instrument having a purpose to it, people get really, you know... it's central Florida, they're like, what are you talking about? But I've never had anyone push back on another example, one that's less emotionally charged, but

Iain Thomson (38:46.852)
Yeah.

Defensive. Yeah. Yeah.

Iain Thomson (38:59.878)
My students do too, by the way. And it's not just Florida; there's a lot of hunters here. And I acknowledge that guns are cool. They're cool devices, you know? But they're not neutral. I'm not gonna use one. Yeah, I'm a man, I get it.

PJ Wehry (39:03.694)
Yeah. Yeah, I've watched Clint Eastwood. No, I... yeah.

Iain Thomson (39:19.686)
No one can wear one as a hat.

PJ Wehry (39:22.418)
Right, right. Well, the example I always use, especially in central Florida, is that there's a huge difference between driving a van and driving a truck. They have different purposes, and you know this because if you drive a truck, you will get asked to help with things, right? It's just built in. And people are like, you're right, it's not neutral. No, it's really not. So that's just another example of what you're talking about.

Iain Thomson (39:38.47)
That's interesting.

Iain Thomson (39:44.932)
Yeah, and is that the analogy, like between a pistol and an AK-47 or something? Yeah, yeah.

PJ Wehry (39:51.638)
Yeah, like a hunting rifle and, like... yeah. You're like, is that really a hunting rifle? I don't think so. But, man.

Iain Thomson (39:59.044)
Yeah, interesting. Well, you know, the logic of enframing pushes us toward more efficient and optimal weapons. And there are different ways of understanding that; one way is non-lethal weaponry. There's work here in Albuquerque on non-lethal weapons. And you know, the military has looked at things like laser weapons that would blind the enemy, take away their capacity to fight. But they rejected that because of an

PJ Wehry (40:06.936)
You're right.

Iain Thomson (40:28.294)
ableism built into the idea that blinding them is actually worse than killing them, that it might be better to be dead than to be blind; this is too cruel. But some of these technologies have been used in the military, and because we have put so much money into the military and they have more technology than they can use, they pass it down to the police. So you've had this militarization of the police, and the police use a lot of these non-lethal technologies on protesters. Getting shot with a rubber bullet is

unlikely to kill you, but it's going to take you out of the protest, and future protests, and if it hits you in the eye or something, you're in real trouble. And then there's water cannons and glue; there's all sorts of fascinating stuff, all the forms of gases that are non-lethal, though not if you're asthmatic like me.

PJ Wehry (41:03.746)
Yeah.

PJ Wehry (41:08.05)
Well, yeah, I mean, that's the Kent State...

PJ Wehry (41:22.292)
Yeah. I mean, and that's part of the enframing too: it shaves off the rough edges and doesn't address the individual cases, right? It's like, well, it wasn't supposed to be lethal, right? And they're like, I can't be held responsible, because it wasn't supposed to be lethal. And it's like, you could have thought of that.

Iain Thomson (41:32.134)
It treats the symptoms. Yeah.

Iain Thomson (41:43.642)
That's right. And "I can't be held responsible" makes sense in the military, right? But thank God, you know, that Soviet submarine officer, the second-in-command who was supposed to turn the key to obey the captain's orders and launch the missiles when they thought the US was attacking, didn't do it, or we would all be gone. So it is an interesting thing, the

PJ Wehry (42:09.4)
Cheers!

Iain Thomson (42:11.898)
disobedience to the chain of command, which is supposedly totally irrational, and which they go very far to try to eliminate in well-trained troops, right? Occasionally people have gone against it. Nonetheless, we'll see what happens. This is part of what's going on today, because a lot of the people being fired could be those brakes on the system. And that makes some of us nervous about what's coming next, about what the car gets pointed toward after the brakes are taken out.

PJ Wehry (42:45.748)
As a kind of concrete example, as you talk about the will to power and the eternal return of the will to power: I've had a lot of people come on to talk about the medical industry and the food industry. I mean, you could pretty much go down the line, but those two in particular, and the way that the switch to responsibility to the shareholder completely changes the product,

because it is seen purely as commodity, and its value is transferred to the dollar rather than, for instance, its use as food or its use as medicine for health. Is that a good example of what we're talking about?

Iain Thomson (43:29.826)
It sounds like it, but it's not one I've studied in detail. I'm looking largely at the development of so-called AI. And you can see it there too. OpenAI is a San Francisco company that was co-founded by Elon Musk and others; he left when they started making dramatic breakthroughs and wouldn't let him run the company, and he took his money with him.

PJ Wehry (43:38.061)
Right.

Iain Thomson (43:55.018)
And they had a small crisis then. One of the big breakthroughs was self-coding machines, machines that could write computer code that could write computer code. And that let them turn to Microsoft, which more than replenished the funding they'd lost when Elon went home and took his ball with him. But their founding mission, and they're almost religious in their commitment to it, with lots of true believers, technophiles who

deeply believe in this mission, was to safely create a true AI, call it AGI, artificial general intelligence. Those terms are so controversial and get used in different ways, but they mean something like an artificial superintelligence that will be able to save humanity. So they were, you know, in "only another God can save us" territory. They thought they could create a kind of artificial God that would be able to save humanity from the entrenched problems that we haven't been able to figure out

solutions to. I go through a lot of the details in the book, and they're fascinating and interesting. But now there has been a deflation of what AGI, artificial general intelligence, means. Originally it meant, instead of just being good in one area, being good in a bunch of different areas, generally intelligent. And then superintelligence would be even smarter than us in a bunch of

different ways. And the claim now is basically that anything that can pass the Turing test online counts. The Turing test as popularly understood was meant to get rid of our anthropomorphic projections, so we could say that an AI has genuinely become intelligent: if we can't tell whether we're interacting with a human or a computer, then it passes. You know, ChatGPT can pass that,

at least for some people; there are people who believe that they're interacting with a genuine intelligence. And now, you know, when Musk goes in and fires tens of thousands of people and replaces them with AI, that's what they're calling AGI. So they're saying that this true artificial intelligence is there now to do the jobs that these humans were doing a month ago. That's a real thing that's really happening right now. And those things are not

Iain Thomson (46:22.628)
really that good. They're at best 70 percent as good as a human would be in the same role, and that's a generous assessment; it's probably more like 60 percent. But say they're generously 70 percent as good. That means they're better than the bad humans, like the terrible human you sometimes get on the phone who drives you nuts. But they're worse than the good ones, who are usually the ones who end up resolving the problem, the reason you have to keep calling back

PJ Wehry (46:47.32)
Right.

Iain Thomson (46:51.256)
until you finally get the good one. So now the good person's gone, right? How this all plays out is going to be, I think, very problematic, and lead to immense frustration; we'll probably end up eventually rebuilding the bureaucracy they're trying to dismantle. It'll be hard, because they're selling the buildings, right? They're not just dismantling the bureaucracies, they're trying to sell the buildings the bureaucracies are in. They're trying to make it as hard as possible for the

system to be revamped, you know; that's what they're doing, which is a very unconservative thing. For Matthew Arnold, the great conservative thinker, you can basically reduce conservatism down to: if it's not broke, don't fix it. There's a wisdom embodied in the past that you disrespect at your peril. You should change the system the way you'd want a doctor to operate on your father: very carefully. You wouldn't just

go in and start working on multiple organs at once. If it's not broke, don't fix it. This is different. This is much more the far right and the far left agreeing in a kind of revolutionary ethos: let's destroy this thing and replace it with something better. The "better" is arguably worse, you know, but AI is central to this vision. And it's about, you know, why would we pay humans when we could have a computer do

the same job, or 70 percent of the same job, for 10 or 5 percent of the money? Paying humans is inefficient, irrational; it does not compute from a CEO's perspective. You were saying that about the medical industry. I would say in general, the problem is that when we allow this kind of CEO-ification of the university, of government, of all our public institutions, then we are subject to the dictate of whatever

gets the most for the least. That's what I call the optimization imperative of technology: get the most for the least. Whatever gets the most for the least wins. And that's a formula for meaninglessness and dissatisfaction. It makes sense in certain narrow technological domains, but when you apply it to the important things in life... imagine thinking, with my children, I'm going to try to get the most for the least: I'll spend the least amount of time with them, but make sure that

Iain Thomson (49:19.846)
when I do, we're doing something like reading to them, helping them learn to read or something. There are parents doing this kind of thing. It's a way to not really develop a loving connection to your children. It's not a good philosophy for having a meaningful life; it's a good philosophy for maximizing your income portfolio or something. But as you say, when you apply this, what I call the optimization imperative, outside the domain in which it really

makes sense, which is what technological enframing does, it ends up systematically eroding the meaningfulness of these practices that we've had. So part of this... I was talking to my friend B. Scott Rousse, who's a punk rock drummer in Oakland, a Dreyfus student, and a philosophy PhD. He's working now, he just got hired to work full time on AI, and

we're gonna have to develop new practices for thinking about how to use these technologies in ways that are not subject to the technologizing, nihilistic imperative of getting the most from them for the least. Rather: how do we use ChatGPT to help us cultivate and develop our sense of what's really there independent of our will, and of what's important? I do that to some degree in the book; I try to show various ways in which we already do

PJ Wehry (50:25.23)
Mm.

Iain Thomson (50:43.792)
that, which I think is important to realize, and then we can think about developing new ways to do it in all these different fields. And I know people are doing this, you know, the cutting-edge people in music and writing. There are a lot of people using AI, for example, in collaborative ways, thinking about it and trying to develop and refine it in ways that will help us use it without being used by it, you know, to avoid the fate that we all fear, of just becoming

tools of our tools. The Terminator-maker scenarios.

PJ Wehry (51:18.509)
Right.

PJ Wehry (51:22.87)
If you don't mind me asking, I want to be respectful of your time, but can you give me a concrete example? You mentioned a little bit about musicians using it, but what's an example of someone using AI in an anti-technologizing way? Which I think for most people sounds like an oxymoron, right? Because they're like, AI is technology, which of course is what the first part of our conversation was about. So what would be an anti-technologizing, I need to work on that word, way to use AI?

Iain Thomson (51:53.99)
My brain instantly fills with examples, so I have to pick one. The one that I did in the book was using AI to try to understand AI. And there you're limited by the fact that AI has a ceiling: it can't do anything better than what it's trained on. In fact, it's probably never going to be as good as what it's trained on. People think it won't be any better; I think it's probably not going to be as good.

Now, what it's trained on is, you know, growing and could be better. And people worry that it'll start getting trained on its own output and reach a kind of plateau that way. I think there are good reasons to doubt that, not least because there's so much more human stuff to be trained on. Cambridge University, for example, is encouraging all of us to sign things allowing the AIs to train on our work, which would make it better. But also, you know, AI is like a plagiarist.

It takes everything without attribution. If it's trained on my work and then someone asks it, what does Heidegger mean, what does Heidegger think is wrong with technology, it'll say: Heidegger thinks technology reflects the biases built into the metaphysics of our age, specifically our Nietzschean ontotheology, which reduces everything to eternally recurring will to power. And when you reduce everything to nothing but forces competing with forces,

you get a view of everything as meaningless stuff waiting to be optimized. That was me. That's 30 years of work, but it will just push that out as, you know, the truth. And there's a sort of weird non-capitalistic death-of-the-author view on which you should go, great, that's increasing the pool of knowledge, right? On the other hand, I also live in this capitalist economy and would like a little bit of recognition for my life's work. So

PJ Wehry (53:32.259)
Yeah.

PJ Wehry (53:47.31)
Yeah

Iain Thomson (53:50.424)
it's an interesting puzzle, but that's an example. I think we have other examples, like what you're doing right now: using some pretty advanced technologies to try to better disclose and disseminate your sense of what matters and what's significant to you, right? Why do you focus on this topic rather than another? Every time somebody writes a paper, they're using word-processing software that wasn't available to the Heideggers

and the Descartes of the world, to organize and develop their thoughts about what mattered to them in the texts they're reading and the issues that affect them and that they care the most about. Or think of cutting-edge technology... you've got kids, right? So think about the advances in child safety in your cars and this kind of thing. Even just self-driving cars being safer in general, even if the

accidents are terrifying and terrible; in general, right, it looks likely that they will bring down traffic fatalities. And, you know, it'll be interesting to see... here we have a real racer culture, where these guys, it's old guys, mostly guys, love to go 150 miles an hour down the road on the back wheel of motorcycles and stuff. Will that culture be able to survive a culture in which, you know...

I suspect yes, because America is so car-centric there will always be people tinkering around with the technology. So who knows whether it will be the rich or the poor. So there are lots of ways in which we use technologies without being used by them. But the danger comes if you don't think about it. Say you're a student, like a freshman...

The first time I gave this, what became this book, as a talk was at Franklin and Marshall College, when my friend Stefan Kuiper generously invited me. They're a wealthy private school, not super Ivy League, so they attract a lot of students from other countries who are paying full tuition, who maybe couldn't get into the Harvards and Princetons and Yales but are still paying a lot of money,

Iain Thomson (56:09.536)
but English isn't their first language. So when they're engaging with ChatGPT, it looks extremely good, right? The level of the writing that's coming out of ChatGPT or its ilk... ChatGPT is the one I use the most, have the most experience with. They have a sense that this is so good: I'm down here and it's up here, right? Whereas when you get really good at writing, you can see that you're up here and it's down here.

PJ Wehry (56:16.782)
Mmm.

Iain Thomson (56:36.826)
But that takes a lot of hard work. Think of being a driver: if you're a bad driver, those self-driving cars are up here; once you're a good driver, they're down here. And I think this is the general problem. The danger isn't that it's going to become the superintelligent thing that either saves or destroys us; on the biggest scale, I think those are eschatological fantasies, displaced religious views. The real problem is that

we might get used to its level and lower our expectations, not just about it but about us, to that level. So we'll be perfectly happy being writers who never get beyond a certain point, because we can use ChatGPT or whatever to be a little bit better. And that's fine, that's good enough: the good-enough philosophy. You'd be a good enough writer to do okay in school, and that might be fine if you're going to go be an engineer or somebody who doesn't care that much about writing.

But if you care about writing, and about this composition of the soul and this transformation of the self, these basically deep philosophical themes, it's alarming and dissatisfying. And I will say, though I dismiss the fear of AI takeover, and I trace the history of that, there are real fears. Think of all the people getting, we mentioned this briefly earlier, the notice that they're fired from

an AI in the mail. You know, there are people losing their jobs to this stuff right now. And it's not just truck drivers and taxi drivers; it's people who had the American-dream jobs. They worked hard, did well in school, got a good high-paying job with health insurance and retirement, and they were raising families in Washington and other expensive metropolitan cities. And now all of a sudden they have no job. This can be a devastating destruction of your life, in a way that

the visions of AI dystopia can be taken as a metaphorical or symbolic representation of. So I don't think, you know, we're really going to get the superintelligent AIs. I don't think we're anywhere near that. And, you know, I talk about this in the book, but there's two of the three people who came up with the deep learning algorithm that led to the advances in

Iain Thomson (58:59.782)
large language models and in self-driving cars. They got the Turing Award, which is like the Nobel Prize for computer science, and one of them, Geoffrey Hinton, also got the Nobel Prize in physics this year. So these are very highly recognized people. And two of them are worried that basically the end is nigh for human life. Geoffrey Hinton, who's the most outspoken of the three, was the head computer scientist for Google. He said, you know,

enjoy your life now, because you might not have much time left. Basically: kiss your ass goodbye, enjoy these last few years of human reign. And I deconstruct that and show that he doesn't quite know what he's talking about, because, you know, as a super smart math guy, he sees even smarter math-like computer things and sees himself being replaced. But really they lack something very fundamental to

PJ Wehry (59:33.378)
Yeah.

Iain Thomson (59:56.398)
what's led to human-like intelligence, and that's caring about things. Things matter to us; nothing matters to them. They don't care about anything. So, as I say in the book, they're not a Dasein. There's nothing that it's like to be a large language model. They don't have a world. And they're not a Dasein... my feminist friend was saying something and I jokingly said, well, maybe they're a Masein. And then I thought, well, maybe they're like a gaping Masein:

they have a gaping absence where a self should be. And it's that gaping absence... when you interact with them for a long time, I think part of the anxiety people have, and that people have written about, is that you're interacting with this thing that's like an amazing know-it-all that's 70 percent good on everything you can ask it about, and yet is completely ignorant of its own ignorance. You get this sense of: I've never interacted with something like this that wasn't a person,

where there wasn't a world on the other side of it. This is, you know, the Chalmersian zombie: there's nothing that it's like to be it. And that's a very uncanny thing; it knocks you out of the world you've been used to. So I think out of that reaction, that plonking sense that there's no there there when you're interacting with them, people have overcompensated and projected a kind of intelligence and world onto these things that are really, you know,

souped-up information delivery and prediction machines. And I go through some examples of both really impressive things they've done and really kind of amusingly terrible things they've done. I have a dark sense of humor about this; it's wild stuff. And I find my students are really interested in these topics. I think

PJ Wehry (01:01:38.712)
Yeah

Iain Thomson (01:01:50.662)
if you're a student and nobody tells you, you know, that education is a chance to really transform your soul, as Plato said, to discover your talents and skills, what you're really good at, what makes you distinctive as an individual, and to develop that in a way that lets you contribute to what you care about in the world and thereby live a meaningful life... if somebody says that to you, that's what education is, that can help. Because otherwise

PJ Wehry (01:01:59.8)
Mmm.

Iain Thomson (01:02:20.41)
we're raised under this optimization imperative, under the technological enframing that we're all in, where everything's just about getting the most for the least. So we think of education as just one more place to try to get the most for the least. So of course I should turn in a ChatGPT paper, if that's less work and can get me a satisfying grade, rather than doing the hard work myself. But doing that means that you miss the chance, if you don't use it in the right way, to cultivate and develop

the essential traits and capacities that make you the person you are and that will enable you to actually engage with the world in a way that would allow you to have a life you experience as meaningful. So I think awareness, education in the sense of: let's understand historically where we are in this technological age, and here are its dangers (we've only scratched the surface of those here), and then we can start thinking about how to use it in ways that

don't just get caught up in that, first of all, and that, second of all, help us envision more liberating possibilities on the far side of it. And I talk about that in the book.

PJ Wehry (01:03:32.066)
Yeah. Dr. Thomson, I want to be respectful of your time, so thank you for coming on today and talking through all of this. Absolutely fascinating, so much to think about. Really appreciate it.

Iain Thomson (01:03:47.13)
Thanks PJ and thanks for having me. Appreciate it. Okay, I'll talk to you soon.

PJ Wehry (01:03:50.082)
Absolutely.
