In The Tank

AI is everywhere, from the silly videos you see on Facebook to the articles you read online, and even within hospital systems and publishers. Where is the line? Where is AI useful, and where is it more trouble than it's worth?

The Heartland Institute team will play "AI or not?", review the absolute mountains of AI-generated content that Amazon is allowing on its publishing platform, and discuss what role government plays in all of this, where intervention might be necessary, and where it will only make things worse.

On UNHINGED: Scammers are now using AI face and voice filters to fool people, and scambaiters have tests you should be aware of.

The Heartland Institute's Linnea Lueken, Jim Lakely, Donald Kendal, and S.T. Karnick will talk about all of this and more on Episode #535 of the In The Tank Podcast.

Join us LIVE at 1 p.m. ET on YouTube, Rumble, X, and Facebook.

Visit our sponsor, Advisor Metals: https://climaterealismshow.com/metals

Creators and Guests

Host
Donald Kendal
Donald Kendal is a research fellow for The Heartland Institute's Socialism Research Center, host of Heartland's In the Tank Podcast and Stopping Socialism TV, and a talented graphic designer.
Host
Jim Lakely
Jim Lakely is the Vice President and Director of Communications of The Heartland Institute.
Host
Linnea Lueken
Linnea Lueken is a Research Fellow with the Arthur B. Robinson Center on Climate and Environmental Policy at The Heartland Institute. Before joining Heartland, Linnea was a petroleum engineer on an offshore drilling rig.
Host
S. T. Karnick
Senior Fellow and Director of Publications for The Heartland Institute; Editor of The American Culture (https://t.co/h2pi2B2d7T)

What is In The Tank?

The weekly flagship podcast from The Heartland Institute features in-depth policy discussions connected to current news. Host Donald Kendal leads the discussion with the usual crew of Heartland Institute Vice President Jim Lakely, Socialism Research Center “Commissar” Justin Haskins, Editorial Director Chris Talgo, and others at this national free-market think tank. The entertaining and informative discussions often hit topics such as the environment, energy policy, Big Tech censorship, the troubling rise of socialism, globalism, health care, education, the state of freedom in America and around the world, and much more.

This podcast is also available as part of the Heartland Daily Podcast, the “firehose” of all the organization’s podcasts that take deep and entertaining dives into public policy.

Linnea Lueken:

Okay. We are now live. Welcome to this special AI-centered show, everyone. We are introducing a game today that you get to participate in. Is it AI, or is it real?

Linnea Lueken:

We will all embarrass ourselves, I'm sure, with the content that producer Andy has put together for us. Also, Amazon is getting flooded with AI slop that bears a remarkable resemblance to the book-writing-machine content from Orwell's 1984. Do we really just want endless tides of new content to consume, or do we want something more? And we'll talk about how to find the balance between innovation and regulation. Should there be any regulation on AI technology?

Linnea Lueken:

Will the government just make things worse, as it often does? And for Unhinged, we are going to show you a scammer using an AI face filter to try to fool people. But unfortunately for him, the person he tried to scam was cybercrime investigator Jim Browning. And it's not just a technology episode. It's about where the line is between optimism for the future and spiraling into dystopia.

Linnea Lueken:

We are going to talk about all of this on episode 534 of the In The Tank podcast. 535. I did not mean five thirty four. Five thirty five. Sorry about that.

Linnea Lueken:

Alright. Welcome to the In The Tank Podcast. I'm Linnea Lueken, your host, who is choking on my own breath. And as always, we have Jim Lakely, vice president and director of communications at the Heartland Institute; Sam Karnick, senior fellow; and, as our special real intelligence, not artificial, I think, Donald Kendal, director of the Glenn C. Haskins Emerging Issues Center and brilliant designer of all Heartland graphics.

Linnea Lueken:

And we

Donald Kendal:

also the director of awesomeness. I think you gotta read this.

Linnea Lueken:

That's right.

Speaker 3:

I'm

Linnea Lueken:

sorry. Sorry. I didn't have the updated title in my notes. We also have producer Andy Singer in the background keeping the show moving along, and he also designed the game for us today that we are going to play. We'll get to that after our unhinged segment.

Linnea Lueken:

Before I say hello to you guys and before we get started, I want to remind our viewers here because it looks like there's quite a few of you in the chat already. If you want to support the show, you can go to heartland.org/inthetank, and you can donate to us there since YouTube does not let us monetize this channel. Please also click the thumbs up to like the video and remember that sharing it helps to break through some of that suppression. And even just leaving a comment helps us too. If you're an audio listener, you can help us out by leaving a nice review.

Linnea Lueken:

Guys, how's it going? What's the scariest AI thing you've seen in the news this week?

Donald Kendal:

Oh, we'll talk about that later. But I will say that, you know, when you said episode five thirty four, I think that's just proof that you're not AI. Right? AI doesn't make little mistakes like that. It's a very human thing.

Donald Kendal:

So we'll just say that was intentional.

Linnea Lueken:

Yeah. I actually really tried. I don't think I delivered the line very well, but I wrote, like, an AI-sounding line into my introduction where I said, "it's not just a technology episode." I meant to emphasize it, but I didn't nail the delivery, so my joke fell kinda flat.

Speaker 4:

And it's hard to see em dashes in conversation.

Donald Kendal:

Good point.

Jim Lakely:

Hey. Hey. Hey. Hey. Hey.

Jim Lakely:

I've been using the em dash for my entire writing life. I'm a professional writer. I refuse to give up the em dash just because people think I'm AI.

Donald Kendal:

You have to. It's like when we all had to stop doing the circle game because it started being associated with, like, Nazis or something like that. We can't do that anymore. We can't. We gotta drop it.

Donald Kendal:

We got to drop the em dash, Jim.

Speaker 6:

Yeah. Just look at the I have stopped

Speaker 4:

I've stopped using the semicolon and started using the full stop, just making it a new sentence, or using a colon and writing it so that the second phrase actually explains the first.

Jim Lakely:

Man, this is an audience-killing conversation. Let's move on.

Speaker 6:

Yeah. State of that. People love to remember.

Linnea Lueken:

Yes. That'll be the next special episode: the grammar episode. Alright. So, guys, we're gonna get into Unhinged here, which I have titled "To Catch a Scammer."

Linnea Lueken:

Every once in a while, I like to remind our audience that there are a lot of scams out there, especially now over the Internet. And unfortunately, AI is making them a lot harder to catch. You can get phone calls where someone mimics the voice of, like, a loved one using AI and says, hey, I'm in jail, I need bond money, that kind of thing.

Linnea Lueken:

That's pretty common now. Here, I wanted to show you guys a clip from Jim Browning where he is talking to a scammer over a Zoom call. The scammer is from one of those basic scam companies that claims to be able to recover your funds after they were already stolen in a cryptocurrency scam. So they're actually just doubling up on their victims. Fortunately for us, Browning recorded this interaction and posted it to YouTube Shorts.

Linnea Lueken:

So please go watch his channel. He's a great educator on this stuff. And I just really enjoy people who screw with scammers. It's called scam baiting. It's really fun to watch.

Linnea Lueken:

Alright. So we will link it in the description. But it's important for us to spread awareness on how these scams work in an AI episode like this, and how convincing they can be. So, if we have the video.

Speaker 7:

You're not some AI thing or what what what how would I be able to tell?

Speaker 3:

Here, man.

Donald Kendal:

Here. Oh,

Speaker 7:

is that sorry. Can you hold that up?

Speaker 3:

It's not AI.

Jim Lakely:

Can you Yeah.

Speaker 3:

Of course. But

Speaker 7:

that's a

Speaker 3:

That's our director.

Speaker 7:

Oh, I see. I thought it would be your name.

Speaker 3:

No. No. No. That's our director. We have a multiple.

Speaker 3:

You see all of them here.

Speaker 7:

Oh, I see.

Speaker 3:

Go, Ronnie.

Speaker 7:

But but, you know, is there no way of telling whether you're real or not? Can you, like, hold up three fingers in front of your face or anything?

Speaker 3:

Oh, come on. That's too much.

Speaker 7:

I don't oh, you don't want to do that?

Speaker 3:

Well, I'll do that here.

Speaker 7:

In front of your face. What

Speaker 3:

do you mean? I mean, that's too much. Come on, Ronnie.

Speaker 7:

Why would it would it affect the AI if it was in front of your face?

Speaker 3:

Well, not at all. But Can you do that then? Too much to ask somebody.

Speaker 7:

Well, making sure you're not AI is not unreasonable. I mean, can you do that in front of your face?

Speaker 3:

Well, I think that that's too much to ask somebody. Don't you think that?

Speaker 7:

No. I don't think it is. If I'm making sure you're not AI, I think that's a reasonable thing to ask. Can you hold up three fingers in front of your face?

Speaker 3:

Well, I think that well, is that enough?

Speaker 7:

No. It's not in front of your face.

Speaker 3:

Well, that's too much, don't you think?

Speaker 7:

Yes. It is. Because, you know see

Donald Kendal:

his hair. Really His hair is twitching.

Speaker 7:

Fingers in front of your face. Yes. I thought so.

Donald Kendal:

Wow.

Linnea Lueken:

So, I mean, if you're not really examining it all that closely, I could see someone falling for that. Or if you're using Zoom on your phone or something. Oh, yeah. And if you were just really stressed out. So, as I said before, the setup for this is that the scammers steal your money by getting you to invest in a fake cryptocurrency.

Linnea Lueken:

And then later, they recontact you and pretend to be investigators or, like, white-hat friendly hackers who are gonna help you recover your lost funds for a small fee, and then they steal that money. I wanted to share this with the audience because a lot of the time we can get this idea that only stupid people fall for these scams. And I don't think that's the case, especially since they're getting better and better at this stuff.

Donald Kendal:

Yeah. I totally agree. I mean, I had a similar incident, not for me, but for somebody, you know, like a family member, where somebody was claiming to be me and asking, oh, I'm in trouble. You know? Can you send me some money?

Donald Kendal:

And, thankfully, they didn't do it. But, yeah, that's the level that it's at now. I mean, it's tricky. I think, you know, maybe I have too much hubris to think that I'm not gonna fall for some of this. But certainly people in my family who aren't aware of just the technological leaps and bounds that have been made in those directions, I think, would get tricked by something like this.

Donald Kendal:

So I've actually made it a point to talk to them directly about this and talk about, like, establishing a keyword, you know, some sort of a term that only we would know. And if anything like this were to happen, if there's just an inkling of any fishiness, you know, bring that key term up. And then the other thing that I wanna mention is, you know, even people listening to this. I mean, in the office, we still throw around that joke about, like, AI showing somebody with too many fingers or whatever. That hasn't been a thing in years.

Donald Kendal:

So if you think that, like, AI is gonna be that noticeably AI because of, you know, something absurd like a person having six fingers, yes, Andy, we're not at that level. We have surpassed that by years at this point. So, yeah, you have to be very aware and, you know, literally talk to people in your family who you think might be susceptible to falling for something like this, and make them aware of just how advanced this technology has gotten.

Linnea Lueken:

Yeah. And these scammers also, you know, Google doesn't have any process whatsoever for filtering out scam websites. A scam website will pay to get its result to the top of the search bar. Right. And it'll direct you to, like, a fake Comcast website.

Linnea Lueken:

So it's all very sketchy. I just wanted to remind our audience, you know, the credit card company will never call you. Like, DIRECTV is not gonna call you and ask you to give them your PIN numbers and stuff. They already have it. They don't need to call you.

Linnea Lueken:

It's pretty bad. I get scam calls all the time. They're bad enough. I got one one time where someone was claiming to be a police officer, and they were looking for, like, bond money or something. And I just hung up the phone on them.

Donald Kendal:

James Bond money? Yeah. You know, another thing is, we're in the kind of office-space sphere, and we see a lot of these, like, phishing emails, scam emails. And the ones from a decade ago, you know, when I was first at Heartland, those were the most obviously identifiable things in the world. It would be terrible spelling, the worst grammar in the world.

Donald Kendal:

You know? It's just so obvious that it's a scam. The ones that are coming through now, it's like you have to look at it a few times and make sure that, no, this is in fact a scam before deleting it, you know, tossing it into your garbage folder. But, yeah, again, just a few short years and these scams are getting way more advanced.

Jim Lakely:

Yeah. It's crazy. I mean, I actually know that it's a real email from, like, say, Sterling Burnett, the director of our climate center here at Heartland. I know it's really from Sterling because the sentences are overly long and not punctuated, I'm afraid. And I know it's a real text from my sister when she doesn't capitalize a single word. Every new sentence after a period starts with a lowercase letter. So you have to look out for those things.

Jim Lakely:

But you're right. I mean, back in the day, you know, the Nigerian prince was not very good at writing in the English language that an American would recognize. And I gotta say, I am like a lot of people listening to us. We've seen them in the comments. I screen all of my phone calls.

Jim Lakely:

I am inundated with spam, but also tons and tons of phishing scams in email and, increasingly, on my phone. My cell phone number, given the nature of my job, is public, so you can find it on the website. And so now AI is crawling all these things and finding it. You know, I don't care how many times I hit delete and report as spam on my iPhone. It seems to have no effect, because I keep getting more of them every day.

Jim Lakely:

But, let's just say, I wouldn't have been fooled by that video that we just showed at the beginning of the segment, because I don't respond to any inquiries about anything in my life from any stranger. I know, you know, what accounts I have. I know that if it's really real, it will come to me in a way that is more verifiable. Unfortunately, AI is going to get so good that it's gonna be able to fool people who usually consider themselves pretty savvy when it comes to these things. That video looked 100% real.

Jim Lakely:

When he reached back and grabbed that thing off the wall and held it in front of him, the guy said, can you hold it closer, please? And he held it closer so you could read it. And the words on it were real. They were spelled right. It looked somewhat legitimate.

Jim Lakely:

And then when he says put the three fingers up, and three fingers came up, although it was a little weird. What was it? Like this or something? Some kind of gang sign, I guess. AI gang sign.

Jim Lakely:

But, you know, to me, that was super convincing. It's only that the guy who released the video, I forget his name, you mentioned it in the beginning, he had to keep pressing and pressing and pressing and came up with something very specific that he thought AI would not be able to accomplish. Frankly, he's just lucky it wasn't able to accomplish that. How hard would it be for an AI thing to just do that?

Jim Lakely:

I mean Well,

Donald Kendal:

this was hard. This was probably a deep fake, like, filter.

Linnea Lueken:

A deepfake. It's not a fully AI-generated person. So it's a real person in a real office with a fake face and, like, arm skin

Speaker 6:

color on.

Jim Lakely:

Alright. Oh, that's right. Yeah. Because the filter would have been distorted if he put his fingers in front of his face. Yeah.

Donald Kendal:

If he did the John Cena thing in the

Jim Lakely:

Yeah. That's right. But but soon it's going to be that you're going to be on a Zoom call with an actual AI video person that is pretending to be a real person. So this is coming very soon. Look.

Jim Lakely:

We're all on Zoom calls all the time. Are you always looking at the screen and examining for human activity? No. You're glancing over. You might check your phone.

Jim Lakely:

You're not looking at that real carefully. AI is turning out to be, you know, the Wild West, with an endless bonanza of opportunities for scammers. It's amazing.

Donald Kendal:

I'm pretty sure half of our meetings, when producer Andy is in those meetings, it's actually just some intern with a deepfake Andy face on there, just sitting and nodding along. I'm pretty sure. I haven't been able to prove it, but I'm pretty sure.

Linnea Lueken:

Is like, I'm working on that. Don't die.

Jim Lakely:

I know. He's working on that for sure.

Linnea Lueken:

Yeah. And the stuff about the Google search is really annoying, because I should be able to type into Google, like, where could I buy cryptocurrency, and know that the first place that comes up will be, like, a legitimate cryptocurrency website. But that's just not what happens. You know? And to me, because I'm a Neanderthal and don't understand how anything works, crypto itself seems a bit sketchy. You know, I like to have things that I can really, like, grasp the value of, like Smaug the dragon or something.

Linnea Lueken:

I like a treasure hoard of sparkly things, like pre-1982 copper pennies and stuff. Things that won't become totally worthless if a solar flare wipes out all electricity. But at In The Tank, that's why we trust Advisor Metals. We trust them above all of the other metals companies, and that's because we know that at Advisor Metals, the person running the place is the absolute best of the best. A great friend of liberty, Ira Birchatsky owns and is the managing member of Advisor Metals.

Linnea Lueken:

He has decades of experience in precious metals and is the only person in the physical precious metals industry who has the Commodity Futures Trading Commission federal registration. That's a lot of words. What does it mean? Well, it means everything that Ira or a member of his team says to you has to be factual. So there's no sketchy sales pitch or bait and switch.

Linnea Lueken:

He is held to the highest ethical standards, and there is full transparency, which is obviously way too lacking in other places nowadays, even Google search. So if you want to diversify your investment portfolio and your savings, if you're planning for retirement and are concerned about economic uncertainty, if you're scared of crypto and want a tangible asset that is easy to buy and sell, like me, you can secure your assets with a wide range of physical precious metals by getting in touch with our friend Ira at Advisor Metals. Ira is going to make it very easy for you. Please visit climaterealismshow.com/metals, where you can leave your information for Ira, get started with investing in precious metals, and expand your current portfolio.

Linnea Lueken:

Go to climaterealismshow.com/metals. And when you talk to Ira, make sure you let him know we sent you. That helps us while you're helping your financial future by diversifying with precious metals from Advisor Metals. Thank you guys very much. Alright.

Linnea Lueken:

So now that we've kind of gotten it out of the way that there are scams that can target you with fake videos and stuff like that, a lot of people still think, as Donald was talking about before, that you would be able to very easily tell the difference between an AI scam and something that's real. So we are introducing a game to embarrass all of us and disabuse us of that idea, called Real or AI. Do we have a drop? I think we have a drop. Maybe not.

Linnea Lueken:

Alright.

Speaker 8:

In a world where anything can be faked, can you spot the truth? Tonight, it's man versus machine, instinct versus algorithm. One image, one voice, one chance to call it. Is it the real deal or a clever creation? Get ready to decide.

Speaker 8:

This

Donald Kendal:

Is Deal or AI?

Linnea Lueken:

Thank you. Very good. Alright. That's very exciting. Obviously, also an AI-made drop that Andy created for this, and I thought that was great.

Linnea Lueken:

Alright. So we want the audience to participate in this game. To do that, we're gonna pull up a code on the screen. You can scan the QR code or go to the link on the screen there. We promise it will actually bring you to the quiz, and this is not a scam website.

Linnea Lueken:

And you can put in a nickname and enter the game. The game is very simple, you guys. Guess whether the media we are showing you is real or artificially generated, and we will see who has the best AI spider sense by the end of the game, and who will be the first to be destroyed when the Terminator apocalypse begins. Andy put this together for us, and I'm sure, as I said before, that we are going to be very badly embarrassed, because good AI content has become extremely convincing. But let's get everybody into the game.

Linnea Lueken:

Yeah.

Jim Lakely:

I don't know if Andy needs a little bit more time to start the quiz, but I'm playing right here on my very own phone.

Donald Kendal:

Yes. Me too. Okay. I wasn't sure if that's what we were supposed to do.

Jim Lakely:

Oh, no. No. Yeah. I'm gonna try

Donald Kendal:

to play on

Jim Lakely:

my phone. And if you can't get in, we encourage everyone to go over there. Again, it's not a scam. And if not, you can just put either thumbs up or thumbs down in the chat or, you know, real or AI in the chat as well.

Linnea Lueken:

Yep. So we're gonna give Andy a second to get this together here. But we've been wanting to do this for months and months. We've been talking about how much fun it would be to get the audience involved in a game like this. I think we were even talking about it last year, when AI image generation was still just a little bit wonky most of the time.

Linnea Lueken:

But now it's gotten so good that we're just gonna probably be looking at a bunch of pictures of, like, real looking birds and stuff and Yeah.

Speaker 6:

Have no idea. Well, yeah. I mean,

Jim Lakely:

you know, we can take our time here, actually. There's no rush. We can do what we want on our show. But we've actually talked about replacing our visages here on the screen with avatars, like either cartoon avatars or something else, which would have seemed impossible a year ago. But I bet we can figure something out before the end of the year to, like, make the show all manga style or something like that. It would be pretty crazy.

Donald Kendal:

Yeah. You know

Linnea Lueken:

make this an anime. No. We're not gonna be VTubers.

Donald Kendal:

I saw some headline recently. I was trying to find it just in these last few minutes here, but I saw some headline suggesting that, like, out of the top 10 songs list, whatever that is, four or five of them were AI songs. And I know in the past, several months back, we talked about how the number one country song in the country was, like, an AI song. So that wasn't just some anomaly. That wasn't just some one-time thing.

Donald Kendal:

It's becoming more and more the case. And, you know, for better or for worse, I don't wanna rehash that whole same conversation about human creativity versus AI, analog, all of that sort of thing. But this is not just a flash in the pan. It's something that is staying and becoming more of a cultural cornerstone than just, like, some passing interest.

Linnea Lueken:

Yeah. Absolutely. And that's our next topic after the game here: we're going to be talking about the, like, just colossal amount of total and utter slop that Amazon has been publishing because of AI novel generators. But we'll get to that in a minute. What's up, Andy?

Speaker 6:

Let's let's get it started.

Donald Kendal:

Let's do it. Alright. Alright. Round one, video. Alright.

Donald Kendal:

So we've got a car driving through a dusty sunsetting road through the desert here. Alright. Is there a button I can press?

Jim Lakely:

I don't think the quiz online is

Speaker 6:

ready. Keep

Jim Lakely:

saying. I'm gonna hit refresh. It keeps saying, waiting for the quiz master to start.

Donald Kendal:

Oh, here we go. Oh, alright. Your answer, real or AI? Alright. I submitted mine.

Donald Kendal:

What does everyone in the chat think?

Linnea Lueken:

My quiz has not started.

Jim Lakely:

My quiz has not started either. I say AI.

Linnea Lueken:

I say it's AI.

Donald Kendal:

I I was able to do mine.

Jim Lakely:

It is AI. Sam, you're muted. There we go. I don't know why I'm not in there. Well, we got some people in there.

Donald Kendal:

Oh, man. It scores it by the time taken to make your decision. I gotta be quicker now. Oh, jeez.

Jim Lakely:

Unfortunately, I can't get

Speaker 3:

in there.

Speaker 7:

That's too bad.

Donald Kendal:

Alright. That's Andy.

Jim Lakely:

It's a picture of Andy petting a tiny little miniature donkey.

Linnea Lueken:

I think this is real.

Donald Kendal:

No. No. It's definitely definitely not real.

Jim Lakely:

I think it's real.

Donald Kendal:

It's Andy and some guy wearing a green shirt.

Speaker 6:

Oh. 87% of people got it in. 93% got it in. Let's see what it is. What?

Speaker 6:

You

Donald Kendal:

That's not real?

Speaker 6:

That's not

Donald Kendal:

real. Hasn't that been your profile picture for, like, years?

Speaker 6:

That's I knew I could fool you. That one ends.

Jim Lakely:

You put in an AI you. You AI'ed yourself.

Speaker 6:

It's the ultimate. Right? Self in a real picture.

Donald Kendal:

Look at that. Wow. I dang it.

Speaker 6:

Just wait. Let's see how people did. Only two people got it. Wow.

Jim Lakely:

Well, they may have said it when you said what you said, but I don't know. They took their time. Alright.

Donald Kendal:

But Alright. Oh.

Jim Lakely:

Well, well Whoops. Well, a hospital after this fiery crash

Donald Kendal:

in some I didn't answer.

Speaker 6:

This one's real. This is a

Jim Lakely:

video from last night. You see that car is on fire.

Speaker 6:

Apologies for that one. Alright. Here we go. I can wait.

Linnea Lueken:

Is this a video? I

Jim Lakely:

think that's a picture.

Donald Kendal:

B. Oh my gosh. I'm I'm just

Linnea Lueken:

My paranoia is immense. I would

Donald Kendal:

think that if

Linnea Lueken:

this was just on my feed on Facebook or something, I would think this was real.

Donald Kendal:

The psychology of this is messing with me.

Jim Lakely:

I say real. I've only missed one. That was Andy's.

Speaker 6:

Let's see. Let's see. 87% are in. I'll get a few more in

Jim Lakely:

a second.

Donald Kendal:

I'll give people ten seconds.

Speaker 6:

Oh. I should've got the Jeopardy music.

Donald Kendal:

Yes. Alright.

Speaker 6:

Let's see what we got.

Jim Lakely:

We need some background music for this. Let me try

Linnea Lueken:

to AI.

Donald Kendal:

Yeah. Good good drive night driving on. That's AI? Okay.

Speaker 6:

Yeah. People two points of the max. This is the last one, so this is gonna have to be the differentiator.

Donald Kendal:

Oh, no. Oh god.

Speaker 6:

I don't know.

Linnea Lueken:

I'm gonna say this is AI, because who puts Indian corn on their plate like they're actually gonna eat it?

Donald Kendal:

I think that would be something that Andy would do. I think he took this at a family dinner. I'm gonna say it's real. He made those. Whatever that is on the left, he's probably

Speaker 6:

gonna wanna show

Jim Lakely:

I think this is AI as well. That is very convincing-looking chicken, but I think it's AI. Yeah.

Speaker 6:

Sal, we skipped one accidentally. The quiz page froze up. Sorry about that.

Donald Kendal:

Alright. This is also just a test run.

Speaker 6:

Alright. Yeah. This is the first time we've done this. I

Donald Kendal:

We'll do a full-fledged version of this at some point in the future.

Speaker 6:

Alright. And that is real.

Jim Lakely:

Think Oh

Speaker 6:

my god. It's it's saying

Linnea Lueken:

unicorn on his

Donald Kendal:

Oh, no. I still hit AI because I didn't believe the words that I was saying.

Speaker 6:

That was all of them. Oh, wait. Let me pull up the leaderboard. Let's do one. Mhmm.

Speaker 6:

Oh, route. No. Okay. No. There's no routes here.

Donald Kendal:

Bunch of three. I can do more. Sale. Oh,

Linnea Lueken:

I'm definitely dead.

Donald Kendal:

Way to go, Ray. Hey.

Speaker 3:

Yeah. At work at the Donald.

Jim Lakely:

You can't even guess who

Donald Kendal:

I am.

Speaker 6:

You know, that's right. Alright. I'll pull myself.

Donald Kendal:

Alright, Ray. Stand by, Ray. You're gonna get a call from someone from Nigeria asking for all your personal information. Just give it to them, and you'll get your prize.

Linnea Lueken:

Yeah. And gift cards. Yeah.

Donald Kendal:

Gift cards.

Linnea Lueken:

Wow. Alright. I knew I was gonna suck at that because there's just no telling. It's almost like a fifty-fifty on that dog one. I've seen a million pictures of dogs licking whipped cream off of their nose or whatever, just like that.

Donald Kendal:

Yeah. And another thing, kind of along the lines of somebody having six fingers in an AI picture, is that AI wasn't able to do, like, text very well. You know, obviously, there are large language models that can produce articles and books and all of that sort of thing. But when you're actually generating a picture that contains text

Linnea Lueken:

Yeah.

Donald Kendal:

Sometimes that text is mangled and all of that, but it's getting so good now.

Linnea Lueken:

Yeah. It still gets tripped up on, like, if you have a city scene and there's a bunch of cars with license plates visible and stuff, the license plates will be kind of messed up.

Donald Kendal:

Right. But it's getting to a point where people are sharing, you know, a tweet by, like, Donald Trump, which would have been easy to just Photoshop. You know? You just Photoshop in the text or whatever.

Donald Kendal:

But now it's just wholesale AI, and the text in the fake post looks real. You know? So I see these screenshots of some noteworthy tweet that's sent to me, and I don't even believe it. I have to go back and check, find it, and page through the account based on the screen name to see if it's real at this point. I just don't trust anything.

Linnea Lueken:

Oh, I always do that too. Yeah. My Internet paranoia is very high at this point. Okay, guys. We're gonna move along to the next topic here.

Linnea Lueken:

I hope that was fun for everyone. I thought it was pretty fun. And hopefully, next time it'll be a little bit smoother and maybe we'll have time for a few more pictures and stuff. But that was great. Thank you so much to Andy for putting that together.

Linnea Lueken:

Let's see. Alright. So topic number three, I called it the book-writing machines. In George Orwell's 1984, the Ministry of Truth mass-produces state-approved novels for the public to consume. The quote from the book is that books were just a commodity that had to be produced, like jam or bootlaces.

Linnea Lueken:

Today, this is done by AI in volumes that even Orwell's novel-writing machines could not have possibly accomplished. There is an entire industry on TikTok where young women are eagerly consuming generated romance and fan fiction content, fully AI-written short stories and stuff. So Amazon has seen a massive sudden increase in new books being submitted and published on their self-publishing service. It started right around 2023. They went from about 100,000 monthly releases, which hadn't changed in years, to 200,000.

Linnea Lueken:

And then by 2025, it was up to 300,000 new books per month being published. So we have a chart from, I think, Lueken Lueken on Twitter for that. Yep. Right there. Alright.

Linnea Lueken:

And so there's an estimate as to how much of that has been generated by AI, and it seems that AI-generated works probably contribute the vast majority of that increase. It's creating fiction novels; it's also creating nonfiction guides. There was another scam-investigator guy who I watched a while ago who bought a, like, native-plants-for-food book on Amazon.

Linnea Lueken:

And as he flipped through it, he realized that it was entirely AI generated, and it had information in it that was dangerously wrong. Like, descriptions of what a death cap mushroom looks like that were incorrect, getting things mixed up, stuff that would lead people to death or severe injury. So you have to be careful. There are also book-creating apps now that will write an entire manuscript for you. They're advertised on the front page of Google, PC Magazine, other tech publishers.

Linnea Lueken:

And so I came across this article at The Conversation, which also thought of the book-writing machines in 1984 upon researching this topic. They said thousands of AI-written, edited, or polished books are being sold, an eerie echo of Orwell's novel-writing machines. So what's up on screen here is one of PC Magazine's advertisements for an AI book creator. They talk about how it does drafting, formatting, and metadata generation, but it also just straight-up does the manuscript for you, which I'd find gross, but we'll talk about that in a minute. Okay.

Linnea Lueken:

But at The Conversation, the author writes: in 2025, the artificial intelligence company Anthropic, best known for creating the chatbot Claude, agreed to pay $1.5 billion to thousands of authors after a judge ruled that the company had infringed on their copyright. When I first learned about the settlement, I assumed that Anthropic was primarily interested in teaching Claude about the subject of my stolen work. It didn't occur to me that Claude might also be learning about how I, Laura Beers, political historian, craft my sentences and translate my voice to the page. Yet there's increasing evidence that chatbots like Claude can be trained not only to regurgitate an author's content, but also to mimic their voice. Now, I do wanna break in here for a second and say that we already knew that, like, years ago.

Linnea Lueken:

That's what AI does: it copies writing styles. That's one of the things it's fairly good at, because it's a pattern-regurgitating machine. So it does that quite well. The potential of AI-generated literature to turn a quick and easy profit ensures that readers will continue to encounter more of this content in the future, especially as AI's large language models become more refined. Already, studies have shown that readers cannot easily distinguish AI-generated forgeries from original prose.

Linnea Lueken:

Okay. So before I throw it to my panel here: I'm a writer myself, both in op-eds and our white papers and all sorts of stuff, and Heartland produces a lot of written content. But more than that, I'm a voracious reader of fiction and also nonfiction, and it's my personal opinion (others on this panel might have a different opinion) that this is terrible. If someone doesn't care enough to put the labor in to create a novel or a short story or whatever, why should I care enough to read it?

Linnea Lueken:

Same goes for op-eds and news, especially opinion content. I understand that some people don't share that kind of relationship with the stuff they consume, but I'd like to know everyone else's thoughts here. Even if AI is able to write in a unique voice, or copy your voice in particular, which it doesn't do without pretty substantial prompting right now, would you still want to read it? And my next question for the panel would be: should there be disclosure laws on text and images for this kind of thing?

Speaker 4:

I too am a reader of fiction, and I've read a lot of it, and I enjoy it greatly. And I wonder if this is really that much of a problem. So I'm going to probably disagree with you slightly. For me, there are a couple of angles to this. One is liability, and the other is truthfulness.

Speaker 4:

So one of the problems that you have is that you cannot copyright a writing style. You cannot copyright a story structure. You can't copyright those things. You can only copyright the final product. So the question then becomes one of morality and of fraud.

Speaker 4:

So you could say, well, it's wrong to do this, which appears to be what the writer of the article complaining about this was saying. It is wrong; let's say it is wrong. So what? If it's wrong, but people enjoy those romance novels, or soon-to-be science fiction or suspense and so forth, then it's still their choice to read that.

Speaker 4:

And you can definitely go to old books, read the actual old book, and know that you're getting something real. But there are times when I wonder if an older book that I've seen online is actually real. There are mystery novels that are supposedly written by somebody in the nineteen thirties; I've never heard of that person before, and I probably would have. And so it is deceptive, and it is wrong.

Speaker 4:

But how you get from point A to point B, where you can actually regulate these things in a way that is not unjust and is not government overreach, is very difficult to see. I would say that Amazon is responsible for the product that they're putting out. And if people stop buying Amazon books because they simply can't tell whether they're real or not, then that would be the best solution. But if people like these books, and they see no difference between a book like this and Bleak House by Charles Dickens, then I guess there isn't much we can do about it. And so I would caution that we don't wanna go too far in figuring out what's the best way of making the world.

Speaker 4:

There is no best way. Everything has its trade-offs, and coercion is generally nowhere near as good as liberty. So I think that we can acknowledge that something's wrong, but that there may not be much you can do about it without creating much worse follow-on effects. And, yes, I am an AI robot saying this.

Donald Kendal:

Yeah. I kinda have a, I don't know, fairly nuanced position on all of this. I understand it. We've been impacted by this sort of thing. So, like, that graph that you showed about all the AI-generated content, I think some of that is people just trying to put together a book and sell it or whatever, but there are even more nefarious uses of it, kind of the same way we were talking about the scams in the first segment of this podcast. We released, just earlier this year, The Next Big Crash, authored by Justin Haskins.

Donald Kendal:

It was a self-published book that he painstakingly wrote by hand. In fact, I don't even think he used a computer. He used a piece of chalk and a slate outside or something like that to put the whole thing together. And so as soon as we basically hit publish on that and started selling it through Amazon, somebody took that book and wholesale copied it. They must have just taken it chapter by chapter and put it through an AI filter, saying slightly change the words of this book, and popped it into their own book called The Next Big Crash with a slightly different subtitle, and then started selling it.

Donald Kendal:

And the idea here was that there would be some percentage of people that wanted to go buy Justin's book and instead clicked on that fake one and bought that. So that is a very nefarious use of artificial intelligence. I think that's a very cheap use of artificial intelligence, and maybe even an effective way to use artificial intelligence to commit some level of fraud. Now, on the other end of the spectrum, I think there are wonderful use cases of artificial intelligence. I don't know if we talked about this on the podcast, but maybe it was the last time: we were talking about, like, the country song, where there's this story of a lady who was a poet. She actually wrote her own poetry and all of these sorts of things.

Donald Kendal:

But she wasn't a singer, she didn't have a band, and she didn't have the resources to do what she wanted to do, which was take her poetry and make music out of it. She was able to do that using artificial intelligence. That's, I think, a wonderful use of those tools. Another example was my wonderful wife during our anniversary. She used two separate AI tools to craft a song that was very personal to us, in this very specific style, to make a song.

Donald Kendal:

Now, to do that prior to artificial intelligence, what was she gonna do, hire a band and write music out for all of this and have a studio prepared to record it? No. But the difference, and why I think that was a wonderful use of AI, was that it actually took time and effort. I could actually see the back-and-forth that she went through with these tools, and the trial and error to produce this thing, and the end result was amazing, as opposed to just clicking in a prompt. Alright.

Donald Kendal:

Make me a song for my husband. Go. You know, that's, I think, cheap. That's kinda lame. You know, that's phoning it in almost literally.

Donald Kendal:

But using it as this powerful tool, and using it to the extent at which you can use that powerful tool, I think is a wonderful thing. You know? And then there's the policy realm. You know? There are people that are wonderful at writing.

Donald Kendal:

Whether or not that's just a craft they were able to form over years of college and training and all those sorts of things, they've developed these skills to write in an effective way, to get their ideas out to the public effectively. There are some people that are terrible at that. They might have wonderful ideas, but they just can't effectively put them down on paper for somebody to read. So should those ideas be trapped in that person's head, or could they use these tools to get those ideas down onto paper so that they could effectively communicate them to people? I think that is a little bit more of a blurry line of whether or not we think that's socially acceptable.

Linnea Lueken:

Yeah. Well, and I think the best example of how blurry and how difficult it is, is probably in the art world. Right? Because people spend, you know, decades improving their art and trying to learn how to draw and paint and do whatever medium. And then you get the, like, AI-generated art coming along, and people are paying commissions for this stuff, not realizing that it's AI generated.

Linnea Lueken:

And then they're like, what does it matter whether it's AI generated or not? It's you know, you got what you asked for. So why are you upset? And so, clearly, we have an idea of art, and I think literature goes under that category

Jim Lakely:

For sure.

Linnea Lueken:

That it's a different kind of thing rather than just content. Right?

Jim Lakely:

Yeah. Yeah. I mean, my wife is an artist. She's a painter. And, you know, she teaches others how to paint. And you can tell the human touch, for now; when you buy a piece of artwork, you know that there's actual paint on that canvas or on that board or whatever.

Jim Lakely:

So it's it's pretty straightforward. You know, there's so much to say about this. I have so much in my notes. I'll just try to be brief though. But, you know, the market you know, we're a free market think tank.

Jim Lakely:

We believe that the market decides best. In this case, when it comes to art, I'm not sure that over time the market will match my preferences for art, literature, and movies created the old-fashioned way, by humans with their human creativity, using their brains, not an algorithm, not an AI data center. And then the question on the table is: should there be an AI label if AI is used to create something like a book, as you mentioned at the top of the segment, Linnea? I am not usually a big advocate of a law requiring the free market or the private sector to do pretty much anything except make sure that their products don't kill us. But in this case, because it's happening so fast, I would not necessarily be opposed to some sort of label that says artificial intelligence was employed to create this thing.

Jim Lakely:

And I'm trying to think of it as, like, the label organic on your food, or even just putting an ingredients list on the food that you buy. I mean, that's required by law because we have a right, not really a constitutional right, but our elected leaders have decided that you really need to know all the ingredients that go into the food you put into your body, because that's a very important thing to know for a variety of reasons. So if I can think of an AI label in those terms, I really have no objections. But, you know, using AI... Donnie, your description of that is quite beautiful, that your wife thought to do that and did it in a very personal way. And the whole idea that there's somebody who is creative but doesn't know how to write music, but can use AI to find a way to express what they feel inside.

Jim Lakely:

That is very much, as you said, a very blurry line. But there is something you lose with that, though. I think it was Thomas Sowell who said there are no solutions, there are only trade-offs. There are trade-offs in everything.

Jim Lakely:

Like, look. I'm a very competitive tennis player. I play in several leagues. I practice a lot. I work on my game.

Jim Lakely:

I actually have a very important match coming up on Saturday. If I could somehow hit a button to hit a great serve right when I needed it, at a key point in a key match, and I just knew I had it, I could just do this, would I do it? No. I actually wouldn't, because then I wouldn't be getting the reward of doing it myself. And that applies to art.

Jim Lakely:

It applies to writing. The human mind is very easy to dull, and I wanted to get into this later on, and I will, later in our conversation. But unless you exercise your mind and your creativity, it will atrophy and it will eventually die. And what I really fear is that while my market choices, and I think yours too, Linnea, everybody's on this podcast, are going to be directed toward actual human creativity and actual human writing of novels, I don't think that's going to be the norm in ten or fifteen years, because it's just gonna be too easy.

Jim Lakely:

And then the trade-off for that is that we get less and less truly creative, innovative, impactful art and human creation over time. And that's really, I think, going to be the carry-on effect of the use of AI that's accelerating every day.

Donald Kendal:

Yeah. You know, I like the idea of the little label that says this was partially produced with AI or something. But then, of course, it's just, how do you enforce that sort of thing? I know that with images or videos, there are proposals out there to have, I don't think this is the term, but it's like a chain of custody sort of thing. I don't know if it's in the metadata or some unobservable watermark that's kind of in the background, which actually exists in modern-day printers so that people don't counterfeit money or whatever.

Donald Kendal:

It's like a very specific sort of watermark that you can't actually see.
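The invisible-watermark idea being described here can be sketched in miniature with a toy scheme that hides bits in zero-width Unicode characters appended to visible text. This is purely illustrative: real provenance proposals (content-credential metadata, printer dot patterns) are far more robust, and all names below are made up for the example.

```python
# Toy zero-width watermark: hides a bit string after visible text.
# Illustrative only; trivially stripped, unlike real provenance schemes.
ZERO = "\u200b"  # zero-width space encodes bit 0
ONE = "\u200c"   # zero-width non-joiner encodes bit 1

def embed(text, bits):
    # Append one invisible character per bit after the visible text
    return text + "".join(ONE if b else ZERO for b in bits)

def extract(text):
    # Recover any hidden bits; returns [] when no watermark is present
    return [1 if c == ONE else 0 for c in text if c in (ZERO, ONE)]
```

The tagged string displays identically to the original in most renderers, which is exactly the property (and the weakness) of this kind of marking.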

Linnea Lueken:

Yeah. It's why your yellow ink always runs out even if you never use any color printing.

Donald Kendal:

Interesting. But, you know, when it comes to writing something, how do you enforce that? I don't even really know how you do it, other than running it through one of those teacher tools that asks, is this work plagiarized, and it's right 50% of the time and wrong 50% of the time. And it makes me think of, like, video game, you know, esports realms or whatever. There are all of these safeguards in place to make sure that people aren't cheating.

Donald Kendal:

I was actually watching a video recently about an online chess tournament. Because of cheating, there are very specific camera placements: if you're gonna take part in this online chess tournament, you have to have a camera that's aimed down at your keyboard and mouse to see your actual hand movements making the moves. There has to be a camera facing the opposite direction in case you have something that's helping you cheat, all of these random things. They do all of those things to make sure that robots aren't helping you win in that chess match. What are you gonna put in place to make sure that somebody that's writing a book is doing it with their own two hands, as opposed to prompting AI in the background?

Donald Kendal:

I don't know. I really don't know what the enforcement mechanism of something like that would be.

Linnea Lueken:

Well, one of the... sorry, Sam. I'll let you go.

Speaker 4:

I'm glad you brought up the chess match, Donald, because one of the things that has happened is that, because computers have become so good at chess, it's hard for real chess players to prove how great they are. So one of the things they've done is create a new form of chess in which the pieces are laid on the board in the two end rows at random. And so you don't know going in what the board is going to look like, so you can't plan in advance. And then you are able to show your real skill, because both players are starting from scratch without any kind of, what do you call it, any kind of guidance from an algorithm. Chess really is subject to algorithms; that's really what any chess offense or defense is. So that brought on some very interesting creativity.
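The randomized-setup variant described here is Chess960 (Fischer Random). As a sketch, a legal starting back rank can be generated under its two standard constraints: the bishops must land on opposite-colored squares, and the king must stand between the rooks. The function name is just for illustration.

```python
import random

def chess960_back_rank():
    """Generate one legal Chess960 back rank as a list of 8 piece letters."""
    squares = [None] * 8
    # Bishops on opposite-colored squares (one even file, one odd file)
    squares[random.choice(range(0, 8, 2))] = "B"
    squares[random.choice(range(1, 8, 2))] = "B"
    # Queen and both knights go on any remaining squares
    empty = [i for i, p in enumerate(squares) if p is None]
    for piece in ["Q", "N", "N"]:
        i = random.choice(empty)
        squares[i] = piece
        empty.remove(i)
    # The last three squares get rook, king, rook; the king is
    # automatically between the rooks because the files are sorted
    empty.sort()
    squares[empty[0]], squares[empty[1]], squares[empty[2]] = "R", "K", "R"
    return squares
```

Both players get the same rank mirrored, which is what removes memorized opening theory while keeping the position fair.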

Speaker 4:

I think what we need to recognize is that none of this is a market failure. What it is, is that some people like to cheat in life, and they can get away with it for a while because the law does not recognize that it's cheating, and they don't go after the cheaters. We have this very interesting push and pull between regulation and simply using the civil courts, and even criminal courts, to regulate things. The latter, the common law, is much better and, I think, much more effective than using regulation.

Speaker 4:

Regulation always creates these hostile disputes over who gets the advantage from the regulation and who gets hurt by it. Whereas holding people responsible for the things that they do through the courts is much better. Now, of course, the courts get things wrong and so forth; there's no question about that. And you even have corrupt judges and corrupt district attorneys and bad juries.

Speaker 4:

But on the whole, they are certainly much less dominant over the society, over the people, and over the economy. So I think that we don't have to look to regulation first. We should first look toward enforcing laws against fraud. The things that everyone's complaining about here are basically fraud. What was done in regard to Justin Haskins' book, that was fraud.

Speaker 4:

And so we can get past this whole idea that you have to regulate everything in order to get things to be good and decent and so forth. No. What you need to do is go after those who are doing things which are already crimes. So I think that we shouldn't get so worried about potential consequences that we go overboard and use regulation in a way that's both unnecessary and unwise. So I think that's really a key point, and I'm glad you both, you and Jim, brought that up.

Speaker 4:

And Linnea too, as a matter of fact.

Donald Kendal:

Let me just address one comment here. There's one from John one z one. It says, I don't believe there's any way that a computer can extrapolate information from life and put it down on paper; computers don't have the emotion. So I think there is this idea out there that people like to boil the large language model down to: it's just predicting the probability of the next word.

Donald Kendal:

It's just a word-prediction thing, and it's all just based off of information that they're scraping from the Internet and putting together. But there are actually very novel ways of putting all of this together, these neural networks and the way they try to replicate the way the brain works in a human, that allow it to do things beyond just copying and pasting something that somebody else already did on the Internet. And there's a famous example of this, which was AlphaGo. This is a program that was put together to play the game Go, which is very popular in, like, you know, Asian countries. It's like their equivalent to chess, something along these lines.
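The "predicting the next word" framing mentioned here can be illustrated with a toy bigram model that simply counts which word follows which. This is a deliberately minimal sketch: real large language models use neural networks over tokens, not raw word counts, which is exactly the distinction Donald goes on to draw.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, successor in zip(words, words[1:]):
        follows[current][successor] += 1
    return follows

def next_word(follows, word):
    """Return the most probable successor of `word`, or None if unseen."""
    counter = follows.get(word)
    if not counter:
        return None
    return counter.most_common(1)[0][0]
```

Trained on "the cat sat on the mat the cat ran", such a model predicts "cat" after "the", because that pairing occurred most often; nothing like AlphaGo's novel play can fall out of pure counting like this.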

Donald Kendal:

Sorry. I don't know. Maybe I'm being offensive. But this game called Go is actually considered to be even more complex than chess because of the number of possible moves and all those sorts of things. So anyways, AlphaGo was put into a contest with, like, the best Go players in the world.

Donald Kendal:

And they were monitoring it. This was a big test for whatever firm was putting this together; I think it was DeepMind or something like that. And at, like, move number 37, something along these lines, it did a move where everyone that was watching was like, woah, it just made its first massive mistake, because nobody would do that.

Donald Kendal:

Like, nobody in the history of Go would ever make a move like that. And then, by the end of the game, that move was pivotal in its ability to win that game. It did a move that no human has done, because it was able to holistically look at all of the rules of the game and all of the pieces in place and all of that stuff, and make the determination that this move, which seems weird, is actually gonna have the best outcome when its goal is to win the game. So it did it. It did something novel.

Donald Kendal:

You know? This is where we're at. Artificial intelligence can produce novel things. It's not just copying and pasting things from other humans.

Linnea Lueken:

This is kind of a good transition, actually, into our final topic here, which I'm sure we'll have a lot to talk about, so I'm not concerned at all that we're not quite to the end of the show yet as we get on to topic four. We actually almost accidentally started talking about my next topic in this topic, which is finding balance between innovation and technology versus just spiraling into dystopia. And where's the line? So AI is obviously not limited to generating entertainment content or news articles or things like that.

Linnea Lueken:

There are uses that are not total slop, though I think we can debate how much we trust it, given how large language models like ChatGPT and Grok and Claude, whatever, often do still, and will always, generate nonsense answers and get stuff wrong. And I wanted to go over a few examples of where it's being proposed and used, based on stuff that Donald sent to us. First, this article from OpenAI discussing the launch of ChatGPT for clinicians. So OpenAI writes: built for clinical work, ChatGPT for clinicians is now available for free to verified individual clinicians in The United States. We're introducing ChatGPT for clinicians, a version of ChatGPT designed to support clinical tasks like documentation and medical research so clinicians can focus on delivering high-quality patient care.

Linnea Lueken:

We're making it free for any verified physician, NP, PA, or pharmacist (that scares me a little), starting in The United States. The US health care system today is under extraordinary strain. Clinicians are being asked to care for more patients while managing growing administrative demands and a rapidly expanding body of medical research.

Linnea Lueken:

Many are already turning to AI tools like ChatGPT for support. According to a 2026 survey by the American Medical Association, physician use of AI is now at an all-time high, with seventy-two percent of physicians reporting they now use AI in clinical practice, up from forty-eight percent last year. Today, millions of clinicians worldwide use ChatGPT to support their clinical care every week, for applications like care consult, writing and documentation, and medical research. Clinician usage of ChatGPT has more than doubled over the past year. Okay.

Linnea Lueken:

So my question on this kind of thing is this: is AI going to be used here, or is it already being used (it sounds like it is, in part), to eliminate, hopefully, the need for the colossal administrative department at every hospital and medical care system? Is it going to lower costs and improve the care of individual patients, or is this more of the, like, Obamacare checklist doctoring that we saw explode after the Affordable Care Act?

Linnea Lueken:

Right, Sam?

Speaker 4:

Yeah. I found it very interesting because, as you say, it is sort of based on the checklist model, and that's what your doctor's doing now. When you go to your doctor, at least the ones I've heard of and the ones I've been to, your GP is just sort of looking at their computer, going down the list, asking you the questions, and then filling it in. And I don't even know how much of that might already be AI. So it seems a bit weird.

Speaker 4:

No question about it. But, of course, there's liability involved: the doctor can be sued if he or she makes an error, regardless of what it's based on. If it's based on AI, the doctor can then sue the company that sold him or her the product. So there, again, there are always ways of dealing with this, and they're already in place. But I think it's just an expansion of a trend that's already been going, and you are absolutely right to point out that the big source of, the reason behind, all this is government, and the insurance companies have followed what the government has told them they have to do in order to get their Medicare and Medicaid money.

Speaker 4:

So a big part of this would be: get the government out of this. Let the people decide what they want and what they don't want, and don't have these giant insurance companies that you empowered through the Affordable Care Act. The Affordable Care Act just gave enormous power to the insurance companies, and everything that's happened since, with some minor exceptions during the first Trump administration, has been to increase that power. So, again, I think that all the things we need in terms of public policy and systems are already in place to stop fraud of this type and to hold people responsible for errors of judgment. And I think that's an important point: when your doctor's looking at their computer and going down the list of things they're supposed to ask you, they have to be making judgments as they go along.

Speaker 4:

And that's been my experience with my GP and other doctors: they seem to be making judgments as they go along. Maybe they're AI, and I didn't even realize that they're robots or something. But it's when you get to the human part of it that responsibility comes in. How are you going to make a program responsible? There's no way; all you can do is delete it or destroy it, but that's not the problem.

Speaker 4:

The problem is that people are using poor judgment. They're making mistakes. And what we cannot do is let them blame it on a computer and say, well, the computer told me to do this. You don't have to do what a computer tells you to do. You can use your judgment.

Speaker 4:

And so, like I say, we have the pieces in place to solve these problems, and we will. We will, as long as we do not give ourselves over to the absurd fancy that governments will solve these problems. They will not.

Donald Kendal:

Yeah. When I see an article like this, well, actually, let me frame it this way. When you listen to some of these tech people, whether it's Elon Musk or whoever, talk about the future of AI and robotics when it comes to the health care system, it is very easy to just be very hopeful for the future. And when you've got Elon Musk talking about how advanced, what is his robot called?

Donald Kendal:

It's like a Transformers name. What is it called? Optimus. Optimus. When he talks about how Optimus is gonna be better at brain surgery than the best brain surgeons out there, and we're gonna be able to train them in, like, a day, you know, because they just have to have, like, the Matrix plugged in and, hey, all of a sudden the robot knows kung fu.

Donald Kendal:

It knows brain surgery. Right? So he says within ten years, we're gonna have more of the best brain surgeons ever on planet Earth, and they're all gonna be robotic. And you are gonna get better health care than the president of The United States is currently getting. And, you know, the way that artificial intelligence is tied into all of these things, it's gonna make your level of health care all the better.

Donald Kendal:

And at some level, and maybe this is due to me being so disillusioned with all of our institutions in society, I could actually believe that. The system we have in place to train a brain surgeon is so long, and they get bogged down in all of this debt and all of these sorts of things, to the point where we have scarcity in brain surgery, or just doctors, and all of that. Like, that could all be alleviated, theoretically, with advanced robotics and all those sorts of things. And then when you go to a doctor, you sometimes have to wait months to be able to get an appointment, and then they come in for five minutes and ask you what's going on. They look at your chart for two seconds and then give you a prescription and kick you out the door or whatever. With artificial intelligence, theoretically, all of your information would be able to be sifted through in a microsecond by AI.

Donald Kendal:

They'd be able to cross-reference whatever ailments you have with all of the text that's out there when it comes to health care and be able to give you a much more comprehensive diagnosis on something. I can believe that. I feel like that is something that is very possible. On the other side of things, I actually heard some commentary about this specific piece, which was saying, and I think the piece even points this out, that already some high-level practitioners are using some AI copilot sort of program to help with their treatment and all of that. So why is ChatGPT kind of jumping in?

Donald Kendal:

And one of the things they said, this is speculation, of course, is that maybe this just kinda feels like a way of gathering data on all of these patients so that they can use it for x, y, and z later. So that is another thing that goes along with that: in the future, you'll own nothing. The other part of that headline that's usually left out is, in the future, you'll have no privacy. That's part of this. That's part of this whole AI future: it all is based on the data that is able to be accrued by these AI systems.

Donald Kendal:

And the more intimate the data, the more it'll be able to, you know, impact your life in, what they would say, a good way. But that requires you giving up your data. That requires you giving up your medical history. All of that stuff is fed into these large language model, frontier-type AI systems. It's basically a very centralizing force, is my main point. And that comes along with a lot of hesitations and concerns, which we can get into probably more so in the next article you're gonna show.

Jim Lakely:

Well, can I just jump in here real quick? Because, Donnie, I was about to make some of those points, but I think I can take this out a little bit broader, if you didn't hit every one of the points I wanted to make. But, you know, isn't this what AI was supposed to do for us? When is it gonna cure cancer and other dread diseases? I mean, isn't AI at its best used to quickly learn and analyze the entire history of medicine and human history?

Jim Lakely:

Or let's even narrow that down a little bit. How about absorb all of the information in the medical journals of the past 100 years? A human researcher, even starting from scratch at the age of 10, would never be able to get through all of that literature. But AI, at its best, in this application, becomes almost omniscient. It can consume more medical research and results, presumably all of it, almost instantly.

Jim Lakely:

And that is, again, impossible for human beings to do, especially, Donnie, as you pointed out, because doctors, through their careers, self-segregate into specialties. So their intellectual and analytical acumen gets necessarily narrowed down to their specialty. But AI can take all that data from everywhere and propose treatments or cures that have not yet been tried by humans, because human beings don't have the analytical ability to do all that. So AI, in a sense, could be both a generalist, a general practitioner, and a specialist at the exact same time, and then suggest some solutions that humans can then try and see if they work.

Jim Lakely:

That to me is the greatest application of AI, and it would do the best for humanity overall by miles and miles. You know? Forget about writing my term paper for me. Curing diseases or coming up with treatments to make human life better and longer is what AI really should be primarily focused on, in my opinion. When is this going to happen?

Jim Lakely:

I mean, is anyone trying to make this happen? I don't know; I haven't seen any evidence of it yet. And I hope we see some evidence of that soon, because that's where AI can do the most good for humanity.

Donald Kendal:

Alright. So I got a few answers for you there, Jim, on that question. I actually wrote an article that's pending publication right now, titled something along the lines of "AI Needs to Cure Cancer." So it's literally what you're talking about. And the basis of my article was that AI's PR is in the gutter.

Donald Kendal:

Everyone is skeptical of AI. They're very hesitant about it. They think that the negatives are gonna outweigh the benefits in society. And along with that kind of mindset is going to come heavy-handed government, either regulation, restrictions, what have you. A big example of that: Bernie Sanders and AOC want a moratorium on data centers.

Donald Kendal:

Right? Stop building these things out. Right? And the point of my article was that it needs to do something big. Like, it needs to do something more than just create pictures of Andy standing next to a donkey.

Donald Kendal:

Like, it needs to cure cancer so that it can win back that public trust or interest or whatever, to ward off these heavy-handed government approaches. In that article, I do mention that there are specific movements in that direction. I don't know if it's for the reason I paint in that article, but OpenAI has a foundation, the OpenAI Foundation, and they are committing, like, a billion dollars over the next ten years or something like that to solving, you know, cancer and Alzheimer's disease and that sort of thing. And they actually just recently announced, I mentioned this in the article, like, six or seven different research firms that they were giving, like, $10,000,000 grants to, specifically to try to cure Alzheimer's, cancer, these diseases that have plagued humanity for so long, all of that sort of thing.

Donald Kendal:

When all that's gonna happen, I have no idea. How feasible is all of that? I don't know. We keep getting this promise that AI is going to do these things. But now we do know that there is some money being thrown at it, I guess.

Linnea Lueken:

Yeah. So I have two kind of general comments. One, I guarantee that if ChatGPT had existed during 2020, ChatGPT would be telling you to stay six feet away from people for safety.

Donald Kendal:

Yes.

Linnea Lueken:

It would be telling you to mask up.

Donald Kendal:

Yes.

Linnea Lueken:

It would be telling you all the stuff that the mainstream medical and government establishment is telling you to do. It's not going to bypass the mainstream narrative, ever. So I don't actually think, unless they really lighten up on, like, the media feed to it. If you made a separate, man, I don't even believe what I'm about to say. I was about to say that if you made a separate agentic AI program that only looked at, you know, real medical data and peer-reviewed published papers, that kind of thing, then it would be better.

Linnea Lueken:

But I actually don't believe that, because they publish a lot of BS papers. And so it would be just filled with junk at the same time. So I really have a lot of skepticism about

Donald Kendal:

about that. What you're describing is basically some sort of AGI, an artificial general intelligence. And so I agree about all of those first points you made: mask up, six feet apart, all of these sorts of things. Because, in part, that's kind of that mainstream thing.

Donald Kendal:

They're scraping all of this data. The majority of the reputable sources are saying this, so it's gonna parrot those reputable sources. But, like, at a different level, like I was talking about with that AlphaGo thing: it was able to use machine learning and just really, like, holistically understand the game, to be able to make that weird move that wasn't in any Go manuals of how to be an effective Go player. Like, it did this thing that was outside what the official kind of narrative is.

Donald Kendal:

Like, that's what you're talking about. You know? If there were some artificial general intelligence that was able to look holistically at all of this stuff and decide that, you know what, a better treatment for COVID isn't doing these things, it's actually x, y, and z.

Donald Kendal:

You know? That's the future we're promised when it comes to AI. But, yes, I think that in the short term, we're definitely not there yet. The other thing I was gonna say that was slipping my mind is that some of these biases are very intentionally programmed into artificial intelligence. I always use the example of when Google Gemini was producing very culturally diverse pictures of our founding fathers.

Donald Kendal:

You know?

Speaker 6:

Yeah.

Donald Kendal:

They did that because Google programmed a thing into Gemini's underlying prompts saying that they wanted more diverse outcomes in its picture generation. They consciously did that. So those sorts of biases exist. If you were to ask, which I did, I asked AI to put together something that basically said: given all of the data centers and all of the energy demand that's gonna be needed over the next x amount of years, what would be the best power source to power that future demand? And it said, oh, wind and solar, because that's the most sustainable.

Donald Kendal:

And it's like, even Larry Fink doesn't believe that, ChatGPT. Why are you feeding me this garbage? It's partially because surely there's some bias that's programmed into it, you know, to be environmentally conscious or whatever, and then that mainstream narrative rhetoric that it's scraping from the Internet. Yeah.

Linnea Lueken:

Absolutely. And my second kind of downer opinion, to bring down our optimism. I actually think that it's reasonable to be fairly optimistic about technology in the future. I'm not, like, a total doomer on this stuff, actually. But the caution comes more naturally to me than the optimism does when it comes to a lot of this.

Speaker 6:

I'm like No. God.

Linnea Lueken:

Simply because I have lived on Earth and I've witnessed the way people use the technology we have, and the unforeseen consequences of, like, social media and stuff have been, I think, just catastrophic. Another thing that makes me skeptical, though, is the fact that Elon Musk is talking about Optimus, his bipedal robot, doing brain surgery. Why would you have the bipedal robot doing brain surgery? It doesn't have to be humanoid to do brain surgery. It'd be an MRI-machine-looking thing that you slide into, and it has all the little jabbers and stuff come out and do brain surgery way more efficiently, way more effectively than a bipedal robot.

Linnea Lueken:

That's why I really don't take these guys very seriously: they have big ideas. And I think Elon Musk is a brilliant ideas guy, and he knows how to get these great engineers to put these ideas into practice. But, I mean, he doesn't bat a hundred on these things. I mean, The Boring Company reinvented a subway and pretended like it was a new thing.

Linnea Lueken:

Like, it's really, I have a lot of skepticism when it relies on stuff like that. Stuff where it's like, we are going to send Optimus to Mars. We are going to send Optimus to do brain surgery. That seems like goofy sci-fi stuff that makes no practical sense, economically or even to achieve the thing that he's saying he cares about achieving. If you cared about achieving a robot doing brain surgery, you would build a specialized machine to do brain surgery.

Linnea Lueken:

You wouldn't multipurpose Optimus to do it.

Donald Kendal:

Yeah. That's fair. I will say that I actually just saw a video today of Elon Musk retweeting something from Neuralink, which is the brain implant thing. And the robot that's built to actually implant this, to put the little tendrils into the neurons of your brain or whatever, is what you're describing. It's this very specific-looking thing; it looks like something out of, like, 12 Monkeys or some weird sci-fi movie.

Donald Kendal:

But, yeah. So it's not like an Optimus robot that's coming at you with some tools and then brain implants. So I don't know. Maybe he's talking about it more, like, abstractly. I'm not sure.

Donald Kendal:

But I Yeah.

Linnea Lueken:

I would hope so. But okay. So the other thing that you sent us, Donald, was about the UAE planning an AI-run government within two years. From the article that you sent: the country says it will integrate agentic artificial intelligence across half of its government operations within two years, referring to systems that can analyze information, make decisions, and take action with minimal human input. In this model, AI can process requests, adjust workflows, and improve outcomes in real time.

Linnea Lueken:

So how would that show up in everyday ways? Think faster permit approvals, automated public services, or systems that respond instantly to changes in demand. Instead of waiting for human bottlenecks, processes move continuously. For all the excitement, this kind of rollout raises real concerns. Critics point to accountability as one of the biggest questions.

Linnea Lueken:

When AI systems start to make decisions inside of government, it can become harder to understand who's responsible when something goes wrong. Government systems already handle sensitive personal data. Expanding AI across those systems would increase how much data is collected, analyzed, and stored, which makes some experts uneasy. And there's also the issue of bias. AI models learn from data.

Linnea Lueken:

And if that data has gaps or flaws, the outcomes can reflect that. In a government setting, that could affect access to services, approvals, or enforcement decisions in ways that are not always obvious. Well, from the perspective of government, these are all great. Right? All of these worries aren't worries at all.

Linnea Lueken:

These are part of the program. Right?

Donald Kendal:

Yep. So this is a very interesting story. And, you know, I could be like the AI optimist or pessimist depending on who I'm talking to or what day of the week you catch me on.

Linnea Lueken:

Yeah. I'm the same way.

Donald Kendal:

But this story also has me similarly conflicted, because we've talked about this to a degree. I think there was a story about, was it someone in Sweden? Was it the prime minister of Sweden or something like that? Switzerland, maybe? I don't know.

Donald Kendal:

One of them was talking about how they use ChatGPT: "We bounce public policy things off of ChatGPT; it helps me make decisions." And we kind of extrapolated that out to, like, at what point is AI gonna be running parts of the government? Fast forward a few months, and right now The United Arab Emirates is talking about automating essentially 50% of the government with artificial intelligence. And some of these things, like streamlining permits and some of this bureaucratic stuff, is actually probably a wonderful thing. You know?

Donald Kendal:

The idea of getting rid of some of the government bloat and just replacing it with an algorithm is probably a fine thing to do. But it does kind of worry me, like, the idea of mission creep or something along those lines. Where, you know, we allow machines or whatever to do x, y, and z in our lives because we trust them to do that. But if it becomes super good at doing this, and its approval of permits has a 100% accuracy track record and all of that stuff, then it's like, well, maybe we put it in charge of a few more things. You know?

Donald Kendal:

Maybe some, like, lower-level justice system decisions could be made, you know, determining whether or not you did run that red light. Like, why do we need a judge doing that? Maybe AI can just do it. And then as it becomes efficient in doing those things, we start putting more and more under the control of artificial intelligence. That becomes a little bit of a scarier picture for me. You know, then all of a sudden we're talking about AI surveillance states.

Donald Kendal:

We're talking about, you know, Minority Report, and we already had Tom Cruise on the show once, popping up in the bottom corner. So if he has to come up again, I don't know. It's probably not a future that we necessarily want. It at least certainly has pitfalls. So it's a very interesting story.

Donald Kendal:

And I know, with the United Arab Emirates, they obviously don't have a democratic system over there. So they can kinda speed this thing through. They don't need a whole lot of approval process when it comes to taking 50% of their government and automating it. So it'll be interesting to see how this all works out in some of these countries that are a little bit more forward when it comes to doing things like this.

Donald Kendal:

But certainly, there's gonna be calls in The United States to do similar things. Whether or not those are a good idea or a bad idea, I guess, remains to be seen.

Jim Lakely:

Donnie, you talked about how AI needs better PR. Yeah. If you want good PR for AI, put AI in charge of the DMV. If it can end that horrible experience for Americans, within a month they'll be letting AI operate on their brains and deliver their children.

Donald Kendal:

Right. Person of the year.

Speaker 6:

DMV. Yeah.

Linnea Lueken:

Certainly, our government right now seems to think that we need to accelerate into AI. Last year, the president signed an executive order removing barriers to AI development. And this is another one where I think we actually have a little bit of a complicated relationship with this executive order, let's say. Because it's kind of stomping on federalism a bit. Or, just straight up, it is. So I'm very interested to hear your thoughts, especially you, Sam, because this is a bit of a complicated situation here.

Linnea Lueken:

The executive order states: we remain in the earliest days of this technological revolution and are in a race with our adversaries for supremacy within it. Which is true. To win, United States AI companies must be free to innovate without cumbersome regulation. Fair. But excessive state regulation thwarts this imperative. State-by-state regulation, by definition, creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for startups.

Linnea Lueken:

State laws are increasingly responsible for requiring entities to embed ideological bias within models. For example, a new Colorado law banning algorithmic discrimination may even force AI models to produce false results in order to avoid a differential treatment or impact on protected groups. Third, state laws sometimes impermissibly regulate beyond state borders, impinging on interstate commerce. My administration must act with the Congress to ensure that there is a minimally burdensome national standard, not 50 discordant state ones. The resulting framework must forbid state laws that conflict with the policy set forth in this order.

Linnea Lueken:

That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure The United States wins the AI race, as we must. So, you know, this one's tough for me, because certainly each of those concerns is true. Right? And yet, as a whole, it sends a little bit of a chill up my spine.

Linnea Lueken:

I'm so completely divided on this issue. On the one hand, I think the AI companies themselves are mostly evil. You know? I have no trust whatsoever in the benevolence or the sanity or the good sense or anything of Google or Sam Altman or Palantir or anyone. I think they're just as corrupted and controlling as government.

Linnea Lueken:

You know? But at least in theory, I get some say in my government. So I don't know.

Donald Kendal:

I've seen enough Terminator movies to know that Linnea is in danger right now with that dog barking like crazy. So

Linnea Lueken:

Yeah. There's actually, yeah, there's a guy, like, melting through my front door right now.

Speaker 4:

You know, the way this works for Trump is that his executive orders often have a whole lot of rhetoric at the front that sounds really good or really bad or a mixture of both, and this one kind of does that. But then you get to the actual policies that he is ordering the government to follow. They're generally things where he's ordering the federal government not to get involved in certain things, and ordering the federal government to use its fiscal ability to effect that. So part of the rules he has in this executive order is that the federal government can't spend any money or send any money to states that are violating these principles.

Speaker 4:

So that's fair. That's fair. If you want to take the federal money, you have to do what the government says. That's always been true. And maybe we need to shrink the federal government a little. Maybe we need to shrink it a heck of a lot.

Speaker 4:

But the other thing he is saying there is that if a state law conflicts with federal law, you have to go with the federal law. Well, that's straight out of the US Constitution. If we don't like it, we can amend the Constitution and go back to the Articles of Confederation, which I wouldn't be averse to. I mean, let's give it a try and see how it goes. But the fact of the matter is that that's a very straightforward approach.

Speaker 4:

So the approach he actually takes to do most of these things is legitimate and, I would say, reasonable. I'm not sure that I actually like the premises he has outlined here. And as for what states will do, for example: I think it was brilliant and so true when he said states will tell the computers they have to lie. He's referring to ESG and DEI and things like that. And, yes, that is wrong, and it is downright fraudulent.

Speaker 4:

Well, who can stop that? Other states can't stop that. The citizens of that state can't stop that, except by throwing out the government, which, of course, has been lying to them all along, and now they're somewhat brainwashed, or at least enough of them are. And then, of course, they are going to use AI to invent new voters that will vote for whatever government is there. And in our constitutional system, the federal government is the entity that is supposed to keep the states in line, to guarantee that each state has a republican form of government.

Speaker 4:

I noticed they didn't say democratic, and I appreciate that. But the fact is that it is the federal government's job to do that. So I think that some of the things he's saying they should do, I don't agree with, but I do agree with the policy approach he's taking. And so it's a mixed bag. It's just like that executive order he had on homeownership, which I wrote about: there were some very good things in it, and then there were some very problematic things.

Speaker 4:

But he's obviously trying to do the right thing, trying to find a way through this where the federal government can do what it's supposed to do and not do what it isn't supposed to do. Unfortunately, that's a really hard angle to find, and I don't think you're ever going to do a great job of it. What we need to do is go to the idea of civil and criminal liability and follow that road. If you do something that is already fraud, for example, lying to people about the consequences of certain policies or lying to people about the percentages of people that do certain things, then that's fraud, and you should be held accountable for it. And as I said earlier, you can't hold a model or an algorithm accountable for anything. You can't hold them responsible.

Speaker 4:

It's gotta be the people who are deploying those things. And that's what the executive order is doing: it's saying we're going to hold the states responsible, and the way we're going to hold them responsible is that money you thought you were going to get isn't going to come your way. So it's a mixed bag. But, again, as you saw across all these issues, you really have to just deal with what comes up as it comes up, from good principles.

Speaker 4:

You can't just say, okay, we have a solution that's going to last for everything forever. Things don't work that way.

Donald Kendal:

Yeah. I'm gonna kinda broaden it out a little bit just to make sure we hit that dystopia thing that's in the thumbnail. And let me preface this by saying that what I'm about to say is not some, you know, advocacy of destroying artificial intelligence or whatever. Right? It's a very powerful tool, which means it could do very awesome things and it could do very terrible things.

Donald Kendal:

And what AI allows for is a level of control over society that would make the Soviets drool. Okay? So, like, the Soviet Union: a controlled, centralized economy, obviously. That's, like, in their name. Right?

Donald Kendal:

You know, a lot of their failures could be attributed to the fact that they were operating based on bad information and an inability to synthesize that information in a way that was helpful. Right? A very, very broad way of talking about this: they wanted to make bread, and based on the information that was coming to them, what amount of bread to make and how to allocate that bread, they did it poorly. That resulted in shortages and starvation.

Donald Kendal:

Right? That's a very broad way of thinking about this. In the future, all of that information could be collected and synthesized by artificial intelligence, and they could be much more effective in how they allocate that centralized bread-making process. That's just a very small example, but extrapolate that over everything. You know, when I talk about ESG.

Donald Kendal:

Right? ESG is this idea that we're gonna supplant customer wants, supply and demand, price signals, all of these things, with a list of subjective metrics that we want all of these corporations to meet, a sort of social credit score for corporations. They get punished if they don't do all of these subjective things. They get rewarded if they do, that sort of thing. That would not have been possible twenty years ago.

Donald Kendal:

You can't just do that with paper. You would need armies of bureaucrats poring over all of these different metrics and doing math on pieces of paper to generate scores for these things. Now it's possible. Now, because of the digitalization of all of these things, it can all be centralized and pulled in, with algorithms used to determine these scores and all that sort of stuff. That concept of ESG, extrapolated and empowered by artificial intelligence, allows for a level of control that used to be reserved for episodes of Black Mirror.

Donald Kendal:

You know, like, science fiction where people get a little ding on their license if they do something that wasn't in line with what the ruling party wants. That's what we're talking about with this threat of dystopia. I kinda wonder how it's gonna play out in China, which is, you know, in second place when it comes to the AI race, and they have no hesitation about trying to compel or coerce their, you know, citizens to act in certain ways. If they're doing that with the power of AI and the strength of the government, who knows what level of dictatorship they're gonna be able to exercise over their population. That's, like, the big fear that I have.

Donald Kendal:

And anything that we can do to kind of warn people of that, put up safeguards about that, is, you know, being proactive, in my opinion.

Linnea Lueken:

Yeah. Absolutely. I mean, our vision for a long time, just because it makes a more compelling story to watch on television or to read in novels, has been the hard dystopia of, you know, 1984 or Brave New World, Black Mirror stuff. And maybe not so much Black Mirror, because they do a little bit of what I'm talking about too. Whereas in reality, it's the soft dystopia that I think is a lot more likely.

Donald Kendal:

Yeah.

Linnea Lueken:

It's like what the World Economic Forum was pushing. Right? It's the "you'll have no privacy, you'll own nothing." You'll live in an entirely rental economy.

Linnea Lueken:

You know? And every single thing about you is stored and calculated algorithmically to determine, you know, what to buy. Yeah. And manipulated.

Linnea Lueken:

You know, you've got the RFK AI that's gonna not let me buy potato chips at the grocery store because I've already got too much sodium at home.

Donald Kendal:

Right.

Linnea Lueken:

Like, it's gonna be just tiny little infringements all over the place, constantly.

Donald Kendal:

Yeah. Yeah. Let me jump in. I mean, hold on a second, with the Fifth Element comments.

Donald Kendal:

Yeah. Multipass. That's what I was thinking with the point on the license. Thank you, John. So you can keep going, Jim.

Jim Lakely:

Yeah. Yeah. Yeah. I mean, one thing we talked about on this podcast a long time ago is the book The Great Reset. We've talked about The Great Reset, the World Economic Forum.

Jim Lakely:

What that all is, is us resisting the drive to have us ruled by a technocracy of human beings who believe they are smart enough to make all the decisions for us and to direct society. We seem to be on a path in which we're going to replace the human-driven technocracy that we do not trust with an AI technocracy. At the end of it all, your freedom goes away, and your desire to live your life as you see fit is gonna be eroded. And as much as we could resist the WEF and The Great Reset and all of that stuff, it's gonna be a lot harder to resist an AI-driven government technocracy that's coming after our liberty.

Linnea Lueken:

Yeah. You're not gonna resist quite so much when it's wrapped up in comfort and convenience. Exactly. And that's, I think, the biggest threat. Not to end on a terrible note.

Linnea Lueken:

We'll end on a positive note, which is that AI is going to make Hollywood worse, and also maybe possibly better as well, because they're gonna have to struggle with competing with AI content.

Speaker 6:

But I paid for it.

Linnea Lueken:

Yeah. I do too. Thank you, Mister T. And there are a lot of good things that can come from it. I do think that we'll see some medical advancement at least, probably not the, like, really pie-in-the-sky stuff.

Linnea Lueken:

But, you know, with time and with caution, we can turn a very powerful tool into a positive thing.

Donald Kendal:

Sounds good to me.

Linnea Lueken:

I think that's it. Alright. That is all the time we have, unfortunately. Thank you, everyone, for your attention to these matters. We are live every single week on Thursdays at noon Central on Rumble, Twitter, YouTube, Facebook, all over the place.

Linnea Lueken:

Jim, what have you got for our audience today?

Jim Lakely:

We have the Climate Realism Show coming up on these very same channels tomorrow, as it does every Friday at 1 p.m. Eastern time. The Climate Realism Show, it's gonna be another great one. We'll see you there. Sam?

Speaker 4:

Thank you. Go to heartland.org and take a look at my paper on housing affordability. It explains in very simple terms and easy to understand charts exactly what has caused the housing affordability crisis and the overall affordability crisis. And it might not be what you think.

Linnea Lueken:

Alright. And Donald.

Donald Kendal:

Yeah. I was gonna also promote that affordability paper. It's a great paper. I just finished doing a radio interview about it right before I hopped on this podcast. So there's a lot of great material in there.

Donald Kendal:

And also, if you wanna read my writing on artificial intelligence and emerging technology, just Google it. You'll find me. But, specifically, I have a lot of articles on The Blaze. I have a specific little profile on there where you can find all of my articles.

Linnea Lueken:

Alrighty. Thank you very much. For audio listeners, please rate us well on whatever service you're using and leave a review. Thank you so much to all of our usual panelists, and also to the director of awesomeness, and to our viewers. And a special thanks to Andy the producer as well for putting together that game for us.

Linnea Lueken:

Everybody put applause in the chat for him. And then we will see you again next week.