Practical AI

As AI increasingly shapes geopolitics, elections, and civic life, its impact on democracy is becoming impossible to ignore. In this episode, Daniel and Chris are joined by security expert Bruce Schneier to explore how AI and technology are transforming democracy, governance, and citizenship. Drawing from his book Rewiring Democracy, they explore real examples of AI in elections, legislation, courts, and public AI models, the risks of concentrated power, and how these tools can both strengthen and strain democratic systems worldwide.

Sponsors:
  • Framer - The website builder that turns your dot com from a formality into a tool for growth. Check it out at framer.com/PRACTICALAI
  • Zapier - The AI orchestration platform that puts AI to work across your company. Check it out at zapier.com/practical

Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Host
Daniel Whitenack
Guest
Bruce Schneier
Bruce Schneier is an internationally renowned security technologist and the New York Times bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. He is a Lecturer at the Harvard Kennedy School, a board member of EFF, and Chief of Security Architecture at Inrupt, Inc.

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Jerod:

Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.

Jerod:

Now onto the show.

Daniel:

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at PredictionGuard and joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Chris:

Hey. Doing great today, Daniel. How's it going?

Daniel:

Oh, it's going good. You know? Lots of interesting things to talk about as we head into the new year, and especially a lot of people thinking about how technology, and AI especially, is impacting both our daily life and geopolitics and all sorts of things. And we're really privileged today to have with us Bruce Schneier to talk about some of these things. Bruce is a fellow at the Berkman Klein Center for Internet and Society at Harvard University.

Daniel:

And we'll also be discussing a little bit his new book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship. Welcome, Bruce. Great to have you on the show.

Bruce:

Thanks for having me.

Daniel:

Yeah. As I mentioned, a lot of these topics are on people's minds going into this year: looking at things that are happening in the news, how AI is factoring into those, how there are various disruptive things happening across our world, how AI might factor into those. Maybe you could just open us up by setting the stage for why this combination of things was important for you to think deeply about and write about, this intersection of AI, democracy, and citizenship.

Bruce:

You know, I've been writing about AI and technology for a while, and it felt important to talk about AI not just in a corporate context or in a financial context, but in a democracy context. I mean, it is going to affect kind of every aspect of society, because society is about people, and AI is in some ways an artificial people, you know, of varying quality and capability. And we can think about them in terms of companies and consumers and workers, but we also think of it in terms of citizens. And I wanted to look at AI and democracy, how the tools interact, how AI will affect democracy. My co-author is Nathan Sanders.

Bruce:

He and I have been writing about AI and democracy. And you know, someone very smart once told me that you should think about writing a book when you start having book-length ideas. And when our essays sort of turned into something more in our heads, we thought about writing a book, because there's a lot going on here. I mean, you know, everyone thinks about deepfakes and they stop. But you know, to me that is the least interesting thing about AI and democracy.

Bruce:

There's so much more that's interesting. And we tried to cover all of that in the book. And I think we did a good job. It was a lot of fun to write.

Chris:

It feels like we're at a very particular point, in that there are several key elements coming together in all of our lives right now. We obviously have AI disrupting lives, changing the way we live and work, and what security means, obviously. But we also have these geopolitical events happening in the world in terms of what democracy is, both here in The United States and abroad. I'm curious, as you're looking at all these big world events that are impacting everybody on the planet now, how do you see all these coming together in a big picture? It feels like your book is very relevant right now.

Chris:

And so can you talk a little bit about the book in the context of all these things that are happening in our lives and in the news right now?

Bruce:

Yeah. So it's interesting. There's a lot that the book doesn't say about what's happening in the news. The book is really about how AI can make democracy better. Now, in my mind AI is a power-enhancing technology.

Bruce:

If you like democracy, AI will help you make democracy better. If you hate democracy, AI will help you make democracy worse. It doesn't have an intentional stance. It takes the stance of the people who wield it. So I guess I wanna start by laying out the breadth of what I'm talking about.

Bruce:

So again, more than deepfakes, the book is divided into five basic parts. In the first part, we look at AI and elections. And that's everything from authorized AI avatars that are used in Japan and in Brazil and other countries to interact with voters, to AI being used for different aspects of campaigning, setting up websites, doing messaging, AI and polling, AI helping with get-out-the-vote door-to-door knocking, sort of all of AI and politics. The second part is AI and legislation.

Bruce:

How AI is helping write, amend, debate, and pass laws. And this includes a French AI model that lets legislators write better law, to a Chilean model that looks at legal interactions, sort of all the different parts of AI and legislating. In part three, we talk about government administration, ways that we can use AI to make government more responsive. Now, Elon Musk can use AI to make government less responsive if he wants, but there are ways to use AI to make government more responsive: to figure out benefits, to audit contracts or citizens or different compliance documents, to, you know, help the patent office look for prior art. I mean, all sorts of things.

Bruce:

Part four is AI in the court system, different ways AI can make the courts more efficient. And this ranges from Brazil using AI to help schedule judges and cases to make their courts run more efficiently, to judges using AI to maybe query the plain meaning of a term. And our fifth part is citizens: how AI can help citizens organize, make their voices heard, advocate, figure out who to vote for and who to support, to be better citizens.

Bruce:

So that's the breadth. We have examples from all over the world. This is not a US book. This is very much a, here are cool things happening in so many different countries. And we're largely looking at the good things.

Bruce:

We don't talk about how AI can be used to save democracy or destroy democracy, right? This is really: if you have a democracy and you like it, here are ways AI will change what's going on. Some of them positive, some of them negative, but it really is AI in a functioning democracy. We wrote it mostly in 2024 and then into 2025. So it's not really about what's going on in The United States right now.

Bruce:

It's more about democracy in general.

Daniel:

Yeah, I really like how you're framing it. Number one, it looks at this across these different areas or sectors or geographies, what have you. But also, I love what you said about AI being more than deepfakes. There are two elements this triggers in my mind when you're talking about this. One is that AI itself is quite a broad term and covers many things. And secondly, the ways it can be applied, to your point, might be positive, might be negative, or there might be security-related issues.

Daniel:

If we tackle the first piece of that: some people might hear, oh, AI as applied within the courts or within legislation, and think about their own personal experiences interacting with what is most visible as AI, these general-purpose chat systems, understanding that they make mistakes and all of this stuff, and sort of be worried about, well, is AI good enough? Is it a destabilizing force for those systems? Is it a stabilizing force? And is that even the type of AI we're talking about? Do you have any thoughts for those people out there who are coming at it from that perspective of generative AI only,

Daniel:

wondering if it's not good enough for these kinds of contexts?

Bruce:

It depends what kind of contexts there are, right? So people's most common interactions with AI are probably their mapping app on their phone, Google Maps or Apple Maps, and the algorithmic feed of their social media. Those are probably the two places where people interact with AI the most. Notice most people don't recognize that as AI. Notice that neither of those is chatting, right?

Bruce:

Neither of those is producing text. They are both predictive AI. You know: which video will keep you on the platform longer; is turning left or right likely to get you to your destination faster. When people look at chatbots, I think largely their opinions were formed at a moment in time, possibly two years ago. And maybe they used it once and it made a mistake, and they said, this thing is stupid.

Bruce:

And you know, technologies very broadly are changing all the time. So when you say, is it good enough? The question is: compared to what? So here's a non-democracy application. I was at a conference in Toronto about a month ago.

Bruce:

It was about medical uses of AI. And there was an experiment with AI in an emergency room. So this was in an emergency room in Canada; it might be true in The U.S., I don't know.

Bruce:

After a case comes in, a bleeding person, a severed leg, whatever it is, they do whatever they do, and then they have to write up what they did and pass that on with the patient to wherever they're going next, right? Whatever part of the hospital they're going to next. It turns out doctors are terrible at this. They leave stuff out, they make mistakes, they forget things, they do an awful job.

Bruce:

So the hospital experimented with having an AI that passively listened in the emergency room to all of the noise and the chaos and everyone's screaming and talking and saying things, and wrote up this after-event report. It probably has a name. And then the doctors would look at it. They would correct it. They would approve it.

Bruce:

It would go on. It was orders of magnitude better. The doctors loved it. And they would say, Oh my God, I forgot that we did do that. It made fewer mistakes.

Bruce:

So there's an example where the AI is way better than a human. There are gonna be other examples where the human is much better. I mean, you probably don't want an AI as your doctor, because a human doctor is better. But again, compared to what? You know, fast forward to some parts of this planet where there are no doctors, and your choice isn't an AI doctor or a human doctor; your choice is an AI doctor or nobody. And there you might say, well, give me the AI. It's gonna make mistakes.

Bruce:

But you know, my brother is even worse. He'll make more mistakes. He won't know what to do at all. So it's very context dependent and depends on what you're doing. The AI that's feeding you your TikTok feed, I mean, who cares if it makes a mistake?

Bruce:

If it's an AI that's, you know, putting policemen on the street, yeah, mistakes matter more here. So there's never one answer to that. It always depends.

Chris:

It seems to me that those are great examples, and, tying back to the book, they make the point that democracy is kind of understood as an information system and AI is operating on that system. And as you just pointed out, these capabilities, where you're talking about agents and chat capabilities, are able to capture what's happened much better than the humans will and not miss things. As you're looking at things like free speech and so on, how would you take that emergency-room monitoring and apply it with AI agents in the context of free speech? And how does it change it?

Chris:

How does it amplify or tamp down on it? What does it mean for us?

Bruce:

You know, I don't know if it affects it that much, basically because it's already so bad. You know, we're already living in a country where money equals speech. And the more money you have, the more speech you have, the more power you have. You know, these social media algorithms, even pre-AI, very much affected how you were heard and what you heard.

Bruce:

I don't know if AI makes much of a difference, right? Astroturfing, the notion that a company is gonna fake a grassroots movement, is way older than AI. I mean, they can use AI to do that, but that is not an AI problem. And a lot of times in our book, we come across problems that AI exacerbates but doesn't cause. And in all those cases, we know the solutions.

Bruce:

They're not hard technically; they're just hard politically. And again, depending on the country, you will do different things. So I'm gonna pull an example from Germany. Germany has a lot of political parties, and it's actually kind of confusing to know what they stand for.

Bruce:

And for decades they have had a system where some government agency would summarize the political parties, and voters can go to this nonpartisan voter guide and figure out what the parties stand for. Last year, they experimented with a chatbot to do the same thing. So instead of going to a static webpage, you would have an interactive conversation with an AI that would tell you kind of what each party stood for and likely who you'd support. Younger voters liked that. Now, there's something where it's not gonna make mistakes.

Bruce:

It's working on a very confined dataset. With these systems, we're learning that if you constrain them, they don't go wildly off script, because the script is very narrow. So we are now seeing that you don't want a massive AI model. You want an AI that's a good travel agent, right? Or a good investment counselor, or, you know, a good something else.

Bruce:

So this is all changing. The hallucinations, the making mistakes: we talk about mistakes and what they mean. And again, it always depends. Compared to what? And I say that again and again. Right, AIs aren't great drivers, but humans are terrible drivers, because we drink alcohol and we get tired, and AIs don't.

Sponsor:

Well, friends, Perplexity builds AI that can answer almost any question, Miro powers visual collaboration for millions, and Mixpanel processes billions of analytics events every single day. And none of them use engineers to update their marketing websites. Think about that for a second. The companies pushing the boundaries of what software can do decided the smartest move for their .com was to get engineering out of the loop entirely.

Sponsor:

Well, that's Framer. It's a website builder that works just like your team's favorite design tool: real-time collaboration, a robust CMS built for SEO, integrated A/B testing, the works. Changes go live in seconds with one click. No ticket, no sprint planning, no "yeah, we'll get to it." And this isn't some startup compromise. We're talking about enterprise-grade security, premium hosting, 99.99% uptime SLAs.

Sponsor:

That's four nines. The infrastructure is serious. The workflow is just fast. Learn how you can get more out of your .com from a Framer specialist, or get started building for free today at framer.com/practicalai for 30% off a Framer Pro annual plan. That's framer.com/practicalai for 30% off.

Sponsor:

Framer.com/practicalai. Rules and restrictions may apply.

Daniel:

Well, Bruce, one of the things I think about immediately when I hear words like democracy or citizenship alongside AI is questions of power. You even, I think, talked about a force multiplier or power multiplier. And AI seems to have this tendency to, in some ways, centralize power, in the sense that if you have the data centers, if you have access to larger cards, if you have a certain scale, you can maybe do things. Do you think that's an inevitability of the technology? And the other element I'm thinking of here is that the drivers of research on the AI side are really the corporations, the larger tech companies that are driving cutting-edge research, versus traditionally kind of the academic side.

Daniel:

Not that the academic side is not doing good research, but you see a lot of this good and cutting-edge research coming out of industry, maybe where there's more access to compute or other things like that. So any thoughts on this dynamic of centralization of power in relation to AI, and how that affects the question of democracy and citizenship one way or the other?

Bruce:

So it's 100% not a function of the tech. It's a function of the way we choose to use tech. It's a function of the market. I mean, this tech does not have to be concentrated in the hands of, you know, five monopolies in The United States. We just chose it that way.

Bruce:

And the monopolies are powerful. And of course in The United States, money equals policy. If you're rich, you get the policy you want. So that is being perpetuated, but it's changing naturally. I mean, DeepSeek taught us that you don't need the cutting-edge chips and all the money to make a competitive core model.

Bruce:

In the book, we advocate something called public AI, which is AI models not built by corporations, not built under the profit motive. In our book, it's largely theoretical; we talk about how this could be done. The neat thing is, a few months ago, it happened. Switzerland, ETH Zurich and their supercomputing center, with funding from the government and other organizations I forget, produced a core model, Apertus.

Bruce:

It was not corporate funded at all, not built on the profit motive: no illegally stolen training material, no poorly paid third-world labor for fine-tuning. It is free to use. You can go online and use it right now. It is competitive with the best models of last year. It's a little bit behind, but you're not gonna notice it.

Bruce:

And here's an example of doing it much cheaper. And my guess is we're gonna have 20 of those in a year or so. I think the dominance of the big corporations, these hundreds of millions of dollars for core models, we're gonna laugh at that in a few years. It's turned out to be much cheaper. You don't need to spend all this money and all this compute; you can be smarter.

Bruce:

And especially, we're going to need models that are more specific. We're gonna need a model that is sort of a good physics teacher. We're gonna need a model that is a good, you know, restaurant chooser for me, right? You know, something that will be my agent.

Bruce:

And you know, my butler model is gonna call any of these dozen or two dozen special models, any of what we want. And in this world, the Claudes and the GPTs and sort of all these massive models become archaic. So I don't know if this is true; that's my guess. But there's nothing in the tech. These are corporate decisions. Just like the models don't have to be overconfident.

Bruce:

They don't have to be obsequious. They can say, I don't know. I mean, the tech allows them to say, I don't know. The tech doesn't require them to produce, you know, child porn on demand. I mean, those are choices by corporations.

Bruce:

And, you know, we can make other choices. It's hard when we're living in a world where corporations run the planet, but it is technically possible.

Chris:

I'm curious. I think that's a really interesting idea. And I think it's one of those things where we have a bias: we automatically think of the good option as being open source, but open-source and open-weight models are still, you know, trained on corporate dollars and trained with corporate information access, as you just pointed out. And so you have planted the seed in my own mind of kinda going, what if? Can you talk a little bit?

Chris:

What's playing in the back of my mind is, like, how would such an ecosystem come to fruition, especially in the political climate that we have right now? There are many places in the world, but as we sit here in The United States, there are forces of influence being applied in all sorts of different ways. How would you create such an ecosystem to equal what the Swiss have done, in The United States or a similar nation that doesn't get those fingers of power applied to them? You know, it may not be corporate dollars, but there are so many other ways. How do you envision that coming about?

Chris:

I think it's a wonderful dream, but I will confess I'm struggling to see how we get from here to there.

Bruce:

Yeah, The US doesn't do this, of course not. I mean, we can't even fund, you know, the government weather service anymore. I mean, The US is done. Just assume that the US government will just decay and not do any of this. It probably couldn't before, just because we believe corporations should do all the things. Other countries don't have that bias.

Bruce:

But you in The United States could use the Swiss model, right? I mean, the ecosystem is gonna come from the ground up. You know, I know people who are using DeepSeek on their phones. It's not fast, but it's on their phones.

Bruce:

So I think the environmental concerns will largely disappear as these models get smaller and easier to run. We saw this with search. Search used to be environmentally expensive. Now it's very cheap.

Bruce:

I think The US will be dominated by the tech giants, by the monopolies. That's not gonna go away, but other countries are trying to divest themselves. Europe's trying to build an entire tech stack. Singapore is working on a public model, because Southeast Asian languages get short shrift in these American models. They have a model called SEA-LION, Southeast Asian Languages in One Network or something, that is optimized for their region.

Bruce:

And that's what they're doing. France built a model, I mentioned it before, that is optimized on French law. Taiwan's having a problem because the models in Chinese are largely trained on Chinese data, which is often translated from Russian. And, like, they use the wrong words for democracy in these models.

Bruce:

So there's a lot of politics here. You go on Hugging Face right now, there are two and a half million public-domain models. A lot of them are, you know, people personalizing Llama, and a lot of them are small and toy models, but there's good stuff there, and that's gonna continue. So to me, the hope is this happens from the ground up: as this becomes cheaper, more organizations can do this. Switzerland is not a big country.

Bruce:

You know, I am spending the year at the University of Toronto; I'm normally at Harvard. Canada could do this. They pledged a couple of billion for AI infrastructure. They could build their own model.

Bruce:

OpenAI is offering to house an instance of their model here. Say no to that. You can build a Canadian model. You don't have to rely on Llama or, you know, Claude or any of The US models. So I don't know, but my hope is this happens naturally as things get cheaper.

Bruce:

The normal way I interact with these models is Perplexity. And that just gives me a choice of models, seven or eight I can choose from. I can imagine there being a hundred I can choose from. I can imagine there being a Perplexity scheduler that looks at my query and decides, you know, which model is best for me.

Bruce:

I mean, there are some I've paid for, and we can make up how this works. Apple, you know, is trying to figure out how to put models on people's devices. I mean, they are the privacy-preserving one of the big tech monopolies, right? Sort of uniquely of all of them, they don't make money spying on you. They make money selling you overpriced electronics.

Bruce:

So they want something for that overpriced electronics to do, and putting the AI model on the object will be that. So I think a lot of things are happening. We just had a few years where only a few could do it because it was very expensive. And I think that is naturally gonna change.

Daniel:

And I'm wondering, from your perspective, as you've seen the industry evolve: we talked about one level of evolution of the industry, which is the model, right? Which is evolving in the ways that you've mentioned, and different types of models are coming out and that sort of thing. But there's this element outside of the model, as we've seen these agentic systems develop, which is all the things around the periphery of the model. That could be data sources, which brings up a kind of access-to-data element. It could be other systems, right? Like, I want to analyze my tax return, and so I need access to some, you know, maybe IRS system or something like that.

Daniel:

And then there are interactions with maybe non-AI computer code or rule-based systems that are tied into these agentic things. How do you see that element developing? Because there's this infrastructure or access side of things: if you have access to a good model, it might not mean that you have access to the right systems, which are kind of their own force multiplier to the model.

Bruce:

So I think there's a lot here. I mean, you know that there's a lot of software between what you type in the chat window and what the model receives, and a lot more software between what the model produces and what you see. And all of that middleware is where the alignment rules get applied. I mean, a lot of things get applied where the system figures out what the person wants. You know, AI doesn't get math wrong, because the math is grabbed outside the AI and just done.

Bruce:

So the AI doesn't mess that up. So I think there's gonna be a lot there. Access really feels important here. I think we're gonna have haves and have-nots: people who have access to good models and people who don't.

Bruce:

But you know, remember that a lot of AI is gonna be thrust on people. They're not gonna be choosing to use it. I mean, when you go on Facebook, you don't choose to use AI; you're forced to use AI. Microsoft is doing its best to force you to use AI whenever you use a Microsoft app. Google, you know, when I do a Google search, I get an AI answer whether I want it or not.

Bruce:

There's no way to turn it off. And, you know, lots of AI is predictive AI. We are seeing that health insurers are now using AI to approve or deny claims. I mean, I didn't choose that. I didn't decide that, but that is happening.

Bruce:

So I think this is a complex ecosystem. I think the AI interacting with the non-AI parts of the system is really important. Who has access to what is really important. You know, I think AI in medicine will be really interesting. Largely you won't get to touch it; it'll be researchers using AI.

Bruce:

We had an AI win a Nobel Prize last year, basically for protein folding, right? Not a chatbot. This is an AI that is really solving complex math problems. And, you know, this has to do with the fact that an AI can keep more variables in mind than a human. It can do more complex things.

Bruce:

So it might not be better, but it just might, you know, do more complex things. Another example: AI is now looking at your email and finding spam, right? And it turns out trained humans are way better at that than AIs. But if you get a million emails a second, a trained human is not an option. So there is another sort of system where AI is gonna work even though it's not better than a person.

Bruce:

And that only works in conjunction with your email program. I think you're gonna see AI in various forms integrated into all sorts of systems in all sorts of ways. But it's really important, I think, for anybody listening to think of it as more than chatbots. Chatbots make the news, but it's the non-chat AI that, I think, does more things.

Sponsor:

So here's the thing about AI strategy. Everyone has one. Decks get presented, pilots get proposed, and then nothing ships. Meanwhile, 300,000,000 AI tasks have already been automated through Zapier, not talked about, automated, actually running right now. That's the gap Zapier closes.

Sponsor:

It's an AI orchestration platform that connects models like ChatGPT and Claude to the tools that your team already uses, so you can use AI exactly where you need it. AI-powered workflows, autonomous agents, customer chatbots: whatever you're trying to build, you can orchestrate it with Zapier. And here's the part that might surprise you: Zapier is for everyone. Tech expert, developer or not, you don't need to know ML or AI or be an engineer to wire up workflows that actually work.

Sponsor:

That's kind of the point. Personally, what I like most about Zapier is that it's automation infrastructure. I don't have to babysit it. There's no worrying about uptime. There's no managing servers.

Sponsor:

I create Zaps as I find a need, test them, and then walk away and get back to doing what I do. That's the dream. Right? So if you're tired of talking about AI and ready to put it to work for real, Zapier is how you break the hype cycle. Join millions of businesses transforming how they work with Zapier and AI.

Sponsor:

Get started for free today by visiting zapier.com/practicalai. That's zapier.com/practicalai.

Daniel:

Well, Bruce, you talked in the book, and in the research as you developed it, about what is happening across different things that intersect with democracy, whether that be the courts or government agencies, political campaigns, and across the world. I'm wondering, on the positive side of things, in terms of the change that we might be about to see within government, policy, and democracy: what are some tangible examples that stand out to you, either aspirational or ones you've actually seen and researched, that could be that force multiplier on the positive side, the side where I am pro-democracy and I'm gonna use this as a multiplier in that sense? Maybe just a couple of standouts from your perspective.

Bruce:

I'll start with two examples. One is from Japan. There was, gosh, basically a kid, Takahiro Anno. A couple of years ago, he was running for governor of Tokyo. He's kind of a young engineer.

Bruce:

He comes in fifth out of 50, crazy. And what he did was build an AI avatar of himself that answered questions on YouTube. That's how he became known. I mean, a really neat idea of using AI to interact with the voters. That would have been just a weird anecdote in history.

Bruce:

But last year he won an election, and he is now a member of the upper chamber of their legislature. He has a new political party called Team Mirai, which is like Team Future. And he is using AI to interact with his constituents. They can discuss legislation and priorities, the input gets synthesized, he learns from it. He talks to his constituents.

Bruce:

He is building tech tools for all of the Japanese parliament to use AI to interact with voters. It's an amazing story of AI being used to make democracy better. That's one. Second story, I'm gonna go to California. There is a group called CalMatters, and they are a political watchdog organization.

Bruce:

You can find them on the internet. And what they do is collect every public utterance of California elected officials. Every floor speech, every campaign email, every tweet, everything. And they make it available, and you can search to find out what your politician is saying. Something they added last year was an AI feature called TipSheet.

Bruce:

What the AI does is go through all that information, oh, I'm sorry, which also includes the voting records and who pays them, the campaign contributions, and find anomalies, things that are weird. But it doesn't publish them. It makes them available for human journalists to look at. So a journalist can go to this tip sheet. It's not available normally on the website.

Bruce:

You can't go to it. I can't go to it. Journalists can. And they find things. The AI says, Hey, look at this. And then the human researches the story and sees whether there's a story there.

Bruce:

A really great way that AI is assisting human journalism. So I'm actually gonna do one more, a third. This one is from Brazil. Brazil is an incredibly litigious society, even more so than The United States. The country spends something like 1% of its GDP paying litigation against the government.

Bruce:

A couple of years ago, the courts started using AI, not to make decisions, but to manage the courts: to assign judges to cases, to move documents along, to do all of that stuff. It turns out it made the courts much more efficient. It used to take two or three years to get judgments. Now it's faster. The flip side of that is that the attorneys are using AI to file more cases.

Bruce:

So cases are being dealt with more efficiently, but more cases are being filed. But still, that's a story of more justice, more democracy. So those are just three. I mean, we could mention Germany, France, Chile, Taiwan, lots of examples from around the world.

Chris:

As I'm listening to you, those are very inspirational. I really like those examples. One of the things I'd like to do for a moment is take us slightly out of the book and over your career. I think I mentioned to you in the pre-show communications that I had learned cryptography from you, and I think of you as someone who can provide great guidance.

Chris:

And I wanna put you kind of in the moment, as we're looking at this world where AI is just, you know, we have so many agents and so many capabilities. We've been talking about some of those. And in a lot of cases, AI is kind of applied through coercion, via products, where you may not want it, but it's nonetheless there. In other cases, we have huge parts of populations, and this isn't just a US thing, this is a global thing, where you have the haves and have-nots.

Chris:

You have people that are rejecting AI. It's impacting jobs, and therefore livelihoods. And you have the power concentration that we talked about earlier in the show. As you look, not to people maybe like me and Dan, who are, you know, steeped in AI everything all the time, but to people out there that are trying to navigate this really rapid evolution, they may express that in various ways. They may express it politically in certain ways, but there's a lot of anger, there's a lot of resentment, there's a lot of shame, in terms of the world-is-leaving-me-behind kind of notion.

Chris:

What can people in those situations do? How do they start thinking about AI if they wanna move from where they're at, which feels like being left behind, to a world where they can still integrate into it, even with these changes evolving? I think it's one of the big questions of the time, and so if you wanna take a swag at it, I'd love that.

Bruce:

I don't wanna be overly optimistic. This is gonna be bad.

Chris:

Okay.

Bruce:

And this is gonna be on par with the industrial revolution. Careers are gonna disappear. And this time it's gonna be highly paid careers. Like all the apprentice professions: doctor, accountant, lawyer, architect, investment advisor. I mean, those are all predicated on you go to school, learn how to do the thing, be a junior doer of the thing, and work your way up to a senior doer of the thing.

Bruce:

But if all the junior doers of the thing are AI, how does that pyramid even work anymore? Nobody knows. So I think this will be extraordinarily disruptive, and it'll be, you know, jobs we don't expect. It'll be like, we only need 10% of the lawyers now. So what do all the law schools do?

Bruce:

They're churning out a lot of lawyers. So I don't wanna minimize this. I think this is really going to be disruptive. And that's why people who are actually thinking are thinking about, you know, ways to untether life from employment. Whether it's something radical like a universal basic income or some no-brainer like, you know, tying your health coverage to your citizenship rather than to your employment.

Bruce:

A lot of things are gonna have to change. I mean, what can individuals do? It depends who the individual is. And yeah, we're gonna have to really think about job retraining at a massive level. We might be living in a world where we just don't need people to work in the same way that we used to.

Bruce:

It used to be, you needed to work to survive. Now maybe you don't. And this gets back to some kind of UBI, right? If the machines are doing all the work, you know, why are the 1,000 billionaires in the country benefiting and not everybody else? So I don't know.

Bruce:

I mean, these are questions way bigger than me, and I don't have a good handle on them. Some of it is we don't know how much this will happen, how fast it'll happen. I mean, right now, as Cory Doctorow points out, you know, it's not that AI can do your job. It's that AI can convince your boss that it can do your job. So the AI gets you fired, your job gets done lousy, and now no one knows what to do.

Bruce:

That's gonna happen. So separate out the tech from the politics. You know, we live in a world with huge income inequalities, you know, in ways that make society unstable. And this is gonna exacerbate that. So again, we're in a world where AI doesn't cause the problems, but AI takes our existing problems and makes them worse.

Bruce:

We have to solve our existing problems. And then we have to get to climate change, right? These are the precursors to solving the actual planetary problems. It's nutty that we can't solve sort of the humanity problems, because we're stuck with democracy collapsing around the globe and income inequality running amok, making the ability to solve the other problems sort of untenable.

Chris:

Yeah. Throughout all of human history, civilization has been built around the basic premise that we measure human value by productivity in some form or fashion.

Bruce:

You know, but not really. That's very much a, you know, European Protestant, post-US-revolution way of thinking. There's lots of other ways we can do it. It's just that the money won, and we in The United States in, you know, the mid twenty-first century can't conceive of anything else. But any attempt to go from here to any place else is gonna be fought by the money.

Bruce:

I mean, this is, I think, a much bigger but more inherent problem: you don't get change at the societal level, ever in history, without serious bloodshed, because those in power like power and wanna keep power. I'm hoping it'll be different, but I don't know.

Chris:

I think that's kind of what I was getting at, the notion that that has to change going forward.

Daniel:

I guess, out of that spirit of what needs to change, what are the problems that we need to address? As we come to a close here, we have a lot of listeners that are builders, practitioners in the industry in one way or another, whether that be a software developer or an AI person or whoever. How would you leave us, in terms of the things that we are building, the way that we are contributing to this industry and shaping this industry? What would you leave us thinking about, I guess, as we come into this new year and participate in this rapidly changing environment?

Bruce:

Now, we have a lot of power. You know, go back fifteen, twenty years: tech workers in general had a lot of power, a lot of power over the companies they worked for, because they could always leave, and they'd get a new job in ten minutes. And so what they said, what they wanted to work on, what they refused to work on, mattered. That has changed largely in big tech. I mean, there have been a lot of layoffs in these tech companies, a lot of people looking for jobs.

Bruce:

It is no longer a seller's market in the way it was, but it still is in AI. People who are AI researchers, AI engineers, have a lot of power inside the corporations. And I want us to be more of a moral compass. I want us to say, no, we're not gonna do this. I mean, if you remember, however many years ago, when Google employees staged a walkout over a project they did for the Department of Defense.

Bruce:

And that's kind of the peak of tech worker power.

Daniel:

Maven.

Bruce:

Yeah, Project Maven, that's right. So I want us to do that. I want us to be more involved in the effects of what we do. It's hard, and I know that. I mean, this is very general technology. I said in the beginning, this is power-enhancing technology: the technology doesn't know what power it's enhancing, doesn't have a moral compass.

Bruce:

It will do more of what you tell it to do. Whether you're a good person or an evil person, it'll do more of whatever that is. But you know, these changes are coming. I think they're gonna come, you know, both slowly and then quickly. You know, right now we're at kind of a plateau, right? The new models aren't much better than last year's models.

Bruce:

We're not seeing improvement in ways that matter in these large language models. I think we're still seeing lots of improvement in the predictive models, in the non-generative models. But there are all sorts of new paradigms being researched. It's unlikely that transformers are the pinnacle of the AI data structure from now until the end of time. That seems, you know, implausible, right?

Bruce:

So there will be other things. You know, let's try to make them benefit humanity rather than, you know, a bunch of, you know, white male tech billionaires in Silicon Valley.

Daniel:

Well, I really appreciate you guiding us to those thoughts at the end here. This has been an extremely interesting discussion for me. I appreciate you putting in the work on this book, exploring these ideas, doing the research, and taking time out of that research to have this conversation with us. It's been a pleasure, and we really appreciate you joining.

Bruce:

Well, you know, it's no fun doing research if nobody reads it or listens to it or pays attention to it. So that's why we have these conversations.

Daniel:

Yeah. Well, thank you, Bruce. Look forward to having you back on the show in the future. Thank you very much.

Bruce:

Excellent. Thank you.

Jerod:

Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Jerod:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.