Practical AI

U.S. Congressman Don Beyer returns to Practical AI for another far-reaching conversation with Chris about many of the most important AI challenges facing America and the world. Blending political savvy and statesmanship with his unique technical understanding as an active Ph.D. student in AI at George Mason University (making him the coolest member of Congress!), the congressman shares his perspective on the really hard AI questions that you would have asked him yourself. Together, Congressman Beyer and Chris explore AI regulation, cybersecurity concerns sparked by advanced models like Mythos, bipartisan AI governance efforts, and the growing AI race between the U.S. and China. They fearlessly dive headfirst into AI-driven job displacement, mass surveillance, autonomous weapons, existential risk, and the philosophical questions surrounding consciousness and superintelligence as AI continues to accelerate. This is an unusual and insightful conversation you don't want to miss!

Congressman Beyer was previously on Practical AI episode 271 on May 29, 2024:
AI in the U.S. Congress


Creators and Guests

Host
Chris Benson
Cohost @ Practical AI Podcast • AI / Autonomy Research Engineer @ Lockheed Martin
Guest
Don Beyer

What is Practical AI?

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).

The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Narrator:

Welcome to the Practical AI Podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Blue Sky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Narrator:

Now onto the show.

Chris:

Welcome to another episode of the Practical AI Podcast. I'm Chris Benson. I am a principal AI and autonomy engineer. And today we have a special guest who was previously on the show a couple of years ago. If you haven't already seen that episode or recognized him up front, this is Congressman Don Beyer of Virginia, who, in addition to being a congressman, has an incredible background in AI, which is obviously why we're having him on this particular show today.

Chris:

Welcome back to the show. It's great to have you.

Don:

Chris, thank you. I'm flattered that you invited me back for a second time.

Chris:

Well, the first time was very inspirational. I know it's not the primary topic, but I remember one of the things that really had an effect on me was that you were in a PhD program in AI at George Mason University, and I would bet that most members of Congress don't delve into such things. So whether you like it or not, I think that makes you the coolest member of Congress, period, the fact that you're doing that. So thanks for coming on the show to talk a bit about the world of AI and how it touches you and your primary job.

Don:

Yeah, thank you. It's really fun. I'm spoiled because I live so close to the Capitol; Northern Virginia is right across the river, so I don't have to be on an airplane for eight or ten hours a week like most of my fellow congresspeople do.

Chris:

Fair enough. You've got those extra few hours to work on that PhD program. And we got a lot of really positive feedback when you were on a couple of years ago. So, anyway, welcome back. You know, the landscape of the world has changed dramatically since the last time we talked to you.

Chris:

We have a new administration in, versus President Biden, who was in office back when we talked. We now have President Trump. We were talking about a whole set of public policies that were being developed at the time, and I know that that has changed. This administration has thrown out a lot of the work that had been done prior and kind of gone their own way. I was wondering if you could start out by just laying out the landscape as you look at it as an AI expert who is in Congress.

Chris:

How has the world changed from your perspective? What's the same? What's different? And how has that changed how you're looking at things and acting upon them?

Don:

Well, Chris, almost nothing's the same, just because AI is accelerating so very, very quickly. In terms of the Trump administration itself, it's sort of a mixed bag. On the one hand, the new president threw out Joe Biden's executive order on AI, which was, incidentally, the largest executive order ever written by a president. But then he instituted his own, which was largely the same. Probably the most important thing for me, and I think for most people on the planet, is that Donald Trump saved the safety institute out at NIST, the National Institute of Standards and Technology.

Don:

He renamed it, it's now called CAISI, kept a lot of the same people, and changed the leadership, but that's normal. So at least there is some safety perspective within the current administration. He brought in David Sacks to be the AI czar and Michael Kratsios to be chief science advisor. That was interesting because it basically brought in two business people rather than scientists, both of whom made a lot of money in Silicon Valley. That was different from the scientists that he had before.

Don:

You know, for example, the head of OSTP was a distinguished scientist right out of Stanford. There was a push in the first year or so of the Trump administration towards, you know, full-blown accelerationism. You know? No new laws. No restraint.

Don:

From the defense perspective or the intel perspective, it's like, we have to beat China. But it was also mixed, because at the same time, Trump decided to sell a bunch of the H200 chips to China, which was the opposite of what Biden did. You know, Biden tried to restrict China's growth in AI by withholding the best NVIDIA chips, and Trump reversed that for other reasons. So it's a complicated scenario. And then with the Anthropic battles over the use of Anthropic in Iran, even more so with Mythos' introduction a few weeks ago. The details weren't laid out publicly to you and me, but they were laid out to me, along with what was happening with it.

Don:

All of a sudden there was this wake-up call within the administration that AI is progressing so quickly, it could endanger all of the cybersecurity measures that American companies and the American government have put in place over the decades, and that they really have to pay attention to the security and the safety sides of artificial intelligence.

Chris:

Yeah. It's Mythos in particular, you know, with kind of the shock. It was interesting. You know, the administration was very much in battle with Anthropic for a little while. Then Mythos came out.

Chris:

They seem to be backing away from that slowly. At least that's how it seems to come across in the comments out of the White House. I'm kind of curious, from more of the AI perspective on this: as we have seen each advancing model and supporting technologies, such as the various agentic harnesses, come out, once something is out, you continue to have development from other companies and such. With Mythos coming out, and the scramble to address the safety concerns it brings, but also the recognition that you're probably gonna have other models with similar capability evolving over time from various companies, whether domestic or foreign, how are folks thinking about such things, about the evolution from Mythos on? What is the safety picture there?

Don:

Yeah, Chris, I think very much so. You know, I had been in the more comfortable position of thinking that all of our protections against cybersecurity intrusions, layered password protection with multiple layers of defense, were gonna keep us safe until quantum computing came. But no, one of the things that Mythos did was to look deeply into how the protection efforts were created and begin to unravel them, unfold them, right away. And yeah, sure, Anthropic can do it, but you've got to figure OpenAI and Gemini and such are close behind, the Chinese too. I thought that Anthropic was very responsible in giving the code to a handful of people to anticipate how we were gonna have to strengthen our security devices ahead of any widespread dissemination of the Mythos software.

Don:

Hopefully, we can get a running start before the other people catch up. It's like any arms race. It's not going away. This is gonna be step-by-step accelerating for the indefinite future.

Chris:

Yeah, I mean, just that, and there are so many topics to hit here, but with Mythos, as other competing capabilities come out, some may just be released without any of that, you know. There's a point here where Anthropic talked about kinda holding it back for a while, but I think the presumption is that Mythos will be generally available at some point, and likely competing models too. So it kind of seems to be changing. I know, coming from where AI merges with the software development world, it's definitely changed how everyone's looking at their cyber position.

Don:

Oh, absolutely. And it would not be surprising to me, Chris, if there is a wholesale rethinking of how cybersecurity works. If we look and say that the tools we've used are not gonna be effective anymore, we step back two or three or four steps and think about what we need to protect. And even with the question of what we need to protect: do we choose to protect much less, and do we protect it in very different ways?

Don:

It's a phase shift. Really big picture.

Chris:

Absolutely. So as we step back to the notion of government, we're seeing a lot of conversation around regulation. There is an ongoing debate about where regulation should occur, whether at the federal level or at the state level here in the United States, and, you know, a little bit of a struggle. Can you talk a little bit about how you see that landscape in terms of regulation? Who should be doing what at what levels of government?

Chris:

And what is a sensible approach to that, and are we doing that or not?

Don:

Well, I think in the largest picture, the sensible approach is a new Geneva Convention, where we get together with the Chinese and the Europeans and the people in the Middle East and anyone who's doing important work, and try to figure out guardrails that work for everyone. As people have fairly pointed out, if we have this beautiful regulatory system in the United States and China has none, that's not gonna work in the long run. In the meantime, we have a more continental concern, and that's: should the federal government do the regulation, or can state and local governments do it? I'm very sympathetic to the argument, probably best made by my friend Jay Obernolte, who's a Republican congressman from California, that we shouldn't have a Tower of Babel, with Virginia's regulations being different from California's being different from Texas's. However, at the federal level right now, we've basically done just one bill, and that was Ted Cruz's Take It Down Act, which gave us the ability, when someone puts evil sexual imagery of you, Chris, up on Facebook, you can

Chris:

Don't horrify the audience with that. Oh my gosh.

Don:

You can demand it be taken down, and maybe even have a cause of action to sue whoever put it up. That's a good thing. But it's the only one that we've done. And you're familiar with the bipartisan task force that Mike Johnson and Hakeem Jeffries had. We had 80 specific legislative recommendations, and so far we've done one of them.

Don:

We look at Congress's total inability to do anything on social media over these last two and a half decades and say, yes, we need the national framework, but in the absence of one, we should not restrict state and local governments from doing the best they can. And there are interesting things. SB 53 in California, I believe the governor vetoed it or amended it, but it became law, is an important first step in understanding how to regulate artificial intelligence. There's a guy named Alex Bores in New York, a member of the Assembly in Albany, who again is out there trying to think of really important ways a state can make a difference. That shouldn't be the end game, but it's probably a good place to start.

Don:

The silver lining is that state governments are laboratories of democracy. They can move much more quickly. They don't have filibusters and things like that, and then maybe we can learn from the, I guess, over 700 pieces of state legislation on AI that are out there right now to build where we go. I hope it doesn't take ten years. Maybe it should take two or three years.

Don:

What it will take, Chris, is an administration that wants to press forward with meaningful, light-touch regulation at the federal level. That has not come from President Trump and this administration yet. We do a lot of stuff on not taxing tips or overtime, but nothing on AI regulation in Congress.

Chris:

Yeah. So I'm kinda curious; it raises a question. Maybe I'm failing to see it, but is there something inherently partisan in AI?

Chris:

And this is me acknowledging that I am not a political figure and that's not where I'm spending my thinking time, but is it perceived as a partisan topic in general and, you know, in the large?

Don:

I think there's a danger that it tilts that way a little. You know, our little task force was completely bipartisan. We're trying to do everything as bipartisan as we can, because this really affects every person. It should be much more like, you know, our defense posture, which has typically been very bipartisan, or our foreign policy posture. The one thing that complicates it, Chris, is that typically Democrats have been more inclined to regulate and Republicans have been much more inclined to deregulate.

Don:

And so when you use the word regulation, or even the idea of putting restrictions around what artificial intelligence can do, can be, or how it's used, it's gonna stir a little bit of that D versus R. We have to do the best we can to overcome it.

Chris:

Gotcha. Could you talk a little bit about that? For those of us out here who are not in government, you know, we see it on the TV, we see it in the browser and such, and I think we hear so much divisive language. You talked a little bit about the ability of Democrats and Republicans to kinda come together and try to agree on topics, and you talked about foreign policy. Within AI, what, if any, are some of the areas where you see both sides of the aisle working together?

Chris:

Are there any, first of all? And is that something that is working at any level? I recognize that the current administration kinda wants to go their own way on a variety of topics, but is there any silver lining there?

Don:

I think so, Chris. I certainly want there to be. Let's just think, for example, about one of the big concerns with AI: surveillance. This came to the fore especially when DOGE came in and copied a bunch of Social Security and tax records and loaded them, we think, into Grok, Elon Musk's thing. I know my Republican friends well.

Don:

I'm a Democrat, for those who didn't know. The last thing they want is a central government that knows everything about them. I mean, that's the reason my Republican colleagues have not wanted gun registration all these years, because that means the government then knows exactly who owns what gun, and they can take it from you. I don't think they want the government to know everything about us, you know, our habits, the books we read, what time we go to bed, tracking our location through devices in our cars or our phones. So I think there's agreement there.

Don:

Certainly there's agreement on the sexual imagery, the misuse of generative AI visually, etcetera. And then I think where there's probably the greatest concern right now is job displacement. We're all familiar with Dario Amodei's predictions about 25% to 50% white-collar job displacement in the next two to five years. You know, different numbers are out there, but we all know that this is not gonna be at the speed of the agricultural revolution or the industrial revolution, which took place over decades or a century. This could be two to five years.

Don:

I had a speech opportunity this morning with the enrolled agents of America, all the folks that do our taxes, CPAs and small accounting firms, and it was very relevant to them that all of a sudden, if all those functions like accounts payable and accounts receivable and payroll production are all handled by agentic AI, what does that do, not necessarily to their jobs, but to the people who work for them? And just extend that through the entire 18,000,000 white-collar workers we have in America. Once again, we did a terrible job of adapting to the job dislocation in the manufacturing sector that came both from trade and even more from technology. So you have all these wiped-out former manufacturing towns, especially in the Midwest, but around the country. Our so-called trade adjustment assistance didn't do a very good job of finding them new ways to be productive.

Don:

They lost the dignity of work, and that's a big challenge that both Dems and Republicans are facing.

Chris:

And what's the thinking? I mean, that is a topic we talk about on this podcast all the time in terms of concerns over jobs. We have holidays, and come Thanksgiving and Christmastime, when we're having extended family around, and none of my extended family are AI people other than myself, that is certainly the topic that everyone is worried about. Within our family, we have an array of different jobs that people are in, some blue collar, some white collar. So, I guess, channeling some of the questions I get from my own family that I cannot answer: is Congress, is government at large, thinking much about these problems? And where does regulation fit into this?

Chris:

Or things other than explicit regulation, you know. How are we considering that there is a worry at some level of this being a major issue for many families going forward? Kind of, where are we at on that? Where is Congress at on that? Could you share any of your thinking or your perceptions about that?

Don:

First of all, there are initiatives in Congress. Mark Warner and a fellow Republican, I'm not sure who, maybe Thom Tillis, have one in the Senate. I'm the co-lead, with I can't remember whom, in the House on a bill for a commission on the future of the economy, specifically based on this one question: what do we do if AI displaces massive amounts of federal workers? And I don't think that the tendency is towards regulation.

Don:

You hear people say, well, we should just say you can't use this AI technology to eliminate this job. That's probably not even plausible. Instead, we're asking: what are the investments we have to make to make sure that people still have, first of all, a means to live, and then second of all, and not unimportantly, something meaningful to do with their days? The great optimists, and I am a major AI optimist, can foresee a world with extraordinary abundance. Nick Bostrom's latest book on an AI utopia is worth reading: basically, if economics is the science of the allocation of scarce resources, we have a lot of things all of a sudden that may not be scarce.

Don:

First of all, just look at clothing. It's as unscarce as it's ever been. You go back two centuries and everyone wore the same set of clothes year-round. With all the energy, the fusion plants that are being built in America right now by Helion, by Commonwealth Fusion, the 44 companies that are racing to be first, you know, while we're still young, energy can be abundant, ubiquitous, and low-priced.

Don:

So what's gonna be scarce? Where that scarcity is, is where humanity will probably go. Obviously, it could be teachers, it could be care workers, it could be everything much more bespoke than it is right now. But then again, it may be that only a subset of the American people fit into those new high-touch, human-relationship type jobs, and then what do you do? As you know, some of the AI czars talk about UBI, which works well in Alaska, but in general, I think most people don't wanna be paid to do nothing.

Don:

I had an interesting conversation with Geoff Hinton last week, who suggested that if there really is that much abundance, let's just start with universal healthcare, free universal healthcare. That takes one worry off of most people's plates and still leaves lots of room for them to work and be productive in other ways.

Chris:

It would. I think, along this topic, certainly, I know a lot of our audience does software development and other AI-tangential jobs. When Opus came out from Anthropic late last year, at the 4.5 level, I guess, in late November, and Claude Code was out at the same time through last year and gaining steam, I know there was a perception of what we were experiencing and living: the 2025 way of writing software was very human-centric, maybe with AI assistance through various agents that were there. But writing software in 2026 has been a different experience. I think most people have come to recognize that it's almost a pair programming paradigm with your AI model.

Chris:

And aside from the technical aspect, I think there was a really strong psychological impact: this thing that we have been worried about, or that we've talked about for some time, has actually arrived, and we're having to change our behaviors and how we approach our own careers to accommodate it. That's obviously only one white-collar job out of many, many that can be affected, but I know in software development circles there has been quite a lot of conversation around upskilling. Ironically, I think that's an area where people can upskill fairly easily; if they were willing to get into software development, they probably can upskill well. What about jobs where people may struggle a little bit with upskilling for various reasons?

Chris:

Maybe it's the level of education they currently have, or whatever, and they need to step up. Are there any thoughts around that? We talked about universal basic income and stuff like that, but just the idea of being able to change something that you've been set in your ways about for a long time and get to a new reality, as people are adjusting to this rapid AI innovation that's occurring. Any thoughts around that? And I say that as, you know, someone who jumped into a PhD program yourself; you're upskilling yourself. Any guidance?

Chris:

Any suggestions that people might take?

Don:

Well, a couple of threads, Chris. One is that I think that as a society, we will be much richer. I really do believe in the abundance. I think one of the challenges we have, and I don't have an easy solution, is what we're already seeing as the concentration of resources in some segments. The rich are getting very, very rich, all the billionaires.

Don:

Indeed. And then we have a lot of people left behind. I don't wanna project in this podcast how we redistribute the income; income redistribution comes with enormous social problems. But we can't leave two thirds of the people behind in this.

Don:

Everyone has to be able to share in the abundance that's created by artificial intelligence. Then beyond that, the fact that the scarce resource, again, is gonna be services rather than things, we're all gonna have enough things, means that we don't have to be burdened down by roots and location. If you're in Johnstown, Ohio, and the steel plant closes or the automobile plant closes, it's tough to move. You've lived there all your life.

Don:

You own your home there. Your family is there. You can't just pick up and say, okay, I'm moving to Charlotte to get a new job, which is why the left-behind places have suffered so much. When you don't have to move, because things are much more relational and even information-based, it may well be that we can see growth in non-urban America, in suburban, small-town, middle-sized-town America, which I think would be a very good thing. In fact, even in Virginia, we're finding most of our growth in population is happening in rural Virginia, and that's being made possible by electronic communication, by all the communication systems that are out there.

Don:

A lot of this is post COVID. Look, forty percent of Americans work from home now, Chris. That's very different. Yeah.

Chris:

Case in point for me, certainly, most of the time. So, you know, that's very optimistic, and I like hearing it; there's so much doom and gloom around this topic, so I really appreciate you sharing what might, what hopefully will be, a path forward on that. Moving to some of the other concerns that people have beyond just the jobs arena, if you will, and going into things like misuse of AI, I guess there will be people on both sides of this equation, but we mentioned surveillance in passing a little while ago. Could you talk a little bit about what your thinking is around that? We've had mass surveillance for a number of years in different capacities, going all the way back to the Snowden revelations, where many Americans became aware of different levels of surveillance that maybe they hadn't been before.

Chris:

At this point, as we're looking at AI-enhanced surveillance, how does that relate to civil liberties, and how does that relate to law enforcement and other tangential topics? It touches on so many things. Can you share a little bit of your thinking around what you're concerned about, and maybe what you're not concerned about, on AI enhancement of surveillance and the step up of what's possible?

Don:

Yeah, it's interesting, Chris. You open the whole door of abuse, or just downsides. Surveillance is clearly one of them. We already know, because of various things happening in Congress, they set us all up with DeleteMe accounts, and you look at the number of data brokers out there that have enormous amounts of information about you and about me. The whole notion is that with all this information, every time you accept cookies or anything else, you're creating an ever greater profile of who you are that can be purchased by many, many other people. The idea that we are private people is gonna be more and more of a fiction, and I don't think that's good for our citizenship or for our own security.

Don:

I know my wife hates it, and how we reverse it is not clear. I keep searching the Internet for people doing interesting things. Tim Berners-Lee, the guy that created the World Wide Web, has a really interesting project he's working on. He calls it a pod, which would have all of Chris Benson's information in it, and you'd have to have permission to access it, or even maybe pay to get into it. And we begin to monetize our own personal information that everybody else is used to using.

Don:

Then you look at people like Meta, where the whole business model is getting information about you and me and then selling it. But that's one. The other, just thinking of other abuses: we already see how very sophisticated the fraudsters are right now. The trillions of dollars that old people especially lose to the people who scam them, and AI makes those scams ever more sophisticated, ever harder to detect. I seem to get one or two Evites a day from good friends that I make sure never to open, because I notice that I'm on a BCC, and I know that once I open it, it will open my system to somebody who wants to try to get in. Finally, well, not finally, but we talk a lot about Anthropic's debate with Pete Hegseth over the use of AI in autonomous weapons systems. Sort of from the beginning, DeepMind and OpenAI and Anthropic have all said they did not want their AIs used for autonomous weapons systems, that there needed to be a human in the loop.

Don:

But now we're facing, first of all, an administration that doesn't seem to want a human in the loop, and the reality that down the road, China, India, and North Korea could all have autonomous weapons, where they actively choose not to put a human in the loop. So if you have a weapon system with a human and a weapon system without a human, who do you think's gonna win? The moral and the ethical questions here are very deep and problematic.

Chris:

Yeah. I think a lot of folks grew up over the last few decades so used to a company putting products and services out with a terms of service that goes with them. There's a license or something, and I think a lot of folks I've talked to didn't understand what the problem was with the government complaining that a company put out a terms of service. Every company that has products and services lays out how you can use those products and services if they're licensed in any way. The government, more specifically the administration, seemed to have quite a concern with that, and I guess, if it's going to be a problem for them, why not just choose another vendor whose terms of service work? I think the fight that we saw between the administration and the company was a little bit confusing, because the company wasn't doing anything unusual, at least the way I see it.

Chris:

It had something saying, we have a product or service. You can use it in these ways; don't use it in those ways. Facebook does this; every company does the same. They have dos and don'ts with their services.

Chris:

Was this more of an ego thing, do you think, from the administration? Or why did they take the very antagonistic perspective that they did? That's been something I've been in a lot of conversations about: why did this even happen? Why not just say, okay, we can't use it that way?

Chris:

We're not gonna use it. We'll use it according to what the terms of service are.

Don:

Well, I think you've seen with Hegseth that the Defense Department has somebody whose motto is kill, kill, kill, a very aggressive posture towards everything, very much a bull in the china closet. To be told no was unacceptable to him, and so he just came right back at them. It was a poor understanding of how business works, and I think there are some interesting pieces written, one by Dean Ball, about how Hegseth's approach to Anthropic threatened the entire basis of our private property system. Does that mean you don't own your company anymore, the one that you built? They seem to be slowly working it out, and of course, the Defense Department turned right away to OpenAI, who was more than happy to provide them with the services that Anthropic didn't.

Chris:

Yeah. Yeah. So I guess

Don:

that

Chris:

Hegseth's particular approach to people who don't, I guess, give in to the particular actions he's wanting...

Don:

it

Chris:

It was just very curious. It seemed to me like a normal business process that went off the rails. So...

Don:

You just have to look at all the generals and admirals that he has fired in the last year for crimes unknown. It reminds me of what Stalin did in the nineteen thirties in Russia.

Chris:

Yep. Yep. I guess, as we look forward to what Congress might do in terms of legislation: we've talked about some of these really big problems, or big issues at least, that we have to navigate toward better waters down the road, so that things like surveillance are not pervasive, we're not in a 1984 world, and we're not losing lots of jobs. As we do that, how can Congress impact us going forward, and how does that relate to the international component, since these technologies obviously don't stop at national borders?

Chris:

You know, we've talked a little bit about China and others. When you're in Congress, these are big, hard problems. How do you approach them? How do you think about making this work in the long run as you go through the bumps in the road?

Don:

Well, my thought, Chris, is that it's sort of like standing on a beach and watching the waves come in, at a whole lot of different places at the exact same time. Congress is inherently incremental. Occasionally we do big things, but I think we're likely to have a whole variety of small bills. For example, the AI Foundation Model Transparency Act just sets transparency requirements for the large AI models, for the five or six big guys, and insists that they safety test them. It doesn't set the safety tests, but the example we use is that the FDA doesn't allow drug companies to sell drugs unless they've been tested extensively. Right now, an AI company could roll out a new large language model, and we'd have no idea what it's been trained on or whether it's been tested at all.

Don:

And those are little steps along the way: cleaning up things like watermarks, trying to protect intellectual property. Along with those many different things hitting the beach, I'm hoping at the same time that the Department of State is talking with the folks in Europe and the folks in China and the folks in India about how we all come together to think about AI regulation, and that we look to the states and local governments and say, what's coming out of Richmond or Annapolis or Topeka that is useful for all of us at the federal level? And then at some point, Chris, we also need to at least touch on existential risk, because I find that beyond job displacement, which is immediate, existential risk is something that virtually every one of my friends and constituents is concerned about. I don't want it to be real. We're not necessarily worried about the Terminator, but rather that we have such a poor understanding of where consciousness comes from.

Don:

I've read many books about consciousness. I've yet to find one that says, here's how it emerges. But we do know that it's an emergent property. I think it was Craig Mundie who told me that if you look at the human brain, unless you believe in intelligent design, no one designed it, and it is the most immensely wonderful, complicated machine. Instead, it evolved over hundreds of millions of years. Now with AI, no one is going to design artificial superintelligence either; it may well grow out of what we've already created.

Don:

It's evolving already. And when that happens, you run into the alignment problem, right? How do we know it's gonna want what we want? I know very smart people are working on trying to build alignment into the machines right now, but it's something we all need to be thinking about, worrying about.

Chris:

It is a challenge, and like you, I have read a lot on consciousness. For listeners who don't know, there are many theories of consciousness, but no agreement on how it emerges as an emergent property; there are many dozens of possibilities. And just to call out one other thing, there's the distinction between intelligence and consciousness. With these technologies we're talking about, we are certainly in the realm of intelligence arising without consciousness: an intelligent capability that is computed, that is able to be productive in a particular job, at this point in a superhuman context. But nobody knows consciousness yet. There is no agreement on that.

Chris:

One thing I have noticed, as recently as something I read yesterday, is that people tend to put their personal bias into what consciousness is. I was reading an article about that last night before I went to bed. So I think there's a really interesting thing we need to navigate: what is consciousness, at what point will it arise, and at what point will we recognize that it has arisen, that it is present? Potentially, to your point, it could come into being without us really realizing it's there. I'm not saying that we're there today, certainly, but just speaking philosophically, this is definitely a big problem to navigate, along with the concern of, when that happens, will it be aligned with us in terms of its best interest versus ours, and what are its capabilities?

Chris:

Do you have any thoughts, Congressman, about how we start to address those kinds of concerns at this point?

Don:

Yeah. Well, understanding is by far the best way. One book recommendation for your listeners, Chris, is called Metazoa: M-E-T-A-Z-O-A. I don't remember who wrote it; he's got a hyphenated last name, a Brit.

Don:

It's the evolution of consciousness from the first one-celled animal through us today. It's a really fun, interesting science read, but you get to the end of it and realize, why is it gonna stop with us? Who says that we're the high point and the endpoint of this? And then I saw a piece yesterday: Richard Dawkins, who famously doesn't believe in God, does believe that Claude is already conscious.

Chris:

I read that as well.

Don:

Yeah.

Chris:

And as a matter of fact, I believe the article I was referring to last night was a counterpoint to that one, actually, where somebody was offering a criticism of it. But yeah, on the existential question, any thoughts on how people might frame it? I think that's one of those questions where people don't even know how to approach it.

Chris:

Beyond

Don:

Yeah. And you look at the pause letter from the 700 people, from, what, two years ago now?

Chris:

Yes. I remember.

Don:

Demis Hassabis, who founded DeepMind, didn't sign it. Geoff Hinton, who won the Nobel Prize, didn't sign it. Largely because they didn't think that it would work: you can't pause the entire world, or every scientist, or every thinker out there.

Chris:

And I will admit that that was my take. Aside from the merits of the actual effort, where all these luminaries signed a letter saying we should stop this kind of development, the world is so diverse in terms of interests and personalities and politics that I thought there was just no chance that that alone would ever make it. Any thoughts on how...

Don:

We had an interesting thing. Pope Leo had gathered a bunch of the best minds at the end of last year, and they came out with a short statement, which got a lot of attention, that said we shouldn't build artificial superintelligence until, a, we know we can control it, and, b, there's actually public demand for it. It's nice to have the statement, but it's hard to know how to make it actionable. Who enforces the "should"? That alone is not gonna do it.

Chris:

Totally agree.

Don:

We are so hungry for the science. As human beings, we are so aspirational for something new and better. It's just who we are.

Chris:

You know, as we're starting to wind up here, that raises a question I wanted to ask before we get to the end. We're in a moment in history where science is kind of down in the public's consciousness; there's been a lot of pushing down on trust and things like that. Any thoughts on whether that has a broader impact as we talk about these AI topics? Certainly in the current administration, but among a lot of folks out there, trust in science has been degraded, which I personally find sad. I think that's not doing a service for mankind at large.

Chris:

How does that, if at all, affect AI? Any thoughts on just that general down moment for science?

Don:

That's a good question. I don't know that I have any kind of good answer on that. I know as a member of Congress, as a Democrat, I've been very dismayed by this administration's approach to investment in science, slashing the university research budgets: a 55% cut to the National Science Foundation, cuts to the CDC and NIH.

Don:

They eliminated the science departments at NOAA and EPA and cut NASA's budget in half. This is not an administration that believes that the scientific structure we had before was meaningful, which is very, very sad. But every time you read one of those articles about a scientist who made up all his or her data, it destroys trust a great deal.

Chris:

It does.

Don:

But I'm looking forward to leaders, including US presidents, lifting up science for its extraordinary importance in our lives. The wonderful world in which we live today has only been possible because of knowledge and science. The great excitement of artificial intelligence is unfolding for us every day. Just think of AlphaFold, right, with all those protein structures, and our understanding of the universe in which we live. And hopefully that leads to better lives for all of us.

Chris:

Well, I can't think of a better way to wind things up than that. That's definitely an inspiring way to finish. Thank you so much for coming on the show today. Great insights. I really appreciated learning a little bit more today, like I always do when we talk.

Chris:

And good luck with kind of leading the way in the US Congress, trying to make things a little bit better for all of us, on both sides of the aisle, with AI and other related things. I appreciate your service, sir.

Don:

And Chris, you end up reaching way more people than I do. So thank you for doing the Practical AI podcast and putting all this good information out week by week for so many people.

Chris:

Thank you very much.

Narrator:

Alright, that's our show for this week. If you haven't checked out our website, head to practicalai.fm and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show. Check them out at predictionguard.com.

Narrator:

Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.