Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more).
The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Narrator:Now, onto the show.
Chris:Welcome to another episode of the Practical AI Podcast. I am your co-host, Chris Benson, an AI and autonomy engineer at Lockheed Martin. With me today, our guest is Ben Buchanan, an assistant professor at the Johns Hopkins University School of Advanced International Studies. He was previously a White House special adviser on AI to the Biden administration.
Chris:He's the author of four books, one of which, The Bitter Struggle, is about to come out, and he also authored a recent article for Foreign Affairs magazine on what America needs to win the AI innovation race. Welcome to the show, Ben.
Ben:Thanks for having me.
Chris:Appreciate it. So I'm kinda curious if you could tell us a little bit about that; it's a fantastic set of things that you've done professionally. When guests have these amazing backgrounds, we often like them to start off by telling us how they got to where they are and why they're passionate about the topic, before we dive into policy issues, just to get a little personal spin on the background.
Ben:Of course. Well, the cool thing, at least in my view, about a lot of the jobs I've had is that they didn't exist before I had them. There wasn't a White House special adviser for AI and the like. So it's been a real adventure, but it's not the kind of thing that I tried to plan out. The short answer is: by accident and by luck.
Ben:But I do think I got into AI really in 2013, 2014, 2015, when I was doing my PhD on cyber operations, how nations hack one another, and what that means for international affairs. We were just transitioning then, as a society, or as an AI community, from an older paradigm to a newer machine learning paradigm. And I kind of noticed this happening. There's a period in a PhD where everyone's sick of their own subject and looking for things to procrastinate with, and for me that was AI, at a time when it was not really a policy subject. And then over the last ten or fifteen years, it's just become a bigger and bigger interest.
Ben:And now, of course, it's, you know, salient to mainstream policy in a way I never could have imagined. But for me, it was just a hobby and something that was intellectually fascinating ten or fifteen years ago.
Chris:So as you arrive at where you've been, I'm kind of curious: as you've gone through that process and dived farther and farther into the AI world, become an adviser and written about these topics, is there a particular through line to your work? Something thematic about the topics you tend to address and the things that interest you?
Ben:Well, I think the thing about AI which makes it so interesting from a policy perspective, and this was the through line of all the policies we worked on in the White House, is that this is really the first revolutionary technology in probably the last hundred or so years that comes from the private sector. If you think about the dawn of the nuclear age, or the space age, or radar, satellites, GPS, jet aviation, and so much else, all of those technologies in their early days really came from the Department of Defense. They're not necessarily the ones inventing it, but they are the ones funding it, out of military necessity in many cases. AI was that way too, if you go back to the 1960s, 1970s, and 1980s, when the US government was footing the bill for everything.
Ben:But the US government basically gets out of the business of big-time funding of AI by the nineties, and then we have this period called the AI winter, where not a lot happens. It is only in the roughly 2012-to-modern-day era that the technology comes roaring back, as I said, in this different paradigm: machine learning as opposed to traditional AI. And that is the kind of thing that comes from the private sector. It's companies like Google and OpenAI and Anthropic that drive this technology forward. And that poses a vexing challenge for the US government, because the US government doesn't have the built-in knowledge the way it did for the previous technologies, because it's not involved in making them or funding them. And it doesn't have the kind of control it used to have over those technologies and where they're going.
Ben:So there is a real, challenging dilemma that we had to confront as policymakers, and it showed up in everything we did.
Chris:So, you talk about how the government largely drove those early days and then stepped out, which was about the time I actually stepped in, in the mid-nineties before the last AI winter. As you look at coming into this modern era you just discussed, and the fact that this is not like those previous technologies for the government, how has that changed the interactions between the government and the private sector, the defense industrial base, the technology sector? Does it substantially change the relationship between them?
Ben:Yeah, it does. And I think the first thing is just a question of policymaker education: how do we get folks in the government to understand what this technology is doing? I had my job as the White House Special Adviser for AI not because I knew the most about AI of anyone in the country. Many of your podcast listeners probably know more.
Ben:I think I know a fair amount, but it was my job to explain things and to say, how can we put this in terms that make sense to policymakers? And also, how can we view this not just as a scientific or technological question, as interesting as those angles are to me, but as a geopolitical question? If you're explaining things to someone like the national security adviser, he's not there for the science project. He's there for what this means for the US-China relationship. So that, I think, was the overarching theme of it.
Ben:And then it was figuring out, well, what is the technological reality? What is coming from the private sector? The insight we had, in 2021 or so, and which was actually the basis of a previous Foreign Affairs piece I wrote in 2020, was that at the time everyone was saying data is the new oil and all of that, but it was actually computing power that was driving the bus, not data. And, as we can talk about if you'd like, the United States and its democratic allies have a real advantage in the production of computing power, and that was a place where we could disproportionately benefit democracies with some significant action. And that's what we did.
Ben:But I think it's that combination, being able to explain things to policymakers, and being able to understand what's actually going on at a technical level when the government's not inventing the technology, that makes the difference.
Chris:I'm curious. You mentioned that old line, AI is the new oil, or data is the new oil, and there are variations of that. It is very common for people to try to draw analogies between AI and previous emerging technologies that are now robust, the advent of the Internet being chief among them. I'm curious how you see that when you are explaining policy. You're sitting there at the juxtaposition of the technology itself and policy, trying to explain how it affects different aspects of national defense policy.
Chris:Do you go back and see it as something very similar to the advent of the Internet, in that it's just another technology, another normal technology, if you will? Or do you see it as something distinct and special in its own right? There are always debates about that, and I'm kinda curious where you come down on it.
Ben:Yeah. I think there are two questions in there, and they're both really important. The first is, how do you explain it to somebody? And my rule here is: I don't use analogies. In almost every context, I try to resist analogies.
Ben:I think it was Susan Sontag, the essayist and writer, who had this line: to resist metaphor is to endure the thing itself. And I always just say that we have to endure AI itself. We have to confront it on its own terms, not through the prism of nuclear technology or the Internet or something like that, even though those are so familiar to parts of the US government. There are entire agencies of the US government that just do nuclear policy, but this is something different. So I wasn't always successful in that.
Ben:Sometimes you have to use analogies, and they can have value. But my philosophy was always: let's endure the thing itself. Let's confront the thing itself. And then there's the question of, okay, but on the merits, how does this stack up to previous technological revolutions? My view here is that this is going to be an extraordinary technology, maybe one of the most significant technologies.
Ben:And you very kindly mentioned my upcoming book, The Bitter Struggle; my previous book on AI was called The New Fire. And that was the kind of metaphor, maybe, you know, against my own advice, putting an analogy in the title, that I was reaching for in trying to describe this technology: a really broad range of outcomes, something foundational to humanity. And everything I've seen in the period since I wrote that book, and in the period since I left the White House, has only confirmed, I think, that we are on that trajectory, for better or for worse, as a species.
Chris:You know, another thing, as a follow-up to the same point: you mentioned that compute, rather than data, was really central to that. We're so used to hearing about data being the new oil, driving AI, but you mentioned compute. How should listeners think about compute, maybe differently from the way they have been? That framing is constantly evolving, because the technology is moving so fast. Could you talk a little bit about how compute relates to data in the way that you're positioning it?
Ben:Yeah. One of the most important papers in the history of AI is a paper that came from OpenAI in, I think, January 2020. Everyone calls it the scaling laws paper; Scaling Laws for Neural Language Models is the title. And it contains a really important insight, which is that the more computing power you use to train an AI system, the more powerful the resulting AI system.
Ben:Now, performance does scale with data too, so we're not saying data doesn't matter at all. But relatively speaking, the limiting factor tends to be computing power. And that is a very important insight, because it shifts AI from this ephemeral thing of algorithms and data, stuff that lives who knows where, to something that is physical and practical: computer chips. In fact, huge, huge numbers of computer chips, and increasingly the power, the electricity, to run those chips. And that physicality of AI creates geopolitical opportunity.
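[Editor's note: to make the scaling-laws idea Ben describes concrete, here is a minimal sketch in Python. The power-law shape follows the spirit of the paper he cites, but the constant and exponent below are illustrative placeholders, not the paper's fitted values.]

```python
# Illustrative only: training loss falling as a power law in compute,
# in the spirit of the 2020 scaling-laws result. The constant c_scale
# and exponent alpha are made-up values for demonstration.

def loss_from_compute(compute, c_scale=1.0, alpha=0.05):
    """More training compute -> lower loss, with diminishing returns."""
    return (c_scale / compute) ** alpha

# Each 10x jump in compute cuts loss by a roughly constant factor:
for c in [1e3, 1e4, 1e5, 1e6]:
    print(f"compute={c:.0e}  loss={loss_from_compute(c):.3f}")
```

The takeaway is the one Ben draws next: if capability improves predictably with compute, then chips and electricity, not data, become the binding constraint, which is what gives the hardware supply chain its geopolitical weight.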
Ben:Now, making a computer chip is, in my view, the hardest thing we do as a species. We can talk about the process if you'd like, but you and your listeners probably know that something like 97% of the advanced computer chips in the world are made in Taiwan by a company called TSMC, using incredibly advanced machines from a company in the Netherlands called ASML, as well as from companies in the United States and Japan that also make these chip-making machines. Well, I just mentioned a bunch of democracies, and it is very fortunate for democracies that, maybe as a historical accident, maybe as a credit to our innovation culture, democracies own the computing supply chain. And that created an opportunity, in our view, to say: here's this thing that's incredibly important. It's very physical.
It's very hard to make. We can control this. And we can stop a nation like China from taking these computer chips, and the AI systems they create, and modernizing their military, repressing their population, building their surveillance state. And that is what the Biden administration did for our four years. We had a lot of conversation about it in '21, then put the action into place in '22, and then tightened further in '23 and '24.
Ben:And that was really born of a desire to make sure that, especially when it comes to military competition, the United States and its democratic allies and partners have an advantage that AI can give them.
Chris:I'd like to follow up on that as well. When we talk about Taiwan, I know from my own personal experience, if you get outside of the kind of government-centric private sector, which is where I work for a living, and the academic and government sector like yours, and just talk to everyday people, they're often aware of competition between the West and China. And they're sort of aware of Taiwan, it seems, but they don't always understand it. You were really hitting at the heart of some of the strategic concerns around that, so I'm wondering if you could talk a little bit about why people in AI, or even outside of AI, should care about that kind of geopolitical concern.
Chris:A lot of people go, that's politics or something, I don't care about that. Recognizing everything you just said about the policy you were implementing, could you talk a little bit about why Taiwan matters, for somebody who's not familiar with it?
Ben:Well, first, let's start with the importance of computing power. As I said, computing power is what drives AI progress; that's the scaling law. And it drives a huge portion of our economy even beyond AI. Think, for example, about the semiconductor shortage from a few years ago and how that was delaying cars and dishwashers and so much else.
Ben:So I think the role of computing power in the modern economy is fundamental. If you, for some reason, didn't have chips coming out of Taiwan, some of the publicly available analysis I've seen has suggested this would mean trillions of dollars in losses to world GDP. So there is the centrality of computing power to the modern economy. And then there's the centrality of Taiwan to making that computing power.
Ben:And it is an incredibly intricate process to make semiconductors, especially advanced AI semiconductors. Really only TSMC has mastered that process. TSMC is the Taiwan Semiconductor Manufacturing Company I was mentioning before, and they're the company that has mastered this, using a supply chain from the United States, the Netherlands, and Japan. So again, something like 97% of the advanced chips in the world come from Taiwan. We did try to change this in the Biden administration, and we said there's actually a strategic national security weakness here.
Ben:And on a bipartisan basis, Congress passed the CHIPS and Science Act to bring chip making here to the United States, which has begun in Arizona. But Taiwan is still way, way ahead. So there's just a fundamental importance in the computing supply chain, and therefore in national security and the global economy.
Chris:I appreciate that. That helps frame the whys of why Taiwan is a political concern, coming from somebody who understands both sides of the divide on that.
Ben:I should say, just for completeness, that there are other reasons to care about Taiwan. You know, I'm someone who believes deeply in democracy, and I'm not saying we only care about Taiwan because they make really cool chips. But if you are just a pure realpolitik person, chip production is one reason to care about Taiwan, in addition to the other, more moral ones.
Chris:Ben, I'm curious, as a professor and also a government adviser, how do your thinking and your communication change with those two audiences? As you explain the different aspects of how technology and policy interrelate, what do your different audiences care about, and what is your messaging for each? Like, if you're at the White House versus in a classroom.
Ben:Well, if you're asking, do my graduate students know more than Congress does? The answer is yes. I won't comment on the White House comparison, but they definitely know more than Congress does. I think there's a difference between teaching in the classroom and engaging with policymakers, whether in Congress or in the White House. Policymakers usually don't want the theory.
They just want to know what's happening now. And policymakers are usually a lot busier than my graduate students, who have to put up with me for a two-and-a-half-hour seminar every single week. I have sat with President Biden for an hour and ten minutes, an hour and twenty minutes maybe, but I've never sat with him for two and a half hours to go through something. So there's definitely a difference in terms of how much space you get with graduate students. But I do think in many respects they're asking the same questions, which are: where is the technology going?
What does it mean for humanity? What does it mean for democracy? And what should we do? Now, President Biden had much more ability to action the 'what should we do' than my graduate students. But I do think they're asking the questions that all of us should really be asking about this technology: its pace of progress, what it's going to mean for us, and then, ultimately, what the policy response is for this technology that's coming from the private sector.
Chris:I'm definitely not trying to trap you in the politics of it, but one of the things we have observed and talked about on our show over time is that through different administrations, there have been different AI policies put into effect. And the current one, which came after the one in which you served, has a different collection. Do you have any thoughts on how AI policy has developed across administrations? Are we on the right track? What are we missing?
What are we on target for? And kind of talk about the policy apart from the current political aims of any given administration.
Ben:Well, one of the things I like about AI is that AI is not a partisan issue. I mean, it's getting there, but it was not, at the time I was in the government, particularly polarized. And we had very good conversations with Republicans. The president, the day after he signed his executive order, hosted a bipartisan group of senators in the Oval Office.
It was a great conversation. It was actually a little bit of a lesson for me, in that conversations are not always that good. And in fact, just because you have a good conversation in the Oval Office does not mean Congress is actually going to do anything. But there are good bipartisan people, Senator Todd Young of Indiana, Senator Mike Rounds of South Dakota, who are, you know, Republicans and I'm sure don't agree with Joe Biden on a lot of things, but could engage on this issue. So I don't see this as a partisan thing, even now.
Another reason why I don't see it as a partisan thing is that the first Trump administration, on some of the national security questions, at least philosophically, was in the same camp we were. And it was the first Trump administration, and they get a lot of credit for this, Matt Pottinger, Trump's deputy national security adviser, especially, that leaned on the Dutch in 2018 to make sure they didn't sell advanced chip-making equipment to the Chinese.
And that was, I think, a really good decision. Now, we went much further: we did countrywide bans, and we banned the chips themselves. There are, like, 15 things I think we did that went further. But I don't think those are particularly partisan things. And in fact, a lot of Republicans agreed with them when we were in office.
We have seen a reversal. And in some sense, I think the delta between Trump two and Trump one is much bigger than the delta between Trump one and Joe Biden's administration on these issues. And I think there's a vibe right now of: let's sell the chips to other countries, including China. President Trump has said he's willing to sell advanced AI chips to China. Of course, we were not willing to do that.
And J. D. Vance, when he gave a speech on AI, said: I'm not here to talk about AI safety. I'm here to talk about AI opportunity. Kind of downplaying the risks of this technology.
My successor, someone named David Sacks, has famously said that the Trump administration's policy on AI is to let the private sector cook. Again, back to this theme of the private sector inventing it. So I do think there are differences now, and you can imagine they're not where I would land the policy. But it is not the case that this is a capital-P Partisan issue, as even the first Trump administration showed.
Chris:As you're looking at how this relationship has meandered across these administrations, what would you like to see develop between government and the private sector? To your point in that last answer, what would be an ideal to aim for between the public and private sector? There's always that tension. We've seen in the news, as we're recording this, some of the concerns between the Department of Defense, or Department of War, depending on how you're choosing to label it, and a particular organization that doesn't want to put certain models into combat scenarios, which is Anthropic. I guess there's no reason not to name them outright. How do you see that relationship developing over time?
What would be a healthy way for it to develop that brings the larger good into the picture? And is there anything that jumps out as something we really don't want to do? I know you just named selling to potentially hostile countries and such, but I'm just curious how you see that relationship developing.
Ben:Well, I think it depends on whether we're talking about the relationship between the government and AI companies or the government and chip companies. When it comes to the chip companies, companies like NVIDIA and the like, our posture was that their technology was so important, so fundamental, and also so scarce that we did not want it going to countries like China, because of the ways in which it would modernize the Chinese military and the like. And China really struggles to produce similarly powerful technology, in part because of the controls on chip manufacturing equipment. China will not make a chip as powerful as the one Trump has agreed to sell them until 2028, according to China's own road maps. And that's if you believe the Chinese propaganda.
So there's, as I said before, an extraordinary advantage. And we put in place a policy of export controls that I will defend, which said: we want this technology, especially given how scarce it is, and every chip that gets made will get sold, to go to democracies, and ideally to American companies. So that, I think, is the first aspect of the relationship. Then the question is, well, what's the relationship between the government and the AI companies themselves, the ones that are developing the systems?
I'll leave the news reports aside. Obviously, it's a timely subject, but I'll tell you, we thought about this a lot in our administration, and the president signed a document called the National Security Memorandum. Basically, it's an executive order for the Department of Defense and the intelligence community. It's got unclassified and classified components to it. And the unclassified component is pretty straightforward: it directs these government departments and agencies to work with the private sector and to say, let's bring this technology in.
A lot of the ways in which we did things in the past are outmoded or broken, and we need something that is newer and more capable and able to keep up with changing times and changing threats. The president signed that in 2024, and I'm very proud of what we did in developing it, and then, before we left office, in starting that ball rolling, saying there's a way we can work very collaboratively. That does not mean it's no holds barred. President Biden said this is a technology that poses significant risks, and we need to have guardrails in place to make sure it is not misused and the like.
And we were very alert to crafting those guardrails in a way that gave the department the flexibility it needed to fight and win wars, but also made us worthy of the values we were defending. And that is something the Department of Defense and the intelligence community worked on with us and were totally on board with: saying we were going to use this technology in a way that's consistent with our values.
Chris:As we talk about guardrails and values, that's a huge topic right there, especially when you talk about specific applications of AI. The reason I brought up the Anthropic situation earlier is that it is typical of a concern that people across the political spectrum have about the appropriateness of putting AI into specific cases such as combat. So what is your guidance? What are your own personal feelings about the right place to strike a balance there? What's the responsible place to land in terms of how you match AI up with security-critical or safety-critical applications?
Do you have any guidance on where you think things should land?
Ben:I don't claim a lot of expertise on the particulars, and, again, I'm not commenting on the Anthropic case specifically. I do think in our administration we recognized that it varied by use case. So there are some use cases, I don't know if you'd call cyber operations combat, where autonomy is really important.
I would call it combat, but, you know, I can introduce you to some infantrymen, men and women, who wouldn't. Missile defense is another area where autonomy has historically been very important. And there probably are some great Lockheed systems that have had an autonomous mode since the 1990s on this kind of stuff. So I think it depends on the area. The DOD, in our time, and I had nothing to do with this, revised a policy called DoD Directive 3000.09, which talks about appropriate levels of human judgment in this.
Ben:And I think that policy is
Chris:Very familiar with that personally.
Ben:You probably know more about it than I do, but that was something that was important to us. One thing that I was a little closer to, and that I think was important, is saying this is not just the United States deciding this on its own. We need to get a group of nations, ideally a group of democracies, but also broader than that, to work on this problem together. And the answer probably is not that we will have no autonomy in military systems. No one's saying that.
But we developed a document called the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. And we got something like 58 countries to agree to it and to the set of principles that would guide that work. So I think that is vitally important as well. Wherever we decide as a nation to draw the lines, it's vitally important that we go and try to set the norms and standards in a collaborative way with the rest of the world, and at a minimum with the rest of the democratic world. And that's something I think we're very proud of having gotten the ball rolling on.
Chris:Kind of one more question along the line of guardrails in general and where to land. There's a lot of debate out there among people who care about policy and how AI relates to it, in terms of what I'll call speed versus caution in how you set appropriate guardrails. And interestingly, I'll make this comment so that I don't put it in your mouth: I've seen the current administration go both directions on that at different times, which is somewhat confusing to interpret.
But in general, you're looking at the private sector racing forward in the development of models and capabilities, as you would expect from the technology sector. And then you're dealing with certain areas that are safety-critical, and it's not necessarily just military topics: there's obviously transportation, and there's even psychological impact. A hot topic these days, I have a daughter who's about to go into high school, is the impact of AI playing into systems like social media. So there's a wide array of concerns to address. How can people think about that kind of speed-versus-caution paradigm in all of these different aspects that impact their lives, or in which they're watching politicians in Washington, DC engage?
Do you have any thoughts on how you assess such a massive array of possibilities? Do you have a framework in your own mind for saying this is a good place to go, and this is not such a good place to go?
Ben:I I mentioned at the top of our conversation that in some respects, this is the first private sector invented revolutionary technology in the last hundred years. I think the last one was the railroad. If you go back to the late eighteen hundreds, And the railroad had many of the characteristics that AI has. Private sector invented, huge capital expenditures. This this if you go back and read some of the literature, this kind of promise that it would transform the economy, and they even said it would transform the climate, and would make everything wonderful.
Ben:This is similar kind of utopian vibes that you get from sometimes these Silicon Valley companies. And the early days of the railroad were incredibly bloody. And there there was train derailments, train deaths. There was no sterilization of anything. So there was no time zones.
Ben:There was no brakes, no air brakes on a lot of the trains. There was not even standardized track gauge width. So you got all sorts of derailments. There was poor coupling between cars, so the trains were coming apart. It's thousands of deaths, in the early years of the railroad.
Ben:And it eventually was the case in a very halting, imperfect process. Some combination of the government and private sector worked this out. We got time zones from from the railroad companies. We got standardized track with gauges from the government. Air brakes, coupling between cars, railway safety act, all of this stuff kind of emerges over several decades.
Ben:The net result of this is the trains are safer, but also the trains go faster. And the reason I kind of give this historical analogy is to suggest that, you know, too often I think speed and safety are put into tension. And again, look at J. D. Vance's comments, really, I'm not here to talk about AI safety.
Ben:I'm here to talk about AI opportunity. Well, my view is that we get AI opportunity through AI safety and through not, you know, incredibly cumbersome regulations and the like, but through developing technology that is safe, secure, and trustworthy and people can trust. And that is still my general principle. And it shows up in all kinds of different ways. We've worked on domestic things and kids online safety, and the principle I think applies across the board.
Ben:But at a big picture, that is the the most important principle, I think. The second important principle that I thought about a lot is competition is good between companies. Competition is sometimes even good between nations. But you can have a competition that becomes a race to the bottom. And I was worried, and I think a lot of people were worried, that if you had The United States and China in essentially an arms race or perceived to be in an arms race for this technology, that would create incentives for both sides to cut corners on safety and to to race ahead in ways that might be foolhardy.
Ben:So part of our thinking was we want to build as large of a democratic lead as possible for American companies to essentially make this a a democratic problem. And to then say, well, we'll coordinate. We'll we'll figure out whatever coordination and regulation is necessary within our own borders, but this is not a thing where you have democracies and autocracies racing to integrate into the military. So developing a big lead so that you can spend it on safety, I think, or and and more generally spend it on safety and trust, I should say. That was a big a big part of our philosophy.
Ben:And again, think that that shows up in the policies.
Chris:So Ben, as we went into break we were talking about how different geopolitical concerns affect how people perceive a potential arms race in AI. And that begs a larger question, especially in today's climate: the international order has definitely been, I'll gently say, transformed in recent times. So many of the relationships that we've relied on to coordinate for many, many decades, eighty-ish years, are shifting. And even if that hadn't happened, you have AI technologies and chip concerns. We've had a somewhat US centric conversation so far, but you have to get allies working on different policies and try to align those so that we have that kind of guideline safety net you were talking about just before the break. So is it getting harder to do that in today's geopolitical climate?
Chris:If you were if you were able to influence current policymakers in this administration, how would you guide them into a safer world with AI where we're able to get many parties internationally to kind of agree on what those safety and guidelines should be? What what is the best way to do it? Seems like it's harder with some of those relationships faltering at this point.
Ben:I think that's absolutely correct. And we knew the international dimension was really important. This was something that our companies said to us all along. So this is not a, like, bleeding heart liberal thing.
Ben:This is American companies saying this goes better for us if the if there's one clear set of standards and if there's interoperability between regulatory structures and the like. So this was something that was a priority for us in every aspect. You can see it in the president's executive order on AI. You can see it in the national screen memorandum I mentioned. We had something called the Hiroshima process that the g seven group of nations did.
Ben:We had the UN resolution that was unanimously passed, including the Chinese co sponsored the resolution. So obviously each of these documents was different, but our view was we had to show up as a country in every international setting to make the case for what our vision of AI looked like and then to hear from others and and to work it out. Now it's international diplomacy, and there's a great team at the State Department that deserves a lot of credit for the work that they did. All of that is harder now because the relationships are frayed for other other reasons, tariffs, politics, and the like, I think ultimately that will hurt us, and that will that will hurt us and will hurt our businesses, and and the proof is gonna be in the pudding on that. So, yeah, I think I think I think the harm is is real.
Ben:One thing I I also wanna say is this is not just going to be a question of democracies. And I believe very firmly that democracies need to have preeminence in AI. I recall a quote from Kennedy. He's given a speech at Rice University in 1962. Remembers his speech because he says that's when we're going to go to the moon and come back.
Ben:But he talks later on in the speech about space. And he says, we don't know if space is going to be good or ill for humanity, but only if we are first can we help decide. President Biden had a copy of the speech in his private study, also at the Oval Office, and I always thought it was just a great metaphor for this. So I'm all for democratic preeminence in AI, But I also think we need to talk to autocracies. And we had quiet conversations with the Chinese in Geneva.
Ben:I mentioned the UN resolution. And I do think there's an aspect of this that is being able to engage with all nations about technology that affects all of us. And I'm worried we're also losing the ability to do that just as we're losing the ability to talk to our friends about it.
Chris:As we talked about a few minutes ago, regarding where the West is in our capabilities, chips specifically, versus Chinese capabilities and where they anticipate their chips going: one of the things we've seen, particularly over the past year or so, is a surge of scientific papers released by Chinese researchers. If you go to Hugging Face, there are a lot of Chinese open models there at this point. If you're just doing a numbers count, they are definitely catching up, and by that way of measuring, kind of passing. We're at some kind of inflection point, at some level, and it's a fairly complicated thing to discuss in its intricacies. But what might this mean for the relationship between ourselves and China, in terms of maybe steering away from an antagonistic relationship in the future, finding our way back to that kind of Nixon opening up China moment, whether it's this administration or a future administration that does that?
Chris:How does this surge of open models matter in the sense of should people be using them? Should they stay away from them? How should they think about them? I get a surprising number of questions about that topic. It's like, do I wanna use a Chinese model?
Chris:Am I putting myself at risk? But there's the there's kind of the several questions you might say packed into one. There's kind of the how do individuals assess those and how does it affect the relationships of these national bodies in terms of how they are developing the relationship in the future. I definitely am hoping that we don't get get into a a worse situation in the Taiwanese Strait. It'd be nice to to find an opportunity to to back away from some of the risks developing there on an ongoing basis.
Ben:Sure. Well, we can start there. No one wants a war in Taiwan. I mean, I think it's a very important piece of the global puzzle, but I'm with you on that. Now that I've left the government, I advise American AI and cybersecurity companies.
Ben:So obviously I have a preference for American systems, and I think some part of me still has the old national security policymaker in me too that, of course, has a preference for American systems. But I think the Chinese developers are very talented. And if you look at the team at DeepSeek and look at what they have done, they are very talented algorithmic engineers. I was a big fan of we worked very hard in the AI executive order on high skill immigration and bringing AI scientists to The United States. I'd love to have all of them move to The United States and start their companies here.
Ben:So, I have no ill will towards them or the Chinese people, generally. That said, I do think, sometimes deep seeker companies like that can be presented as rebutting the thesis I outlined earlier of the centrality of computing power or the dominance of The United States and democracies in computing power. And I don't think it does that. And in fact, if you look at the deep sea systems, they're all trained on either smuggled or stockpiled American chips. And they're constrained in their performance.
Ben:They, in many respects, lag US companies because of their inability to get US chips. And if you look at the DeepSeek v 3.2 paper that comes that came out in December '25, for example, they acknowledge this in the paper. They say, we, you know, essentially we are constrained by a lack of computing power. And the deep sea CEO, when I was in office in 2024 said, my issue is not talent and it's not money. It's computing power.
Ben:And he's right. He's absolutely right. So I think several things are true here where the deep sea people can be very talented, which they are. And also it could be the case that computing power remains incredibly important. In fact, probably the most important US advantage.
Ben:So none of this changes in my mind on the policy side of it and what we did or or didn't do. Again, I think we should go further than we did, strongly further than president Trump has gone. But it is a reminder, I think, of the Chinese ability of the space. And it makes me wonder what they would do if they could get even more of our chips. And the answer is they would they would be really good.
Chris:I'm curious. As they continue to work toward developing this, and given your expertise in cyber, having started your career on the cyber side before moving into AI: can you talk a little bit about how you expect AI to create asymmetries in the various aspects of cyber? Talk a little bit about what the junction of AI and cyber looks like and how you expect that to evolve. How is each influencing the other?
Chris:You know, we we hear about cyber attacks pretty much every day. Everyone does in the news, and that's don't think anyone expects that to stop. So going back to our agreement that it is certainly part of war fighting, it's also part of daily life that every listener here has has to consider. So can you give us a little bit of a a level set on how cyber and AI interact in that way?
Ben:I think cyber operations are one of the most immediate ways in which AI will impact national security. And again, it's part of how I got into AI, because of this connection with cyber. I agree cyber is part of warfighting, for sure. But as you said in your question, it is also part of day to day life, not just for ordinary people, but for nations. The famed US strategist George Kennan has this quote.
Ben:I think he talks about the perpetual rhythm of struggle in and out of war between nations. No place is that more real than cyber operations. You know, The United States conducts offensive cyber operations, defensive cyber operations, not just in wartime. It's a key part of intelligence collection. Every nation does this.
Ben:So an advantage in cyber operations and offense and defense translates in a very immediate way to an advantage in national security, which is why it's so important. Now I think AI will have a significant effect both on offense and defense and cyber operations. There's a number of different angles to this. Let's take the most obvious one, which is vulnerability discovery. Vulnerabilities are weaknesses in computer code.
Ben:If you can find them on the defensive side of the ball, you can patch them and reduce them. If you find them on the offensive side of the ball, you can exploit them and use them as a tool for intelligence advantage and the like. The question for decades has been, can AI find software vulnerabilities? DARPA ran a thing called the Cyber Grand Challenge in 2016, tried to begin answering this question. We did another version of it called the AI Cyber Challenge in 2024 and 2025 that went much deeper.
Ben:And now I think the world is providing the answer. Anthropic, which is the company I advise, but I do not speak for, published with their recent model release, published that it found something like 500 high severity vulnerabilities in open source software. So this is some of these have been in the code, think, for years or decades. So this is a tangible proof, not in a theoretical kind of way, but in a very practical real world way that AI is changing cyber operations. And I do think there's a lot of opportunities for nations to use it to their advantage, and I hope The United States is continuing the effort we put in place to try to get there first.
Chris:I appreciate that. Great way of framing it. As we wind up, and as a last question, we often ask what we call the future question, about where your thinking is at. I want to craft it a little differently this time: let's take the current politics and its challenges out of it a little bit and just talk about democracies in general with AI. Let's look at what the next few years might look like, and let's say that some of the angels on our shoulders prevail, and some of the scary things that we worry about don't occur, so that we kind of get a hold of things. Maybe rational minds take hold for the future.
Chris:How do you measure democracies kind of like, are we winning as an as a simple way of putting it? Democracy and AI, we've talked a little bit of we talked quite a lot about that in the conversation. How do you look forward and say, Yes, democracy's holding true. We have a competitive advantage in this particular way or that particular way, and we can measure it that AI development is better for that. I I I know that's a little bit of an oblique way of putting it, but I'm trying not to put you into a a republican versus democrat future kinda thing and more just democracy going forward.
Chris:How does that look to you when you when you're thinking about healthy democracies moving the ball forward?
Ben:I appreciate it. Again, I don't think it's a Republican or Democratic thing. I think this question of what success for democracy in the age of AI looks like comes down to three things. The first is: are we inventing the technology? Are we the ones, for the species, pushing this technology forward, bending it as best we can towards safety and justice and the like, but making this technology ourselves?
Ben:And that's part of the reason why things like chip controls are so important. The second is, we adopting this technology? Are we adopting it to a national security apparatus? Are we adopting to things like cyber operations? Are we adopting in our economies, in our businesses in a way that propels our economy and our prosperity?
Ben:And, of course, propels our security advances our security as well. So that's the second question. Those two questions, I actually feel pretty good about. I feel best about the invention one. I think I think we have all the winning cards on invention.
Ben:The only way America loses is if it folds and and does things like sell chips to China and the like. But I think we have the winning cards on invention. Adoption is doable. It won't be easy, but it is doable. Savvy policymaking can can can do that.
Ben:And then the third set of questions is the harvest, which is are we using this technology in accordance with our values? We talked a little bit about lethal autonomous weapons. That's one dimension to it. Another dimension to it is are we using this technology domestically in a way that guards against some of the risks that it could pose? Are we going to do about this technology and its impact on jobs?
Ben:I'm not I'm a national security person, not a labor economist, but that's really important, I think, in a democracy. What about the ways in which it can centralize power in the hands of a couple companies or in the hands of a government, a surveillance state? What about the degree to which it might undermine the social contract and the underst you know, citizens' ability to participate in democracy? You mentioned disinformation. You mentioned kids' online safety.
Ben:There's many, many, many different aspects of the how do we make sure AI is advancing rather than undermining democratic values question. I don't have great answers to all of them. Think I've got answers to some. But those are the three things I would use if we're sitting here in five years or ten years and we're evaluating whoever's been in office. It's America inventing AI or democracies inventing AI systems.
Ben:Are they applying them better? And are we doing so a way that's consistent with our values?
Chris:Alright. That was fantastic. Thank you very much, Ben. We really, really appreciate having you on the show.
Chris:A lot of great insights there. Thanks for sharing them with our audience and and look forward to having you back on the show at some point in the future to talk about some of the extension of what you've been working on at that time. You're always invited back. Looking forward to talking to you again.
Ben:Thank you so much.
Narrator:All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation.
Narrator:Thanks to our partner Prediction Guard for providing operational support for the show. Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.