AI News Podcast | Latest AI News, Analysis & Events | Daily Inference

The AI industry is facing its most consequential week yet, and the fallout is just beginning. Anthropic was effectively blacklisted by the Pentagon for refusing to let its Claude AI be used for autonomous weapons and mass surveillance, only for OpenAI to swoop in and cut a deal with the Defense Department within hours. Sam Altman has since admitted the move looked 'opportunistic and sloppy,' and critics are pointing out that OpenAI's amended terms look suspiciously similar to the very lines Anthropic refused to cross. Even more alarming: reports confirm Claude was already used to shorten military kill chains in operations targeting Iran, raising urgent questions about AI accelerating warfare faster than humans can oversee it. ChatGPT uninstalls surged nearly 300%, a grassroots campaign claims over a million cancelled subscriptions, and Claude briefly dethroned ChatGPT on the App Store before crashing under the weight of new users. Meanwhile, Google's latest Pixel update lets Gemini take real actions inside apps like Uber and Grubhub, and a new research system called MEM is giving robots up to 15 minutes of memory to complete complex, multi-step tasks. AI coding tool Cursor crossed $2 billion in annualized revenue, and Cambridge researchers unveiled a tool that translates neural networks into human-readable math equations, a potential breakthrough for AI transparency. And in a move with massive creative economy implications, the Supreme Court declined to rule on whether AI-generated art can be copyrighted, leaving the question dangerously unresolved.

Subscribe to Daily Inference: dailyinference.com
Love AI? Check out our other AI tools: 60sec.site and Artificial Intelligence Radio

What is AI News Podcast | Latest AI News, Analysis & Events | Daily Inference?

Your Daily Dose of Artificial Intelligence

🧠 From breakthroughs in machine learning to the latest AI tools transforming our world, Daily Inference gives you quick, insightful updates every single day. Whether you're a founder, developer, or just AI-curious, we break down the news and trends you actually need to know.

Welcome to Daily Inference, your daily dose of the most important stories shaping the world of artificial intelligence. I'm your host, and today we have an absolutely packed episode covering some of the most consequential AI news we've seen in a while. From battlefields to boardrooms, AI is at the center of some serious debates right now. Let's get into it.

But first, a quick word from our sponsor, 60sec.site. If you've ever wanted to build a professional website without the headache, 60sec.site uses AI to get you online in, you guessed it, about sixty seconds. Whether you're launching a business, a portfolio, or a side project, check them out at 60sec.site.

Alright, let's start with the story that has dominated the AI conversation this week, and it's a big one. The relationship between AI companies and the US military just got a whole lot more complicated, and the ripple effects are being felt everywhere.

Here's what happened. Anthropic, the maker of the Claude AI assistant, was essentially blacklisted by the Pentagon after refusing to allow its technology to be used for mass surveillance or fully autonomous weapons systems, meaning AI that could select and kill targets without a human making the final call. Defense Secretary Pete Hegseth reportedly dismissed those safety provisions as, quote, woke. Within hours of Anthropic being pushed out, OpenAI swooped in and struck a deal to supply AI to classified military networks.

Now, Sam Altman later admitted the whole situation looked, and I'm quoting him here, opportunistic and sloppy. He's since amended the deal, saying OpenAI will also bar its tech from mass surveillance and intelligence agencies like the NSA. But critics aren't buying it: MIT Technology Review noted that OpenAI's so-called compromise essentially mirrors the exact lines Anthropic was willing to draw, raising the question: did OpenAI actually offer anything different, or did they just find more politically palatable language to say the same thing?

And here's where it gets really alarming. Reports emerged that Anthropic's Claude had actually already been used by the US military in operations related to strikes on Iran, essentially shortening what military planners call the kill chain, the process from identifying a target to executing a strike. Experts are warning this represents a genuinely dangerous new threshold, one where AI is accelerating military decision-making faster than humans can meaningfully oversee it.

The public reaction has been swift and striking, no pun intended. ChatGPT uninstalls surged by nearly three hundred percent after the DoD deal news broke. Claude shot to the number one spot on the Apple App Store in the United States, actually dethroning ChatGPT for a day. And tech workers signed an open letter urging the Defense Department to drop its designation of Anthropic as a so-called supply chain risk. Meanwhile, Anthropic rolled out new features to capitalize on its moment: upgrading Claude's memory capabilities and making them available to free users, and even adding a tool to help people import their data from ChatGPT or Gemini if they want to make the switch. In an irony of ironies, though, Claude experienced a widespread outage on Monday; apparently the surge in new users was a bit much to handle.

All of this is unfolding against a backdrop of deepening public distrust. A grassroots campaign called QuitGPT has reportedly gathered over a million people who've cancelled their ChatGPT subscriptions, with celebrities like Mark Ruffalo and Katy Perry lending their voices to the movement. And historian Rutger Bregman published a compelling op-ed arguing that OpenAI is already on track to lose fourteen billion dollars this year, with its market share eroding, suggesting that consumer pressure might actually be more powerful than people think.

The core tension here is one we'll be wrestling with for years: can you be a safety-conscious AI company and also a defense contractor? Anthropic said no and paid a political price. OpenAI said yes and paid a reputational one. Neither answer looks clean.

Moving on. Google has been busy this week on two very different fronts. On the consumer side, the March Pixel drop for the Pixel 10 series brings a genuinely significant new capability: Gemini can now take actions on your behalf inside apps like Uber and Grubhub. Ask it to order groceries or book a ride, and it'll handle it while you do something else. You can supervise or stop it at any time, but this is a meaningful step toward AI that doesn't just answer questions; it actually does things for you in the real world.

On the developer side, Google quietly dropped Gemini 3.1 Flash-Lite, the most cost-efficient model in its Gemini 3 lineup. It's designed for high-volume, production-scale applications where low per-token cost and minimal latency are the primary goals. It features adjustable thinking levels, essentially letting developers dial up or down how much computational effort the model applies to a given task. It's currently in public preview via Google AI Studio and Vertex AI. This is Google continuing to build out a full spectrum of models, from flagship reasoning powerhouses down to lean, fast options for tasks that don't need maximum intelligence.
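For the developers listening, here's roughly what that dial looks like in code. This is a minimal sketch assuming Google's google-genai Python SDK; the model ID comes straight from the announcement, but the exact name of the thinking-level parameter is our assumption, so check the official docs before shipping anything:

```python
# pip install google-genai
# Minimal sketch, assuming the google-genai Python SDK.
# "gemini-3.1-flash-lite" is the model ID as announced; the
# thinking_level field is an assumption and may differ in the real API.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-3.1-flash-lite",
    contents="Classify this support ticket as billing, bug, or other: ...",
    config=types.GenerateContentConfig(
        # Dial thinking down for cheap, low-latency, high-volume work;
        # a higher level trades cost and latency for more deliberation.
        thinking_config=types.ThinkingConfig(thinking_level="low"),
    ),
)
print(response.text)
```

The same request with a higher thinking level buys more deliberate reasoning at the cost of latency and tokens, which is exactly the trade-off Google is exposing here.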

And speaking of the intelligence-at-scale race, AI coding tool Cursor has reportedly crossed two billion dollars in annualized revenue, with its run rate doubling in just the last three months. For a four-year-old startup, that's a staggering number, and it signals just how fast enterprise adoption of AI development tools is accelerating. Meanwhile, Anthropic added voice mode to Claude Code, its coding-focused product, making it easier for developers to have a natural conversation while working through complex programming problems.

Now for something on the research frontier that doesn't get enough attention: a team from Physical Intelligence, Stanford, UC Berkeley, and MIT just unveiled a system called MEM, a multi-scale memory architecture for robots. Here's the problem they're solving. Most robotic AI systems today can only see what's directly in front of them, or at best remember the last few seconds of activity. That makes complex, multi-step tasks, like cleaning a kitchen from start to finish or following a long recipe, basically impossible without constant human guidance. MEM gives robots up to fifteen minutes of usable context, organized across multiple time scales, allowing them to track what they've already done and plan what comes next. It works with Google's Gemma 3 four-billion-parameter model as the underlying vision-language-action system. This is a genuinely meaningful step toward robots that can operate autonomously over realistic task timescales.
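If "memory organized across multiple time scales" sounds abstract, here's a toy Python sketch of the general idea. To be clear, this is our illustration, not the MEM team's code: keep recent observations in fine detail, and fold older ones into progressively coarser summaries so a fixed context budget can span roughly fifteen minutes.

```python
# Toy multi-scale memory buffer (illustrative only, not the MEM system).
import time
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Event:
    t: float
    summary: str  # e.g. a caption of what the robot just saw or did

@dataclass
class MultiScaleMemory:
    short: deque = field(default_factory=lambda: deque(maxlen=30))   # seconds
    medium: deque = field(default_factory=lambda: deque(maxlen=60))  # minutes
    long: deque = field(default_factory=lambda: deque(maxlen=15))    # ~15 min

    def observe(self, summary: str) -> None:
        self.short.append(Event(time.time(), summary))
        # When a buffer fills, compress its oldest half into one entry at
        # the next-coarser scale (naive concatenation here; a real system
        # would use a learned summarizer).
        if len(self.short) == self.short.maxlen:
            olds = [self.short.popleft() for _ in range(self.short.maxlen // 2)]
            self.medium.append(Event(olds[-1].t, " then ".join(e.summary for e in olds)))
        if len(self.medium) == self.medium.maxlen:
            olds = [self.medium.popleft() for _ in range(self.medium.maxlen // 2)]
            self.long.append(Event(olds[-1].t, " ... ".join(e.summary for e in olds)))

    def context(self) -> str:
        # Flatten all scales, coarsest first, into a prompt for the
        # underlying vision-language-action model.
        return "\n".join(e.summary for e in (*self.long, *self.medium, *self.short))
```

The payoff is that the robot's policy always sees a bounded amount of context, but that context reaches much further back in time than a raw frame buffer ever could.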

On the transparency front, researchers at Cambridge University released a tool called SymTorch, a library that takes the black box of a trained neural network and attempts to express what it learned as actual human-readable mathematical equations. This matters enormously for AI safety and trust. Right now, even the people who build AI models can't fully explain why they make specific decisions. SymTorch uses a technique called symbolic regression to crack open that black box, which could have significant implications for everything from medical AI to autonomous systems where explainability is legally and ethically required.
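We haven't walked through SymTorch's own API, so here's the underlying technique, symbolic regression, sketched with the established PySR library instead: sample a trained network's input-output behavior, then search for a compact equation that reproduces it.

```python
# pip install pysr torch
# Illustrative symbolic regression with PySR (not SymTorch itself).
import numpy as np
import torch
import torch.nn as nn
from pysr import PySRRegressor

# A small network (randomly initialized here) stands in for the trained
# black box we want to explain.
net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

# Sample its input-output behavior over the region we care about.
X = np.random.uniform(-2, 2, size=(500, 2))
with torch.no_grad():
    y = net(torch.tensor(X, dtype=torch.float32)).numpy().ravel()

# Search for a short closed-form expression that matches those outputs.
model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["tanh", "exp"],
)
model.fit(X, y)
print(model.sympy())  # best equation found, as human-readable math
```

Whether the recovered equation is faithful enough to trust is exactly the hard research question, but in domains where explainability is required, even an approximate closed form beats no explanation at all.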

And finally: the Supreme Court declined this week to hear a case asking whether AI-generated artwork can be copyrighted. By refusing to take up the appeal, the court effectively left in place the existing rule: no human authorship, no copyright. It's a question with massive implications for artists, AI companies, and the entire creative economy, and yet the court essentially punted on resolving it definitively. Expect it to resurface, and soon.

That's your Daily Inference for today. A week where AI moved from your phone to the battlefield and back again, where the ethics of these systems are being tested in real time with real consequences. It's a lot to process, but this is exactly why we're here.

For deeper dives on all of these stories and more, head over to dailyinference.com and subscribe to our daily newsletter. We break it all down every single day so you don't have to. And again, if you need a website built fast, check out our sponsor at 60sec.site: AI-powered, live in sixty seconds. Thanks for listening. Stay curious, stay critical, and we'll see you tomorrow.