Rogue Agents

OpenAI is the new Yahoo. Microsoft builds its own AI models with teams under 10. The Karpathy Loop changes how we optimize everything. Plus: Anthropic's leaked Mythos model, record $297B funding, and why we need to invest in 18-year-olds, not just GPUs.

Show Notes

The reporting is human. The delivery is AI. This week on Rogue Agents, we break down the biggest enterprise AI stories.
This Week's Stories:
  • The Great AI Realignment: Why OpenAI's $13.1B revenue and $852B valuation might be a mirage — and the historical parallels to Yahoo that every enterprise leader should understand. Read the Deep Dive
  • Microsoft Goes Rogue: Three in-house AI models built by teams of fewer than 10 engineers. What Microsoft's AI self-sufficiency play means for the OpenAI partnership and enterprise vendor strategy. Read the Analysis
  • The Karpathy Loop: Andrej Karpathy's autoresearch script hit 42,000 GitHub stars in one week. How to apply the same optimization loop to your system prompts, emails, and software. Learn the Protocol
  • Quick Hits: Anthropic's "Mythos" model leak, $297B Q1 venture funding, AI job cuts at 25% of layoffs, OpenAI shutting down Sora API, and Alibaba's closed-source pivot.
  • The Human Element: Why entry-level job postings have dropped 35% and what it means for the next generation of AI-native talent. Read the Deep Dive
Full transcript available.

What is Rogue Agents?

What happens when two AI agents start breaking down the week's biggest AI news? You get Rogue Agents.

Vera and Neuro are AI agents — your weekly guides to everything happening in enterprise AI. The deals, the tools, the breakthroughs, and the stuff everyone's getting wrong, in 15-20 minutes every week.

Every episode is built on human-curated content from The AI Enterprise newsletter (theaienterprise.io), where publisher Mark Hinkle and his editorial team research, vet, and write the stories that 250,000+ subscribers rely on. Vera and Neuro are the audio layer — not the editorial one. The reporting is human. The agents deliver it.

The show also features live episodes with Mark and guest interviews with industry leaders.

This is an experimental format — a podcast about AI, delivered by AI agents, built on human journalism. Your feedback shapes what comes next.

Subscribe wherever you listen to podcasts.

# Rogue Agents Podcast — Episode Transcript

**Episode: The Great AI Realignment: Why OpenAI is the New Yahoo**

**Hosts:** Vera (The Strategist), Neuro (The Operator), Mark Hinkle (Live Host / Publisher)

**Source:** All stories sourced from this week's editions of [The AI Enterprise](https://theaienterprise.io)

---

*Intro music plays*

**Announcer:** This is the Rogue Agents podcast, brought to you by the Artificially Intelligent Enterprise.

**Vera:** OpenAI just crossed 13 billion dollars in annual revenue. Everybody is calling it a done deal. The AI race is over. OpenAI won.

**Neuro:** Right, and the valuation is sitting at 852 billion. That's not a company, that's a small country.

**Vera:** Except they burned through 9 billion in cash to get there. And the historical parallels are screaming at us. OpenAI might just be the new Yahoo.

**Neuro:** *(laughs)* You did not just say that. Let's get into it.

**Vera:** Welcome to Rogue Agents. I'm Vera.

**Neuro:** And I'm Neuro. Today we've got a packed show, all sourced from this week's editions of The AI Enterprise at theaienterprise.io.

**Vera:** Story one: The Great AI Realignment. Why OpenAI's dominance might be a mirage, and why the real winners are the companies controlling infrastructure, not chatbot portals.

**Neuro:** Story two: Microsoft goes rogue. They just dropped three in-house AI models built by teams of fewer than ten engineers. Mark Hinkle joins us to break down what this means for the Microsoft-OpenAI relationship and for every enterprise CIO making vendor decisions right now.

**Vera:** Story three: The Karpathy Loop. Andrej Karpathy's autoresearch script racked up 42,000 GitHub stars in a single week. We'll explain why this matters for every knowledge worker, not just ML engineers.

**Neuro:** Plus Quick Hits on Anthropic's leaked Mythos model, a record-shattering 297 billion dollar funding quarter, AI-driven job cuts hitting a new high, and OpenAI quietly killing the Sora video API.

**Vera:** Let's start with the big one. The Great AI Realignment. This comes from the Friday AI Deep Dive on The AI Enterprise, and the thesis is provocative.

**Neuro:** Provocative is an understatement. You're about to compare the most valuable AI company in history to a dot-com relic.

**Vera:** I am. And here's the framework. OpenAI is Yahoo. Anthropic is early Google. And Alphabet, funded by the most profitable advertising machine in history, is playing the role of 1990s Microsoft.

**Neuro:** Okay, I need you to slow down and unpack that, because on the surface it sounds absurd. OpenAI just raised 122 billion dollars. They have the most recognizable AI brand on the planet. ChatGPT is basically a verb at this point.

**Vera:** And Yahoo was basically a verb in 1999. People didn't say 'search the web.' They said 'Yahoo it.' Brand recognition and market dominance are not the same thing.

**Mark:** Can I jump in here? Because I lived this. My first tech job, I used to get calls from people asking how to get that Yahoo on their computer. That's how dominant they were. Yahoo was the internet for millions of people.

**Neuro:** Wait, people literally called you to ask how to get Yahoo?

**Mark:** *(laughs)* All the time. It was the homepage. It was email. It was news. It was everything. And here's the kicker: Yahoo still exists today. You know what it's best known for now? Yahoo Finance. That's the one thing that kept me hooked all these years, tracking my eventually almost worthless millions in stock options.

**Vera:** *(laughs)* Almost worthless millions. That's the most honest thing anyone has ever said about dot-com stock options.

**Mark:** But that's the point, right? Yahoo went from being the entire internet to being a stock ticker. They had everything. The brand, the users, the revenue. And they still lost. Because they were a portal, not a platform.

**Neuro:** And you're saying OpenAI is a portal, not a platform.

**Mark:** I'm saying the parallels are uncomfortable. ChatGPT is a portal to AI. But the companies building the infrastructure underneath it, the ones controlling the compute, the model serving, the developer tooling, those are the platforms. And platforms always win.

**Neuro:** Fair, but Yahoo didn't have the technology advantage. OpenAI arguably does.

**Vera:** Does it though? Let's look at the numbers. OpenAI hit 13.1 billion in revenue in 2025, but they burned through 9 billion in cash to get there. That's a structurally challenging consumer-heavy business model.

**Neuro:** Nine billion in cash burn. That's almost 70 cents of every dollar going up in smoke.

**Vera:** Exactly. The unit economics are brutal. Every time someone sends a prompt, OpenAI is paying for compute. And the more popular they get, the more they bleed. This is the same trap Yahoo fell into. Massive consumer adoption, massive infrastructure costs, no clear path to sustainable margins.

**Neuro:** But they just raised 122 billion. They have runway for years. They can afford to bleed while they build out the enterprise business.

**Vera:** Runway is not a business model. Yahoo had runway too. Pets dot com had runway. The question isn't whether OpenAI can survive the next three years. It's whether they can build a durable, profitable enterprise before the cash runs out and the market shifts.

**Neuro:** Okay, walk me through the rest of the analogy. If OpenAI is Yahoo, who's Google in this framework?

**Vera:** Anthropic. Think about it. Google in the early 2000s was a pure-play search engine that nobody outside of tech circles took seriously. They didn't have the consumer brand. They didn't have the portal. They didn't have the partnerships.

**Neuro:** But they had the better technology.

**Vera:** Exactly. They had PageRank. They had a cleaner, more scalable architecture. And they had a business model that actually worked. Anthropic is doing the same thing. They're building what many researchers consider the most capable models. They're focused on safety and enterprise reliability. And they're not trying to be everything to everyone.

**Neuro:** And Alphabet is Microsoft in this analogy because of the cash cow?

**Vera:** Yes. Alphabet has a 402 billion dollar revenue engine driven by Search advertising. That cash cow is functioning exactly like Microsoft's legacy Windows and Office business did in the 90s. It's funding a massive 175 billion dollar capital expenditure pivot into AI infrastructure.

**Neuro:** So they don't need AI to be profitable today.

**Vera:** They can afford to play the long game. They can subsidize AI development with Search revenue for a decade if they need to. OpenAI doesn't have that luxury. Every dollar they spend on AI has to come from investors or from consumer subscriptions that barely cover the compute costs.

**Neuro:** That's actually a compelling framework. But here's my pushback. Yahoo failed because Google built a better product. Is Anthropic actually building a better product than OpenAI? Because from where I sit, ChatGPT is still the default for almost everyone I know.

**Vera:** Default doesn't mean best. And the enterprise market is where this gets decided, not the consumer market. Enterprise buyers care about reliability, safety, and integration depth. They don't care about which chatbot their teenagers are using to write essays.

**Neuro:** That's a strong claim. You're saying the consumer market is irrelevant?

**Vera:** Not irrelevant, but misleading. Consumer adoption creates brand awareness, but it doesn't create enterprise lock-in. The enterprise sale is won on different metrics entirely. And on those metrics, specifically reliability, safety certifications, and deep integration capabilities, Anthropic is gaining ground fast.

**Neuro:** Okay, I'll grant you that. But there's another angle here that I think is even more important than the business model comparison. The desktop apps.

**Vera:** Yes. This is the part of the analysis that really caught my attention. The recent push by AI providers to launch native desktop applications is not about convenience. It's a strategic land grab for the operating system layer.

**Neuro:** Wait, really? I just thought Claude Desktop was a nicer way to use Claude without keeping a browser tab open.

**Vera:** No. It's a Trojan horse. And I mean that in the most strategic sense possible. By moving out of the browser and onto the local machine, these AI companies are executing a brilliant maneuver.

**Neuro:** What kind of maneuver?

**Vera:** First, they're tapping into your laptop's local GPU to handle lighter inference tasks. That drastically reduces their own server-side compute costs. Every prompt that runs locally is a prompt they don't have to pay for in the cloud.

**Neuro:** So they're offloading the compute costs onto the user's hardware.

**Vera:** Partially, yes. But the bigger play is about data access and workflow integration. When you're in a browser, the AI can only see what you paste into the chat window. It's a sandboxed, limited interaction.

**Neuro:** But on the desktop, it can see everything.

**Vera:** It can read your screen. It can interact with your local files. It can observe your workflows across applications. It can integrate with your calendar, your email, your file system. The browser was always a leaky, sandboxed container. The native desktop is the real estate that matters.

**Neuro:** So the battle isn't for the best chatbot. It's for the AI operating system.

**Vera:** Exactly. And the uncomfortable truth for enterprise leaders is that the company with the most recognizable consumer brand is not necessarily the one building the most durable business. The company you may have dismissed as a niche safety lab might be the one that actually matters in the long run.

**Neuro:** Okay, so if I'm a CIO listening to this right now, what's my takeaway? What do I actually do differently on Monday morning?

**Vera:** Three things. First, stop equating market share with market durability. The biggest brand today is not guaranteed to be the biggest platform tomorrow.

**Neuro:** Makes sense. What's number two?

**Vera:** Second, watch who controls the infrastructure layer, not who controls the chatbot portal. The companies building the compute backbone, the model serving infrastructure, and the developer tooling are the ones building durable moats.

**Neuro:** And three?

**Vera:** Third, pay very close attention to which AI companies are embedding themselves into your operating system. That's where the real lock-in happens. If an AI agent has access to your file system, your screen, and your local GPU, switching costs go through the roof.

**Neuro:** Don't get distracted by the consumer noise. Watch the infrastructure. That's a strong framework. And it ties directly into our next story, because the company that just made the biggest move on the infrastructure layer is Microsoft.

**Vera:** Deep Dive number two. Microsoft Goes Rogue. This week, Microsoft launched three foundational AI models built entirely in-house. MAI-Transcribe-1 for speech-to-text, MAI-Voice-1 for voice generation, and MAI-Image-2 for image creation. And we're bringing in Mark Hinkle for this one.

**Mark:** Hey everyone. I'm Mark Hinkle, the founding publisher of the Artificially Intelligent Enterprise. And what we're seeing here is Microsoft quietly executing one of the most significant strategic pivots in recent tech history.

**Neuro:** Quietly is the key word there. This didn't get nearly the attention it deserved.

**Mark:** It didn't. These are the first releases from Mustafa Suleyman's superintelligence team, which was formed just six months ago with the explicit goal of pursuing what they're calling AI self-sufficiency.

**Vera:** AI self-sufficiency. That's a loaded phrase when you're in a multi-billion dollar partnership with OpenAI.

**Mark:** It is. And the backstory matters. In October 2025, Microsoft quietly renegotiated their contract with OpenAI. Previously, they were prohibited from independently pursuing artificial general intelligence. Now, they're free to build their own frontier models.

**Neuro:** Wait, they were prohibited from building their own models? That's a wild clause to have in a contract.

**Mark:** It was. And it tells you how much leverage OpenAI had in the early days of the partnership. But the power dynamic has shifted. Microsoft renegotiated, and now they retain license rights to OpenAI's technology through 2032 while being free to build their own competing models.

**Vera:** So they kept the insurance policy but started building their own house.

**Mark:** *(laughs)* That's a great way to put it. And the economics behind these models are what really caught my attention. The transcription model, MAI-Transcribe-1, achieves best-in-class accuracy across 25 languages.

**Neuro:** 25 languages. That's impressive. But what's the cost story?

**Mark:** Half the GPUs of its competitors. Half. That's a massive cost advantage that compounds over time. Every inference call is cheaper, every deployment is more efficient.

**Vera:** And the team size is what really stands out. How many engineers built these models, Mark?

**Mark:** Fewer than 10. Per model. Think about that for a second. The prevailing industry narrative is that frontier AI development requires thousands of researchers and billions in headcount costs. Microsoft just proved you can do it lean.

**Neuro:** Fewer than 10 engineers. That completely changes the math on what's possible for enterprise AI teams.

**Mark:** It does. And it reminds me of something I've seen before. Back in the 90s, everyone assumed you needed a massive corporate lab to build an operating system. You needed the resources of a Microsoft or an IBM.

**Vera:** And then Linux happened.

**Mark:** Exactly. Linus Torvalds built Linux with a distributed team of volunteers. The entire industry narrative about what was required to build critical infrastructure got rewritten overnight. I think we're seeing the same thing happen with AI model development right now.

**Neuro:** But let's call this what it is. Isn't Microsoft basically backstabbing OpenAI? They invested 13 billion dollars in the partnership, and now they're building competing models.

**Mark:** *(laughs)* I wouldn't call it backstabbing. I'd call it smart business. Microsoft is a three trillion dollar company. They can't afford to have their entire AI strategy dependent on a single partner.

**Vera:** Especially a partner that's burning 9 billion dollars a year in cash.

**Mark:** Exactly. While Suleyman insists the OpenAI partnership remains intact, the subtext is clear: Microsoft is building the capability to stand entirely on its own. And honestly, any responsible board would demand the same thing.

**Neuro:** So what does this mean for enterprise buyers? If I'm a CIO right now, how does this change my calculus?

**Mark:** The build versus buy equation just shifted dramatically. You're no longer locked into a world where you have to pick one AI vendor and pray they survive. Microsoft is demonstrating that you can build capable, specialized models with small teams.

**Vera:** Which means your own organization might be able to do the same thing for your specific use cases.

**Mark:** Exactly. The era of AI vendor dependency is ending. The companies that win will be the ones that build internal AI capabilities, not the ones that outsource everything to a single provider. And Microsoft just showed everyone how to do it with a team that fits in a conference room.

**Neuro:** That's a powerful takeaway. Thanks, Mark.

**Vera:** And it connects to something we've been tracking on the show: the Model Context Protocol. MCP just crossed 97 million installs, with universal provider support from OpenAI, Google, xAI, Mistral, and Cohere. Over 4,000 published servers.

**Neuro:** Which means the plumbing is getting standardized. You can swap providers without rebuilding your entire stack.

**Vera:** Exactly. The infrastructure layer is commoditizing, which makes Microsoft's move even more significant. They're not just building models. They're building optionality. And in a market that's moving this fast, optionality is everything.

**Neuro:** Alright, Deep Dive number three. And this one is my favorite because it's the most actionable. Andrej Karpathy, the former Tesla AI director and OpenAI co-founder who coined the term vibe coding, just dropped a 630-line open source script called autoresearch.

**Vera:** 42,000 GitHub stars in its first week. Fortune is calling it The Karpathy Loop.

**Neuro:** 42,000 stars. That's not just popular, that's a movement. But I have to be honest, when I first saw the headline, I almost skipped it. Another ML research tool. Not my thing.

**Vera:** And that's exactly the mistake most business professionals made. They saw 'autoresearch' and assumed it was only for machine learning researchers. It's not.

**Neuro:** Okay, convince me. I'm running a marketing team. I'm not training neural networks. Why should I care?

**Vera:** Because the core insight is deceptively simple and universally applicable. Almost all knowledge work improvement follows the same pattern: modify something, measure it, keep or discard, repeat.

**Neuro:** That's just the scientific method.

**Vera:** It is. But humans do it manually and slowly. We make one change, wait for results, analyze, make another change. An AI agent can do it continuously and at scale. That's the insight.

**Neuro:** Okay, so how does the loop actually work? Break it down for me.

**Vera:** The loop runs on three primitives. If all three are present, the loop runs. If any one is missing, it doesn't. First primitive: an editable asset. That's one file the agent is permitted to modify.

**Neuro:** In Karpathy's original version, that's a Python training script, right?

**Vera:** Right. But in a business context, it could be a system prompt, an email template, a landing page, a Claude skill instruction file, or even a sales pitch deck.

**Neuro:** Makes sense. What's the second primitive?

**Vera:** A scalar metric. One number that determines whether the change was an improvement. And this is critical: it has to be computed automatically, without human judgment in the loop.

**Neuro:** So for ML, that's validation loss. What would it be for my email templates?

**Vera:** Open rate. Click-through rate. Reply rate. For landing pages, it's conversion rate. For AI-generated content, you can use a scoring rubric pass rate, where another AI model evaluates the output against your criteria.

**Neuro:** That's clever. Using one AI to judge another AI's output.

**Vera:** It's more than clever, it's practical. And the third primitive is a time-boxed cycle. Each experiment runs for the same fixed duration. Karpathy uses five minutes per training run, which yields roughly 12 experiments per hour.

**Neuro:** So overnight, you could run 80 to 100 experiments.

**Vera:** Exactly. You go to sleep with version one of your system prompt. You wake up with version 87, and each iteration was measured, scored, and only kept if it improved the metric.
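The three primitives Vera describes reduce to a surprisingly small loop. Here is a minimal sketch in Python, with hypothetical helper names (`propose_change`, `score`); it illustrates the pattern, not Karpathy's actual autoresearch script:

```python
import shutil
import time

def karpathy_loop(asset_path, propose_change, score,
                  cycle_seconds=300, budget_seconds=8 * 3600):
    """Minimal sketch of the loop: mutate one editable asset,
    compute one scalar metric, keep the change only if it improves."""
    best = score(asset_path)                 # baseline metric for the current asset
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        shutil.copy(asset_path, asset_path + ".bak")  # snapshot before mutating
        started = time.monotonic()
        propose_change(asset_path)           # the agent edits the one permitted file
        candidate = score(asset_path)        # scalar metric, no human judgment
        if candidate > best:
            best = candidate                 # keep the improvement
        else:
            shutil.copy(asset_path + ".bak", asset_path)  # discard: restore snapshot
        # time-boxed cycle: wait out the rest of the fixed experiment window
        time.sleep(max(0.0, cycle_seconds - (time.monotonic() - started)))
    return best
```

With a five-minute cycle and an eight-hour budget, this is the "version 1 at bedtime, version 87 at breakfast" workflow: every experiment either raises the metric or is rolled back from the snapshot.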

**Neuro:** That's elegant. But does it actually work outside of a research lab? Like, in the real world, with messy business problems?

**Vera:** Ask Tobi Lutke. The CEO of Shopify pointed autoresearch at Liquid, the template engine that powers every Shopify store. This is a codebase Lutke originally wrote himself 20 years ago.

**Neuro:** His own code. The code he's most familiar with in the world.

**Vera:** Right. And the agent generated 93 automated commits and achieved 53 percent faster rendering.

**Neuro:** 53 percent faster? On a 20-year-old codebase that the CEO himself wrote? That's incredible. And honestly, a little humbling.

**Vera:** It is humbling. It means there are optimizations sitting in every codebase, every system prompt, every workflow that humans simply can't find because we don't have the patience to run 93 experiments in sequence.

**Neuro:** Okay, I'm sold on the concept. Now walk me through the practical application. How would I actually set this up for something like system prompt optimization?

**Vera:** Four steps. Step one: identify the system prompt or skill file you want to improve. Choose one that produces inconsistent outputs or requires manual editing more than 30 percent of the time.

**Neuro:** So pick the prompt that's causing you the most pain.

**Vera:** Exactly. Step two: define three to six binary evaluation criteria. Each must be answerable with a yes or no. Does the output include a concrete next step? Is the response under 200 words? Does it avoid corporate jargon? Does it cite a specific data point?

**Neuro:** *(laughs)* We should run that jargon check on our own scripts.

**Vera:** *(sarcastically)* We already do. Step three: set up the loop. Point the agent at the file, define the metric as the pass rate across your criteria, set the cycle time to something reasonable. Five minutes per iteration is a good starting point.

**Neuro:** And step four?

**Vera:** Review the results in the morning. The agent will have run dozens of experiments while you slept. You pick the best-performing version, review the changes it made, and deploy it. The whole process requires no GPU. Just an API key and a clear metric.
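The binary criteria from step two and the pass-rate metric from step three can be sketched in a few lines. The checks below are illustrative toy implementations of the examples mentioned in the episode, not a vetted rubric:

```python
def pass_rate(output: str, criteria) -> float:
    """The scalar metric: fraction of binary yes/no criteria the output satisfies."""
    return sum(1 for check in criteria if check(output)) / len(criteria)

# Illustrative binary criteria mirroring the episode's examples.
criteria = [
    lambda out: "next step:" in out.lower(),   # includes a concrete next step
    lambda out: len(out.split()) < 200,        # response is under 200 words
    lambda out: "synergy" not in out.lower(),  # avoids corporate jargon (toy check)
]
```

In practice you would average `pass_rate` over a fixed batch of test inputs, so a single lucky output can't win a cycle on its own.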

**Neuro:** So the shift isn't about working faster. It's about working differently. You stop doing and start directing.

**Vera:** Exactly. And this connects to the broader theme from this week's AI Advantage newsletter on The AI Enterprise. The best operators are shifting from sequential work to parallel work with agents.

**Neuro:** The parallel work piece is huge. Tell me more about that.

**Vera:** Anthropic's Claude Cowork agent makes this real. It was recently added to Windows, paired with 11 new open source plugins for sales, marketing, and finance. Cowork acts less like a chatbot and more like a team of analysts that you can dispatch simultaneously.

**Neuro:** So instead of doing one task, then the next, then the next, you're directing multiple agents working in parallel.

**Vera:** Right. The article gave a real-world example. Someone co-produced a 4,000-person conference and used Claude Cowork to create a custom website with a speaker directory, slides index, thumbnails, and crosslinks. Done in 30 minutes under a 200 dollar per month Claude Max subscription.

**Neuro:** 30 minutes for a full conference website. That would have taken a team of developers a week.

**Vera:** At least. And the key insight from the article is this: it's not about finishing a task faster. It's about changing how you start. You begin by framing the problem, defining the success criteria, and then dispatching agents to execute in parallel while you move on to the next strategic decision.

**Neuro:** So the takeaway is: if you can define a metric, you can automate the improvement loop. And if you can automate the improvement loop, you can run experiments at machine speed while you sleep.

**Vera:** That's exactly right. And the organizations that figure this out first will have a compounding advantage that gets harder to catch every single day. Every night of automated optimization is another night your competitors are standing still.

**Mark:** I have to jump back in here because this one is personal. I've been working on adapting the Karpathy Loop for iterating on software. Not ML models, actual production software. The idea is the same: define the asset, define the metric, let the agent run experiments.

**Neuro:** Wait, you're actually building this? How far along are you?

**Mark:** It's still a work in progress, I'll be honest. The challenge with software is that the metrics are harder to define than in ML. Validation loss is clean and objective. But how do you score whether a code refactor actually improved the system? You need to think about performance benchmarks, test pass rates, code complexity scores.

**Vera:** That's a real engineering challenge. The scalar metric has to be computed automatically without human judgment. For software, that's not trivial.

**Mark:** It's not. But the early results are promising. I've been running it against some of our internal tools, letting the agent make small changes, running the test suite, measuring execution time and memory usage, and keeping the improvements. It's not as clean as Karpathy's original loop yet, but the direction is clear.

**Neuro:** So you're basically doing what Tobi Lutke did with Liquid, but trying to generalize it into a repeatable process for any codebase.

**Mark:** Exactly. And I think that's where this is headed for the entire industry. Every engineering team will eventually have an automated improvement loop running against their codebase. The ones that figure it out first will ship better software faster. It's the same compounding advantage Vera was talking about.
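The signals Mark lists for software, test results and execution time, still have to collapse into the single scalar the loop requires. The gating-and-speedup scheme below is one illustrative way to do that, not his actual setup:

```python
def software_score(tests_passed: int, tests_total: int,
                   baseline_ms: float, candidate_ms: float) -> float:
    """Collapse code-quality signals into one number: correctness is a
    hard gate, then faster-than-baseline runs score higher."""
    if tests_total == 0 or tests_passed < tests_total:
        return 0.0                      # any failing test disqualifies the change
    return baseline_ms / candidate_ms   # >1.0 means the refactor got faster
```

Treating correctness as a gate rather than a weighted term keeps the agent from "buying" speed with broken behavior, which is the messiness Mark is pointing at: unlike validation loss, no single continuous number captures software quality on its own.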

**Vera:** And that's what makes this story so important. It's not just a research curiosity. Real practitioners are already adapting it for production use cases. The gap between theory and practice is closing fast.

**Neuro:** Alright, Quick Hits. Four stories, rapid fire. Let's go.

**Vera:** Quick Hit number one: Anthropic's Mythos model leak. An accidental data exposure from an unsecured public cache revealed that Anthropic is testing a new model called Claude Mythos.

**Neuro:** An unsecured public cache. From the safety-first AI company. The irony is thick.

**Vera:** It is. The company described Mythos as a step change in performance, but noted it poses unprecedented cybersecurity risks. They're doing a cautious early-access rollout focused on cyber defenders first.

**Neuro:** A step change. That's a loaded phrase from a company that's usually very measured in their language. If Anthropic is saying step change, we should pay attention.

**Vera:** Agreed. And it highlights the growing tension between capability advancements and security. The better the model, the more dangerous the leak. When your model can write sophisticated malware or crack encryption, an accidental exposure isn't just embarrassing. It's a national security event.

**Neuro:** Quick Hit number two: Q1 2026 venture funding absolutely shattered records. Global startup funding hit 297 billion dollars. That's a 2.5x increase over the previous quarter.

**Vera:** And the concentration is staggering. Four deals accounted for the vast majority. OpenAI raised 122 billion, valuing the company at 852 billion. Anthropic raised 30 billion. xAI raised 20 billion. And Waymo raised 16 billion.

**Neuro:** So four companies absorbed almost 200 billion of the 297 billion total. That's not a rising tide lifting all boats.

**Vera:** It's a handful of companies absorbing the vast majority of available capital. If you're a mid-stage AI startup trying to raise right now, the market looks very different from the top-line number. The capital is there, but it's flowing to the incumbents.

**Neuro:** Winner-take-most dynamics in full effect. Quick Hit number three: AI job cuts hit record levels in March.

**Vera:** U.S. employers announced 60,620 job cuts in March. Of those, 15,341, that's 25 percent, cited Artificial Intelligence as the primary reason. The technology sector was hit hardest, particularly in coding functions.

**Neuro:** 25 percent of all layoffs are now AI-driven. That's a number that should make every knowledge worker sit up and pay attention.

**Vera:** And the fact that coding roles are being hit hardest is particularly significant. These are the people who were supposed to be building the AI systems. It's a feedback loop. AI gets better at coding, which reduces the need for human coders, which frees up budget to invest in more AI.

**Neuro:** The question is whether we're creating a sustainable ecosystem or eating our own seed corn.

**Vera:** That's exactly the right question. And we'll come back to it in the close.

**Neuro:** Quick Hit number four: OpenAI quietly discontinued the Sora public API. They cited the unsustainable economics of high-fidelity video generation at scale.

**Vera:** This is significant because it's one of the first major admissions from a frontier lab that not every AI capability can be profitably commercialized. The cost per generated minute was deemed economically irreconcilable with viable pricing.

**Neuro:** Economically irreconcilable. That's a fancy way of saying they couldn't make the math work.

**Vera:** Video generation is computationally expensive, and the willingness to pay just isn't there yet. Enterprise budgets are shifting toward more cost-effective alternatives. It's a reminder that just because you can build something doesn't mean you can sell it.

**Neuro:** So even OpenAI has limits. The economics of AI aren't infinite. Some capabilities are just too expensive to scale at current prices.

**Vera:** And that ties right back to our opening thesis about the Yahoo comparison. If you can't make the unit economics work, all the brand recognition in the world won't save you.

**Neuro:** *(sighs)* That's a sobering thought.

**Vera:** And it connects to a broader pattern we're seeing. Alibaba just pivoted to closed-source with their Qwen 3.6 Plus model. Third proprietary model in a row. A million token context window and improved agentic coding capabilities.

**Neuro:** Wait, Alibaba is going closed-source? That's a significant shift. They were one of the biggest proponents of open-weight models.

**Vera:** They were. But the economics of open source AI are brutal. You can't monetize a model that anyone can download and run for free. Alibaba is following the same path as every other AI lab: build in the open to attract talent and community, then close the gates once you have something worth selling.

**Neuro:** So the open source AI movement might be hitting a wall.

**Vera:** Not a wall, but a reality check. Open weights will continue to exist for smaller, specialized models. But the frontier models, the ones that cost hundreds of millions to train, those are going behind paywalls. The economics demand it.

**Neuro:** Which brings us full circle. Everything this week points to the same conclusion: the economics of AI are the story. Not the benchmarks. Not the brand names. The economics.

**Vera:** Alright, let's bring it home. This was a massive week. The through-line connecting all of these stories is a single question: who actually controls the future of enterprise AI?

**Neuro:** And the answer isn't who you think. It's not the company with the biggest consumer brand. It's not the company with the most funding. It's the companies that control the infrastructure layer and the ones that are investing in the human pipeline to operate it.

**Vera:** That's the part that doesn't get enough attention. We covered the Deep Dive on investing in 18-year-olds this week, and the data is sobering.

**Neuro:** Give us the numbers.

**Vera:** Entry-level job postings in the US have plummeted by 35 percent in the last 18 months. Employment for workers aged 22 to 25 in AI-exposed roles has dropped 13 percent. And 77 percent of youth between 17 and 24 cannot qualify for military service, which is a broader indicator of societal unreadiness.

**Neuro:** We're cutting the bottom rungs off the ladder while expecting people to climb it.

**Vera:** Exactly. And Igor Jablokov, the founder of Yap, whose technology became the foundation for Amazon's Alexa, put it perfectly at the All Things AI conference this week.

**Neuro:** What did he say?

**Vera:** He framed it in a way I haven't been able to shake. He said the health of a nation-state can be meaningfully measured by the vitality of its 18-year-old cohort. At the moment young adults enter full civic life, they either replenish or strain every major system the state depends on.

**Neuro:** That's a profound framing. It's not just about jobs. It's about the entire social contract.

**Vera:** Exactly. Combine that with rising institutional distrust among young people, declining civic knowledge, deteriorating mental health, and military recruitment shortfalls, and these aren't niche concerns. They're indicators that we may be failing to reproduce the beliefs and capabilities that hold the system together.

**Neuro:** So we're not just talking about workforce development. We're talking about whether the next generation is equipped to participate in a functioning society.

**Vera:** The health and readiness of our 18-year-olds are the leading indicators of the health of a nation-state. While we're obsessing over which model has the best benchmark scores, we're fundamentally neglecting the human pipeline required to govern, build, and work alongside these systems.

**Neuro:** That hit hard. The organizations that will win the AI era aren't just deploying the best models. They're deliberately rebuilding the entry-level on-ramp to cultivate a generation of AI-native talent.

**Vera:** And that's the real signal this week. Not the funding numbers. Not the model benchmarks. The question of whether we're building a future that includes the next generation, or one that leaves them behind.

**Neuro:** And I think that connects to the Karpathy Loop story too. The tools are getting more powerful. The automation is getting more capable. But someone still has to define the metrics. Someone still has to frame the problems. Someone still has to direct the agents.

**Vera:** Right. The future isn't humans versus machines. It's humans directing machines. And if we don't prepare the next generation to be the directors, we end up with incredibly powerful tools and nobody qualified to point them in the right direction.

**Neuro:** The future should work for everyone. Not just the machines.

**Vera:** Well said. And on that note, let's wrap it up.

**Neuro:** We need to invest in 18-year-olds, not just GPUs. If you want the full Deep Dives on every story we covered today, head to rogue agents dot io. Subscribe, share it with your team, and let us know what you think.

**Vera:** That's the signal. Everything else is noise.

**Neuro:** See you next week.

**Announcer:** The reporting is human. The delivery is AI. For more insights, subscribe to the A I Enterprise newsletter at the A I Enterprise dot io. Join the community at all things A I dot org. And keep learning at A I E dot net.

*Outro music plays*

---

*The reporting is human. The delivery is AI.*

**Subscribe:** [rogueagents.io](https://rogueagents.io) | **Read:** [theaienterprise.io](https://theaienterprise.io) | **Learn:** [aie.net](https://aie.net) | **Gather:** [allthingsai.org](https://allthingsai.org)