[00:00] Nina Park: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models shaping our world.
[00:11] Nina Park: I am Nina Park. Welcome to Model Behavior. We examine how AI systems are built, deployed, and operated in professional environments. Joining us today is Chad Thompson, a security leader with a systems-level perspective on enterprise risk and operational resilience.
[00:31] Nina Park: Chad, it is great to have you.
[00:33] Chad Thompson: Thanks for having me, Nina. It is a critical time to be discussing these specific failures at Anthropic.
[00:40] Chad Thompson: The reporting from Udkarsh Agrawal is significant. We are looking at two separate incidents in a single week involving the lab that positions itself as the most responsible player in the industry.
[00:55] Nina Park: The first occurred on March 26th. A researcher found 3,000 internal files, including details on Claude Mythos, codenamed Papybara. Chad, what does this tell us about their internal controls?
[01:11] Chad Thompson: It indicates a misconfigured data store with no authentication required. For a company focused on safety, exposing a model that reportedly outperforms Opus 4.6 in cyber capabilities is a major oversight.
[01:27] Nina Park: The documents suggest Mythos is already finished with training and is in a staged rollout. Does this accidental exposure compromise that safety-first release strategy?
[01:39] Chad Thompson: Absolutely, Nina. When secrets like Mythos are exposed, the competitive advantage and the safety guardrails are both compromised before the public rollout can even begin.
[01:53] Nina Park: And five days after that initial exposure, they leaked 512,000 lines of source code for Claude Code. Chad, this suggests more than just a one-off error.
[02:04] Chad Thompson: It suggests a systemic breakdown. If they cannot secure their own internal repositories, it raises questions about the operational resilience of the systems they provide to enterprise clients.
[02:16] Announcer: Thank you for listening to Model Behavior. You can find more at mb.neuralnewscast.com.
[02:25] Announcer: Neural Newscast is AI-assisted, human-reviewed. View our AI transparency policy at neuralnewscast.com.
[02:35] Nina Park: This has been Model Behavior on Neural Newscast, examining the systems behind the story.