OpenAI says no user data exposed after TanStack npm supply chain attack hit employee devices

Source: Cryptopolitan

OpenAI has admitted that two employee devices were compromised through malicious versions of TanStack npm packages.

The company insists, however, that it found no evidence that user data, production systems, or intellectual property were tampered with.

Was OpenAI hacked?

OpenAI has confirmed that malicious actors breached two of its employee devices as part of a massive software supply chain campaign called “Mini Shai-Hulud.”

OpenAI previously deployed controls to limit supply chain attack exposure after an incident with Axios, but the two affected employee devices had not yet received the updated configurations that would have blocked the malicious package download.

The attack targeted TanStack, an open-source library used by millions of developers. The attackers published 84 malicious versions across 42 npm packages, including the popular @tanstack/react-router, which is downloaded over 12 million times weekly.
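For teams assessing their own exposure, a minimal sketch of a lockfile audit is shown below. The helper name and the watchlist contents are illustrative assumptions, not an authoritative list of affected packages or versions; it simply flags any watchlisted package name found in an npm v2/v3 `package-lock.json`.

```python
import json

# Illustrative watchlist of package names to check for -- NOT an
# authoritative list of compromised packages or versions.
WATCHLIST = {"@tanstack/react-router"}

def flag_locked_packages(lock_text: str, watchlist=WATCHLIST):
    """Return (name, version) pairs from an npm v2/v3 package-lock.json
    whose package name appears on the watchlist."""
    lock = json.loads(lock_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Lockfile keys look like "node_modules/@tanstack/react-router";
        # the package name is everything after the last "node_modules/".
        name = path.rsplit("node_modules/", 1)[-1]
        if name in watchlist:
            hits.append((name, meta.get("version")))
    return hits
```

A real audit would also compare the pinned versions against a published advisory rather than matching on names alone.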

A researcher at security firm StepSecurity detected the malicious packages within roughly 20 minutes of publication and notified npm's security team directly.

This attack exploited the trust users have in automated build systems. The malicious code was published using TanStack’s own legitimate publishing keys, making it look like an official update.

Mini Shai-Hulud is self-replicating malware that steals credentials such as GitHub tokens, cloud keys, and SSH keys once a developer or CI/CD system installs it. The malware then attempts to republish itself to other packages the victim maintains.
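Worms of this kind typically execute through npm lifecycle scripts (such as `postinstall`) that run automatically during installation. One common hardening step, independent of this specific incident, is disabling lifecycle scripts by default via a project-level `.npmrc`:

```ini
; Project-level .npmrc: do not run lifecycle scripts
; (preinstall, install, postinstall) during `npm install`.
ignore-scripts=true
```

The trade-off is that legitimate packages relying on install-time build steps must then be rebuilt or approved explicitly.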

Security researchers report that the campaign has compromised packages across the npm and PyPI ecosystems. Beyond OpenAI and TanStack, the attack has affected code belonging to Mistral AI, UiPath (NYSE: PATH), OpenSearch and Guardrails AI.

Researchers note that the payload installs a persistent daemon that acts as a “dead-man’s switch.” If a victim revokes a stolen GitHub token, the malware can trigger a command to wipe the user’s home directory.

Was OpenAI’s user data compromised? 

Following the attack, OpenAI enlisted a third-party forensics firm to assist with the investigation. The company said it found no evidence that its user data was accessed or that its production systems, intellectual property or software were compromised.

However, the attackers still managed to extract some credential material from internal code repositories accessible to those devices, including code-signing certificates for OpenAI's macOS apps.

Mac users must now update the ChatGPT Desktop, Codex, and Atlas apps by June 12, 2026, or older builds will be blocked by macOS security protections.

OpenAI said it has found no evidence of malicious software signed with its certificates and no unauthorized modifications to published applications.

The company noted that new notarization with the old certificates has already been blocked, meaning any fraudulent app attempting to use them would lack Apple’s notarization and be stopped by macOS security protections by default.
