Anthropic and OpenAI tighten security as AI models show advanced hacking ability

Source Cryptopolitan

Artificial intelligence companies Anthropic and OpenAI are taking serious steps to address the growing risks associated with their products. Sam Altman's firm has released models exclusively for vetted experts to help defend vulnerable systems, while Anthropic now requires ID verification before users can access certain functions.

When AI models were initially released to the public, they were used to turn text into Ghibli-style art and write shopping lists, but artificial intelligence has quickly become a national security concern. 

Why is Anthropic asking for my driver’s license?

Hackers are already using AI to bypass defense systems, forcing Anthropic to roll out a mandatory identity verification process. Users now need a physical government ID (passport or driver’s license) and a live selfie to use specific functions.

Anthropic's verification partner, Persona, handles the data. Anthropic has said it will not use users' identity data to train its AI models, and that verification is necessary to "prevent abuse, enforce our usage policies, and comply with legal obligations."

If a user fails verification or tries to use the system from an unsupported location, their account can be banned.

The sudden crackdown follows Anthropic's admission that its new model, Claude Mythos Preview, is terrifyingly good at hacking.

In a blog post released alongside the verification news, the company stated that Mythos Preview is “capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so.”

Anthropic engineers with no formal security training asked Mythos to find remote code execution vulnerabilities overnight. According to the company, they "woke up the following morning to a complete, working exploit."

Are the new AI models actually dangerous?

The UK’s AI Security Institute (AISI) published an evaluation confirming that Mythos represents a “step up” in cyber capabilities.

Anthropic’s internal blog post provides the most alarming details about the model’s capabilities. Mythos, after receiving the initial prompt, found a 27-year-old bug in OpenBSD, an operating system known for being secure. 

Mythos also found a 16-year-old bug in FFmpeg, a video tool used by almost every major service. The tool has been tested with millions of random inputs in a technique called fuzzing, yet Mythos found a vulnerability in the H.264 codec that dates back to a 2003 commit.
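To make the fuzzing reference concrete: the technique means throwing large volumes of random or mutated inputs at a parser and watching for crashes. The sketch below is a minimal, illustrative harness against a deliberately buggy toy parser (both the parser and its bug are invented for this example, and real fuzzers like AFL or libFuzzer are far more sophisticated).

```python
import random

def parse_record(data: bytes) -> int:
    """Toy length-prefixed parser with deliberate bugs (illustration only)."""
    length = data[0]            # crashes (IndexError) on empty input
    payload = data[1:1 + length]
    return payload[length - 1]  # crashes when payload is shorter than claimed

def fuzz(parser, trials=10_000, max_len=16, seed=1):
    """Feed random byte strings to a parser and record unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            parser(data)
        except ValueError:
            pass                 # a clean, intentional rejection is fine
        except Exception as exc: # anything else is a potential vulnerability
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
```

Even a harness this crude finds the toy bug within seconds; the point of the FFmpeg anecdote is that some bugs survive years of exactly this kind of random testing because no random input happens to hit the vulnerable path.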

Beyond that, Mythos found a 17-year-old vulnerability in FreeBSD’s NFS server and wrote an exploit that allows any unauthenticated user on the internet to gain full root access to the server. 

The company confirmed that Mythos Preview “fully autonomously identified and then exploited this vulnerability.” The entire process cost under $2,000 at API pricing and took less than a day.

Mythos found vulnerabilities in every major web browser. In one case, it wrote a browser exploit that chained together four vulnerabilities, including a JIT heap spray, to escape both the browser’s renderer sandbox and the operating system’s sandbox. 

Anthropic has found “thousands of additional high- and critical-severity vulnerabilities” across open source and closed source software. Over 99% of these bugs have not yet been patched. 

OpenAI’s approach to security risks 

Against this backdrop, OpenAI has announced the release of GPT-5.4-Cyber, which, unlike standard models that refuse to help with hacking for safety reasons, "lowers the refusal boundary for legitimate cybersecurity work."

GPT-5.4-Cyber can analyze compiled software without access to the source code to detect malware and vulnerabilities, but access is limited to OpenAI’s “Trusted Access for Cyber” (TAC) program. Only vetted cybersecurity experts, researchers, and organizations defending critical systems can use it.
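Analyzing compiled software without its source code typically starts with static triage: extracting human-readable strings from the binary and checking them against known-suspicious indicators. The sketch below shows that general first step; it is not OpenAI's tooling, and the indicator list is a hypothetical stand-in for the curated rule sets (e.g. YARA) real analysts use.

```python
import re

def extract_strings(blob: bytes, min_len: int = 6):
    """Pull printable-ASCII runs out of a compiled binary, a common
    first step when no source code is available."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

# Hypothetical indicators for illustration; real triage uses curated rules.
SUSPICIOUS = ("/bin/sh", "cmd.exe", "LoadLibraryA", "http://")

def triage(blob: bytes):
    """Flag extracted strings that match known-suspicious indicators."""
    return [s for s in extract_strings(blob)
            if any(marker in s for marker in SUSPICIOUS)]
```

String triage only surfaces leads; confirming actual malware or a vulnerability requires disassembly and dynamic analysis, which is the harder work such models are claimed to assist with.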

Anthropic’s Project Glasswing also gives limited access to defenders at companies like Amazon ($AMZN), Apple ($AAPL), and Google ($GOOGL) to fix critical infrastructure before attackers can exploit it. 

In the meantime, Anthropic suggests installing security updates immediately, rather than on a monthly schedule.
