China’s DeepSeek has claimed its flagship AI system, known as R1, was trained for just $294,000, which is a fraction of the sums believed to be spent by US competitors.
The details were published in a peer-reviewed paper in Nature this week, a disclosure likely to fuel further debate over Beijing's ambitions in the global artificial intelligence race. The Hangzhou-based company said the reasoning-focused model was trained using 512 Nvidia H800 chips, hardware designed specifically for China after the US prohibited the sale of the more powerful H100 and A100 processors.
The paper, which was co-authored by founder Liang Wenfeng, marks the first time the firm has disclosed such costs.
In January, the release of DeepSeek's cheaper AI tools destabilized global markets, triggering a sell-off in tech shares on fears that its low-cost models could undercut established giants such as Nvidia and OpenAI.
Since then, however, Liang and his team have kept a low profile, surfacing only for sporadic product updates.
The reported $294,000 price tag stands in stark contrast to estimates from American firms.
OpenAI chief executive Sam Altman said in 2023: "Training foundational models cost much more than $100 million." He did not, however, provide a specific breakdown.
Training large language models involves running banks of powerful chips for extended periods, consuming enormous amounts of electricity while processing text and code. Industry observers have long assumed the bill for such projects runs into the tens or even hundreds of millions.
That assumption is now being challenged. In a supplementary document, DeepSeek acknowledged that it owns A100 chips and used them in early development before moving full-scale training to its H800 cluster. According to the firm, the model ran for 80 hours during its final training stage.
Nvidia has insisted that the Chinese startup has access only to its H800 processors, but American officials remain sceptical. Earlier this year, US sources told Reuters that DeepSeek had acquired large volumes of H100 chips in spite of the export ban on sales to China.
R1 has drawn attention not only for its low training costs but also because it may be the first major model to undergo formal peer review.
“This is a very welcome precedent, and if we don’t have this norm of sharing, it becomes very hard to evaluate risks,” said Lewis Tunstall, a machine-learning engineer at Hugging Face who reviewed the Nature paper.
The review process prompted DeepSeek to clarify technical details, including how its model was trained and what safeguards were in place.
“Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” said Huan Sun, an AI researcher at Ohio State University.
According to the paper, DeepSeek's key breakthrough was a pure reinforcement learning approach. Instead of relying on human-curated reasoning examples, the model was rewarded for solving problems correctly and gradually developed its own problem-solving strategies.
The firm says this trial-and-error system allowed R1 to verify its workings without copying human tactics.
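In concrete terms, the idea is that the model is judged only on whether its final answer checks out, with no human demonstration to imitate. The toy Python sketch below illustrates that outcome-reward loop: a policy chooses between two made-up strategies for an arithmetic task, earns a reward only when the answer is verifiably correct, and shifts its preferences accordingly. This is a minimal illustration of the principle, not DeepSeek's actual training pipeline, and every name in it is hypothetical.

```python
import math
import random

# Toy illustration of outcome-based reinforcement learning: the policy
# is only told whether its final answer is correct, and is never shown
# a human-written reasoning example. (Hypothetical strategies, not R1.)
def strategy_add(a, b):
    return a + b        # the correct approach

def strategy_wrong(a, b):
    return a - b        # a deliberately bad approach

STRATEGIES = [strategy_add, strategy_wrong]
prefs = [0.0, 0.0]      # learnable preferences over strategies
LR = 0.5                # learning rate

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

for step in range(200):
    a, b = random.randint(0, 9), random.randint(0, 9)
    probs = softmax(prefs)
    idx = random.choices(range(len(STRATEGIES)), weights=probs)[0]
    answer = STRATEGIES[idx](a, b)
    reward = 1.0 if answer == a + b else 0.0  # verifiable outcome reward
    # REINFORCE-style update: raise the probability of choices that
    # earned a reward, lower the others proportionally.
    for i in range(len(prefs)):
        indicator = 1.0 if i == idx else 0.0
        prefs[i] += LR * reward * (indicator - probs[i])

print("final strategy probabilities:", softmax(prefs))
```

Run over enough trials, the correct strategy's probability approaches 1 purely from trial and error, which is the essence of the approach the paper describes, albeit at a vastly smaller scale.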
“This model has been quite influential,” Sun added. “Almost all reinforcement learning work in 2025 may have been inspired by R1 one way or another.”
Soon after R1’s release, speculation swirled that DeepSeek had leaned on rival outputs, particularly from OpenAI, to accelerate training; however, the company has now flatly denied that charge.
In correspondence with referees, DeepSeek insisted that R1 did not copy reasoning examples generated by OpenAI. Like most large language models, however, it was trained on internet text, meaning some AI-produced content was inevitably included. That explanation has convinced some reviewers.
"I cannot be 100% sure R1 was not trained on OpenAI examples. However, replication attempts by other labs suggest reinforcement learning is good enough on its own," Tunstall said.
DeepSeek says R1 is built to excel at reasoning-heavy tasks such as coding and mathematics. Unlike most closed systems developed by US firms, it was released as an open-weight model, freely downloadable by researchers. On the AI community site Hugging Face, it has already been downloaded more than 10 million times.
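For readers who want to try it, the weights can be pulled directly from Hugging Face with the standard transformers library. The sketch below uses one of the smaller distilled R1 variants published under the deepseek-ai organisation, since the full model is far too large for ordinary hardware; the repository ID shown matches the public listing, but treat it as an assumption to verify before running.

```python
# A minimal sketch of loading an open-weight R1 variant, assuming the
# Hugging Face repository ID below is as publicly listed. The distilled
# 1.5B-parameter model is used here because the full R1 model is far
# too large to run on a single consumer machine.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Reasoning-heavy prompt of the kind DeepSeek says R1 is built for.
prompt = "What is 17 * 24? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```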
The firm spent around $6 million developing the base model that R1 is built upon, but even with that added, its costs fall well short of the sums associated with rivals. For many in the field, that makes R1 attractive.
Sun and colleagues recently tested the system on scientific data tasks and found it was not the most accurate, but among the best in terms of cost-to-performance.