Reflection AI on Friday raised $2 billion at an $8 billion valuation, roughly 15x the $545 million valuation it commanded just seven months ago. The round aims to position the firm as both an open-source alternative to closed frontier labs like OpenAI and Anthropic, and a Western counterpart to Chinese AI firms like DeepSeek.
The startup was founded in March 2024 by two former Google DeepMind researchers: Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, who co-created the AlphaGo system. Their track record building AI systems at DeepMind underpins the company’s pitch: that the right AI talent can build frontier models outside established tech companies.
The latest initiative also marks a change in trajectory for Reflection AI, which originally focused on autonomous coding agents and is now positioning itself as an open-source alternative to closed frontier AI labs.
We are bringing the open model frontier back to the US to build a thriving AI ecosystem globally.
Thankful for the support of our investors including NVIDIA, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV, and others. https://t.co/r75YntGnjG
— Misha Laskin (@MishaLaskin) October 9, 2025
Reflection AI announced that it has onboarded top talent from DeepMind and OpenAI to work on the new initiative. The firm said it has developed an advanced AI training stack, which it promises will be open to all, and added that it has identified a scalable commercial model that aligns with its open intelligence strategy.
Reflection AI’s CEO, Misha Laskin, said the team comprises 60 AI researchers and engineers across infrastructure, data training, and algorithm development. He added that the firm has secured a compute cluster and plans to release a frontier language model in 2026 trained on tens of trillions of tokens.
The AI firm stated that it has developed a large-scale LLM and reinforcement learning platform capable of training massive Mixture-of-Experts (MoE) models at frontier scale, a feat it claims was once thought possible only within the world’s top labs. Reflection AI said it saw the effectiveness of its approach first-hand when the team applied it to the critical domain of autonomous coding, and that this milestone now allows it to bring the same methods to general agentic reasoning.
MoE is an architecture that powers many frontier LLMs and that, until recently, only large, closed AI labs could train at scale. DeepSeek was the first to train such models at scale in the open, followed by Qwen, Kimi, and other Chinese model makers.
“DeepSeek and Qwen and all these models are our wake-up call because if we don’t do anything about it, then effectively, the global standard of intelligence will be built by someone else. It won’t be built by America.”
— Misha Laskin, CEO of Reflection AI
Laskin also argued that the status quo puts the U.S. and its allies at a disadvantage, since enterprises and sovereign states avoid using Chinese models over potential legal repercussions. Companies and countries, he added, can either accept that competitive disadvantage or rise to the occasion.
Reflection AI said the capital it raised, together with the commercial model it has identified, ensures the firm can continue building and releasing frontier models sustainably. The company said it is scaling up to build open models that combine large-scale pretraining with advanced reinforcement learning from the ground up.
David Sacks, the White House AI and Crypto Czar, celebrated Reflection AI’s new mission, saying it’s great to see more American open-source AI models. He believes a significant segment of the global market will prefer the cost, customizability, and control that open source offers.
Hugging Face co-founder and CEO Clem Delangue believes the challenge now will be demonstrating a high velocity of sharing open AI models and datasets. Laskin said Reflection AI will release model weights — the core parameters that determine how an AI system works — for public use, while largely keeping datasets and full training pipelines proprietary, noting that only a select handful of companies could actually make use of the infrastructure stack anyway.