TradingKey - Months after withdrawing its previous listing application, AI chipmaker Cerebras Systems has reapplied for an IPO, filing on Friday (Eastern Time) to list on the Nasdaq under the ticker symbol "CBRS".
According to Bloomberg, Cerebras had already filed for an IPO confidentially as early as March this year, with the offering expected to raise approximately $2 billion.
Cerebras is a Sunnyvale, California-based AI chipmaker and data center operator currently viewed as one of Nvidia's (NVDA) most formidable challengers in AI computing. Despite its small size, the company competes with Nvidia through differentiation.
On the technology front, Cerebras' core competitive strategy is to challenge Nvidia with ultra-large chips. Unlike Nvidia's approach of dicing wafers into hundreds of GPUs, Cerebras keeps an entire 12-inch wafer intact to produce a single giant chip. This wafer-scale chip uses an on-chip memory architecture, integrating processor cores and massive memory capacity on the same wafer to all but eliminate off-chip data-transfer latency.
Cerebras' flagship product, the WSE-3 chip, has an area of 46,225 square millimeters (larger than a dinner plate) and integrates 4 trillion transistors (50 times as many as Nvidia's H100) and 900,000 AI-optimized cores. Cerebras claims the WSE-3 runs faster and at lower cost than GPUs, emphasizing the high-speed performance of its massive processor, especially in responding to end-user queries.
This giant chip can be used not only to run AI models at higher speed and lower latency but also to train them. Complex tasks that once required hundreds of GPUs can run on just a few WSE-3 chips with less performance loss.
Nvidia's response suggests that Cerebras' differentiated approach may be having a real impact. Last year, Nvidia signed a $20 billion licensing and talent-acquisition agreement with Cerebras rival Groq. Like Cerebras' integration of processors and memory on a single wafer, Groq's chips also use an SRAM-based memory architecture that emphasizes near-zero-latency reads.
Financially, Cerebras' profitability has improved significantly in recent years. According to Friday's filing, Cerebras reported a net profit of $87.9 million for 2025 on revenue of $510 million. Revenue grew nearly 76% from 2024, a year in which the company recorded a net loss of $485 million. This marked improvement in profitability should enhance its appeal to investors once it goes public.
Cerebras stated that as of December 31 last year, its remaining performance obligations stood at $24.6 billion, with 15% of that amount expected to be recognized in 2026 and 2027, providing visibility into future revenue.
In February this year, Cerebras announced the completion of its $1 billion Series H funding round, which brought its post-money valuation to approximately $23 billion.
In January this year, Cerebras announced plans to provide OpenAI with up to 750 megawatts of computing power by 2028 in a deal valued at over $20 billion. In exchange, OpenAI will receive Cerebras warrants and can increase its ownership stake as its spending grows. If total spending reaches $30 billion within three years, OpenAI could ultimately acquire an equity stake of approximately 10%.
In January this year, as part of that broader agreement, Cerebras also secured a $1 billion loan from OpenAI at a 6% annual interest rate to build data center infrastructure and provide services. Cerebras said the alliance with OpenAI is expected to generate significant revenue in the coming years.
Furthermore, Cerebras has also partnered with Amazon (AMZN). According to CNBC, Cerebras signed an agreement with Amazon in March granting it the right to purchase approximately $270 million worth of Cerebras Class N (non-voting) shares. Amazon's AWS will deploy Cerebras chips alongside its proprietary Trainium chips to offer higher-speed inference computing services.
Cerebras' IPO will give it a chance to demonstrate to the market that chips like the WSE-3, which use wafer-scale architecture to break traditional size constraints and incorporate on-chip memory, hold a cost advantage in large-model inference.
If Cerebras ultimately manages to provide tokens at scale for a fraction of the cost of Nvidia's services, it could force Nvidia to accelerate price cuts on the inference side or compel it to launch new products featuring similar technologies ahead of schedule, potentially altering its GPU roadmap.
Furthermore, should Cerebras maintain long-term financial health, it could lead the market to recognize that AI is not synonymous with GPUs and that dedicated inference chips have significant market potential, boosting other non-GPU chip stocks such as ASIC leaders Broadcom (AVGO) and Marvell (MRVL), as well as Arm (ARM), with its distinctive architecture.