Amazon taps Cerebras wafer-scale chips to turbocharge AI models on AWS

Source: Cryptopolitan

Amazon Web Services said Friday it will put processors from Cerebras inside its data centers under a multiyear partnership focused on AI inference.

The deal gives Amazon a new way to speed up how AI models answer prompts, write code, and handle live user requests. AWS said it will use Cerebras technology, including the Wafer-Scale Engine, for inference tasks.

The companies did not disclose financial terms. The system is planned for Amazon Bedrock inside AWS data centers, placing the partnership at the core of one of Amazon’s main AI products.

AWS said the system will combine Amazon Trainium-powered servers, Cerebras CS-3 systems, and Amazon’s Elastic Fabric Adapter networking.

Later this year, AWS also plans to offer leading open-source large language models and Amazon Nova on Cerebras hardware. David Brown, vice president of Compute and ML Services at AWS, said speed is still a major problem in AI inference, especially for real-time coding help and interactive apps.

Brown said, “Inference is where AI delivers real value to customers, but speed remains a critical bottleneck for demanding workloads like real-time coding assistance and interactive applications.”

Amazon splits prefill and decode across separate chips

AWS said the design uses a method called inference disaggregation, which splits AI inference into two stages: prompt processing, known as prefill, and output generation, known as decode.

AWS said the two jobs behave very differently. Prefill is parallel, compute-heavy, and needs only moderate memory bandwidth. Decode is serial, lighter on compute, and far more dependent on memory bandwidth. Decode also takes most of the wall-clock time on typical requests because every output token has to be produced one at a time.
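To see why the two stages stress hardware so differently, consider a minimal Python sketch of autoregressive attention. This is an illustration only, not AWS’s or Cerebras’s implementation; the single-head attention, random weights, and dimensions are invented for clarity. Prefill is one batched pass over the whole prompt, while decode is a loop that re-reads a growing key-value cache for every single output token.

```python
# Toy sketch of prefill vs. decode (illustrative only, not AWS's design).
import numpy as np

D = 64  # toy model width, chosen arbitrarily

rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention of one query against the cached keys/values.
    scores = K @ q / np.sqrt(D)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def prefill(prompt_embs):
    # Prefill: one batched, compute-heavy pass over every prompt position,
    # materializing the KV cache in parallel.
    return prompt_embs @ Wk, prompt_embs @ Wv

def decode_step(x, K, V):
    # Decode: one token at a time. Each step re-reads the entire KV cache,
    # so throughput is bound by memory bandwidth rather than raw compute.
    out = attend(x @ Wq, K, V)
    K = np.vstack([K, (x @ Wk)[None, :]])
    V = np.vstack([V, (x @ Wv)[None, :]])
    return out, K, V

prompt = rng.standard_normal((128, D))  # 128 prompt tokens
K, V = prefill(prompt)                  # parallel over all 128 positions
x = prompt[-1]
for _ in range(16):                     # serial: 16 strictly sequential steps
    x, K, V = decode_step(x, K, V)
```

The decode loop cannot be parallelized across output tokens because each step depends on the previous one, which is why memory bandwidth, not FLOPs, sets the pace there.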

That is why AWS is assigning different hardware to each stage. Trainium will handle prefill. Cerebras CS-3 will handle decode.

AWS said low-latency, high-bandwidth EFA networking will connect both sides so the system can work as one service while each processor focuses on a separate task.
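The shape of that handoff can be sketched in a few lines, reusing the toy prefill and decode_step functions above. This is a hypothetical sketch, not AWS’s architecture: the in-process queue and threads stand in for the prefill tier, the decode tier, and the EFA link between them, and every name here is invented.

```python
# Hypothetical disaggregated handoff (builds on the toy sketch above).
import queue
import threading

handoff = queue.Queue()  # stands in for the low-latency interconnect (EFA)

def prefill_worker(prompt):
    # Compute-heavy tier (Trainium in AWS's design): build the KV cache...
    K, V = prefill(prompt)
    # ...then ship it, plus the last token, to the decode tier.
    handoff.put((prompt[-1], K, V))

def decode_worker(n_tokens):
    # Bandwidth-bound tier (CS-3 in AWS's design): receive the cache and
    # generate output tokens one at a time.
    x, K, V = handoff.get()
    for _ in range(n_tokens):
        x, K, V = decode_step(x, K, V)
    return x

t = threading.Thread(target=prefill_worker,
                     args=(rng.standard_normal((128, D)),))
t.start()
decode_worker(16)
t.join()
```

In a real deployment the transfer cost of the KV cache is the catch, which is why AWS emphasizes the low-latency, high-bandwidth link between the two tiers.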

Brown said, “What we’re building with Cerebras solves that: by splitting the inference workload across Trainium and CS-3, and connecting them with Amazon’s Elastic Fabric Adapter, each system does what it’s best at. The result will be inference that’s an order of magnitude faster and higher performance than what’s available today.”

AWS also said the service will run on the AWS Nitro System, which is the base layer for its cloud infrastructure.

That means Cerebras CS-3 systems and Trainium-powered instances are expected to operate with the same security, isolation, and consistency that AWS customers already use.

Amazon pushes Trainium harder as Nvidia faces another threat

The announcement also gives Amazon another opening to push Trainium against chips from Nvidia, AMD, and other big chip companies. AWS describes Trainium as its in-house AI chip built for scalable performance and cost efficiency across training and inference.

AWS said two major AI labs are already committed to Trainium. Anthropic has named AWS its primary training partner and uses Trainium to train and deploy models. OpenAI will consume 2 gigawatts of Trainium capacity through AWS infrastructure for Stateful Runtime Environment, frontier models, and other advanced workloads.

AWS added that Trainium3 has seen strong adoption since its recent release, with customers across industries committing major capacity.

Cerebras is handling the decode side of the setup. AWS said the CS-3 is dedicated to accelerating decode, which frees it to generate output tokens quickly. Cerebras says the CS-3 is the world’s fastest AI inference system and delivers thousands of times more memory bandwidth than the fastest GPU.

The company said reasoning models now make up a larger share of inference work and generate more tokens per request as they work through problems. Cerebras also said OpenAI, Cognition, Mistral, and others use its systems for demanding workloads, especially agentic coding.

Andrew Feldman, founder and chief executive of Cerebras Systems, said, “Partnering with AWS to build a disaggregated inference solution will bring the fastest inference to a global customer base.”

Feldman added, “Every enterprise around the world will be able to benefit from blisteringly fast inference within their existing AWS environment.”

The deal adds more pressure on Nvidia, which in December signed a $20 billion licensing agreement with Groq and plans next week to unveil a new inference system using Groq technology.
