Qualcomm stock shot up by 23% on Monday, after the company said it’s launching new AI accelerator chips to take on Nvidia and AMD in the most expensive chip war to date.
The announcement, made on October 27, was the company’s loudest statement yet that it’s entering the data center arms race.
The two new chips (AI200, set for release in 2026, and AI250, coming in 2027) won’t be in smartphones. They’ll be powering entire liquid-cooled racks inside massive AI server farms.
According to CNBC, these new chips are a major leap away from Qualcomm’s usual comfort zone of mobile and wireless devices. Both accelerators will ship in full-rack configurations, like Nvidia’s and AMD’s current systems, which let as many as 72 chips operate as a single computer.
The idea is to give AI labs and hyperscalers the horsepower they need to run massive AI models, without depending on Nvidia’s strained supply chain or settling for AMD’s second-place alternative.
The AI200 and AI250 are built around the same technology found in Qualcomm’s phone chips: its Hexagon neural processing units (NPUs).
Durga Malladi, the company’s general manager for data center and edge, told reporters last week: “We first wanted to prove ourselves in other domains, and once we built our strength over there, it was pretty easy for us to go up a notch into the data center level.”
These racks are built for inference, not training. That means Qualcomm isn’t trying to build chips that help train models like OpenAI’s GPTs, which were trained on Nvidia GPUs.
Instead, the focus is on running those models faster and cheaper once they’re trained. That’s where most real-world workloads actually happen.
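To make that distinction concrete, here’s a minimal PyTorch sketch (illustrative code, not anything from Qualcomm): a training step runs a forward pass, computes gradients, and updates weights, while inference is a forward pass only.

```python
import torch
import torch.nn as nn

# Toy stand-in for a large model: a single linear layer.
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(8, 16)      # a batch of inputs
target = torch.randn(8, 4)  # training labels

# Training step: forward pass, backward pass, weight update.
# This gradient-heavy workload is what runs on Nvidia GPUs today.
optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()

# Inference: forward pass only, no gradients, no weight updates.
# This is the workload the AI200 and AI250 racks target.
with torch.no_grad():
    predictions = model(x)
```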
And there’s money here… real money. McKinsey estimates the world will spend $6.7 trillion on data centers through 2030, with most of that going to AI hardware. Nvidia controls more than 90% of the AI chip market today and is sitting on a market cap of over $4.5 trillion. But customers are getting restless.
OpenAI recently said it’s buying chips from AMD and might even buy a piece of the company. Google, Amazon, and Microsoft are all designing their own AI accelerators. Everyone wants an option that doesn’t involve waiting in line behind a dozen other AI labs just to get a GPU shipment from Nvidia.
Malladi said the racks draw around 160 kilowatts, which matches the power usage of Nvidia racks. But Qualcomm claims its systems are cheaper to run, especially for cloud service providers.
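For a rough sense of what a 160-kilowatt rack costs to keep running, here’s a back-of-the-envelope calculation. The electricity rate is an assumption for illustration, not a figure from Qualcomm or Malladi.

```python
# Back-of-the-envelope annual electricity cost for one 160 kW rack.
rack_power_kw = 160        # reported draw per rack
hours_per_year = 24 * 365  # assume the rack runs continuously
price_per_kwh = 0.08       # assumed industrial rate in USD, illustrative only

annual_kwh = rack_power_kw * hours_per_year  # about 1.4 million kWh
annual_cost = annual_kwh * price_per_kwh     # about $112,000 per rack

print(f"{annual_kwh:,.0f} kWh/year, roughly ${annual_cost:,.0f}/year per rack")
```

At that scale, even small gains in performance per watt translate into real savings for cloud providers, which is exactly the pitch Qualcomm is making.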
The company will also sell parts separately, giving clients the freedom to build custom racks. “What we have tried to do is make sure that our customers are in a position to either take all of it or say, ‘I’m going to mix and match,’” Malladi added.
Even Nvidia and AMD could end up buying parts of Qualcomm’s stack. That includes its central processing units (CPUs), which Malladi said will be available as standalone components. The full pricing for chips, cards, and racks hasn’t been disclosed. Qualcomm didn’t confirm how many NPUs can fit in a rack either.
Earlier this year, Qualcomm signed a deal with Saudi Arabia’s Humain, which plans to install Qualcomm inference chips across data centers drawing up to 200 megawatts of power. That deal made Humain one of the first major customers for the rack-scale systems.
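Putting the two reported figures together, 200 megawatts of deployment and roughly 160 kilowatts per rack, gives a rough upper bound on the deal’s scale (the calculation ignores cooling and facility overhead):

```python
# Rough scale of the Humain deployment, using only the reported figures.
# Ignores cooling and facility overhead, so treat it as an upper bound.
deployment_mw = 200   # reported Humain commitment
rack_power_kw = 160   # reported draw per rack

racks = deployment_mw * 1000 / rack_power_kw
print(f"up to ~{racks:,.0f} racks")  # about 1,250 racks
```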
The company also said its AI cards support 768 gigabytes of memory, more than what Nvidia or AMD currently offer. It also claimed better power efficiency and a lower total cost of ownership, though it didn’t provide exact figures.
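That memory figure matters because, for inference, a model’s weights and its working state both have to fit in accelerator memory. A rough sizing sketch, with the model size and precision chosen purely for illustration:

```python
# Rough check of what fits in 768 GB of accelerator memory.
# The model size and precision are illustrative assumptions,
# not specs for any real model or for Qualcomm's cards.
card_memory_gb = 768

params_billions = 180  # hypothetical large model
bytes_per_param = 1    # 8-bit quantized weights

weights_gb = params_billions * bytes_per_param  # 180 GB of weights
headroom_gb = card_memory_gb - weights_gb       # 588 GB for caches and activations

print(f"weights: {weights_gb} GB, headroom: {headroom_gb} GB")
```

More memory per card means fewer cards per model, which feeds directly into the cost-of-ownership argument.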