Broadcom’s chip division rolled out a new networking processor on Tuesday, designed to turbocharge artificial intelligence workloads that demand tight coordination among hundreds of compute units.
The announcement marks another salvo in Broadcom’s tussle with AI chip-making leader Nvidia. The new device, known as Tomahawk Ultra, acts as a traffic controller, shuttling vast volumes of data between dozens or even hundreds of chips housed within a single server rack.
AI training and inference rely on “scale‑up” computing, where chips are clustered closely together to share data at blistering speeds. Until now, Nvidia’s NVLink Switch has reigned supreme for that task, but Broadcom claims its newcomer can link up to four times more processors in a single network.
Rather than leaning on a proprietary interconnect, the Ultra relies on an enhanced version of Ethernet tuned for low latency and high throughput. Ram Velaga, senior vice president and general manager of Broadcom’s Core Switching Group, told reporters that the chip can manage communications among far more units than Nvidia’s rival product, all while using a broadly supported protocol.
“Tomahawk Ultra is a testament to innovation, involving a multi-year effort by hundreds of engineers who reimagined every aspect of the Ethernet switch,” Velaga said. “This highlights Broadcom’s commitment to invest in advancing Ethernet for high-performance networking and AI scale-up.”
Broadcom has already been supplying chip‑making services to customers like Google, helping the search giant assemble its own AI accelerators as an alternative to Nvidia GPUs.
With Tomahawk Ultra now shipping, the company is hoping to further erode Nvidia’s dominance by offering data center architects a switch that scales to larger clusters at similar, or better, speeds.
The processors will be fabricated by Taiwan Semiconductor Manufacturing Co. using its five‑nanometer node, the same advanced process behind many of the world’s fastest chips. Velaga noted that Broadcom’s engineering teams spent roughly three years in development, originally targeting high‑performance computing markets before pivoting to the booming generative AI sector.
“AI and HPC workloads are converging into tightly coupled accelerator clusters that demand supercomputer-class latency, critical for inference, reliability, and in-network intelligence from the fabric itself,” said Kunjan Sobhani, lead semiconductor analyst at Bloomberg Intelligence. “Demonstrating that open-standards Ethernet can now deliver sub-microsecond switching, lossless transport, and on-chip collectives marks a pivotal step toward meeting the demands of an AI scale-up stack projected to be worth double-digit billions within a few years.”
In traditional scale‑out setups, servers are spread across racks and linked via standard networks, which adds latency. Scale‑up, by contrast, keeps compute elements within a narrow physical footprint, often a single rack, so bits bounce back and forth in microseconds. That kind of speed is vital when training massive neural nets or running real‑time inference.
As AI models grow to trillions of parameters, the race is on to design infrastructure that can keep pace while staying cost-effective and power-efficient. By adopting Ethernet, a well-understood open standard, and pushing its performance envelope, Broadcom hopes to offer data-center operators an easier path to expanding their AI farms without being locked into one vendor’s ecosystem.
With Tomahawk Ultra now in customers’ hands, the contest over who supplies the world’s AI engines is entering a new, more crowded phase, one where openness and scale could tip the balance just as much as raw chip horsepower. The launch also positions Broadcom as an increasingly serious force in the AI industry.
In the first quarter, the company was also reportedly tipped, alongside TSMC, Nvidia, and AMD, as one of the major firms that could take a stake in and operate Intel’s factories, in a deal said to have been proposed by TSMC.