AVGO or AMD: Can These AI Chip Challengers Outrun Nvidia by 2030?

Source: TradingKey

TradingKey - While the AI revolution has crowned Nvidia (NVDA) the undisputed leader in training hardware, there are now signs of a shift in the ecosystem.

The company's GPUs have captured more than 90% of the overall GPU market, thanks in large part to the combination of powerful chips and the sophisticated CUDA software stack. This tight hardware-software coupling underpins the operation of almost every AI model and creates a strong lock-in effect in Nvidia's favor.

However, the workload mix is shifting from training to inference, which is the second and, for most deployed models, the most expensive phase of operating an AI model. As it does, factors such as cost and efficiency (as opposed to just peak throughput) will matter increasingly to AI businesses and technology users alike.

Given that inference is expected to become the largest segment of the AI market over the next five years, competition will intensify among companies that can supply a broad range of AI hardware and technology solutions.

Although Nvidia will remain the dominant market leader, its sheer size may leave room for smaller AI leaders to exceed its revenue growth rate in 2030 and beyond.

Why Inference Reshapes the AI Hardware Race

The difference between training and inference has a major impact on what it costs to run a model at hyperscale: training is a one-time (if compute-intensive) event, while inference runs continuously in production.

When deciding how to deploy a model that will process every query from users or downstream applications, operators care above all about total cost of ownership, energy consumption, and latency.

As budgets shift toward serving models at scale, the range of viable chips will continue to broaden to include parts custom-designed to do one thing efficiently, at lower power per token produced.
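To make the per-token economics concrete, here is a minimal back-of-the-envelope sketch. All figures (hardware prices, power draw, throughput, electricity cost) are invented for illustration and are not vendor data; the point is only that hardware cost and energy cost both amortize over tokens served, so a cheaper, lower-power chip can win on cost per token even with lower peak throughput.

```python
# Hypothetical illustration of why per-token economics dominate inference
# hardware decisions. Every number below is made up for the example.

def cost_per_million_tokens(hw_cost, lifetime_years, power_kw, util,
                            tokens_per_sec, electricity_per_kwh=0.10):
    """Rough total-cost-of-ownership per million tokens served."""
    active_seconds = lifetime_years * 365 * 24 * 3600 * util
    tokens_served = tokens_per_sec * active_seconds
    energy_cost = power_kw * (active_seconds / 3600) * electricity_per_kwh
    return (hw_cost + energy_cost) / tokens_served * 1e6

# A general-purpose GPU: higher peak throughput, but pricier and power-hungrier.
gpu = cost_per_million_tokens(hw_cost=30_000, lifetime_years=4,
                              power_kw=0.7, util=0.6, tokens_per_sec=900)

# A task-specific ASIC: lower throughput, but cheaper per chip and per watt.
asic = cost_per_million_tokens(hw_cost=12_000, lifetime_years=4,
                               power_kw=0.3, util=0.6, tokens_per_sec=600)

print(f"GPU : ${gpu:.3f} per 1M tokens")
print(f"ASIC: ${asic:.3f} per 1M tokens")
```

Under these invented inputs the ASIC serves tokens more cheaply despite being the slower chip, which is the trade-off the article describes hyperscalers optimizing for.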

Broadcom’s ASIC Advantage in Scaling AI Solutions

Broadcom (AVGO) is an important, lesser-known contributor to the AI industry: it designs application-specific integrated circuits (ASICs).

These special-purpose chips are less versatile than general-purpose GPUs, but when configured for a single task, such as inference, they can run significantly faster and more power-efficiently.

For hyperscalers such as Amazon and Google, which are intensely focused on efficiency, every millisecond and every watt translates into money.

Broadcom has already proven its ASIC credentials by co-developing tensor processing units (TPUs) for Alphabet, which run many of the AI workloads on Google Cloud, for both training and inference.

These successes attracted the interest of two other companies, Meta Platforms and ByteDance, creating what Broadcom estimates to be a combined market opportunity across the three customers of $60 billion to $90 billion in fiscal 2027. If the company captures a substantial share of that opportunity, AVGO is likely to outperform Nvidia stock over the next several years.

The partnership announced by OpenAI and Broadcom in October 2025 took those plans to another level. The two companies are developing a custom AI accelerator designed to give OpenAI and its customers additional performance and, ultimately, more capable models.

By bringing chip design and system design under one roof inside OpenAI, the latest model advances can be embedded directly in the hardware to produce even higher-performing AI. The companies signed a long-term agreement covering development and production as well as deployments of racks containing multiple AI accelerators integrated with Broadcom's Ethernet networking technology, built to absorb global growth in demand for high-performance AI.

Broadcom is also making strides in the wireless infrastructure needed to carry the explosion of data generated by AI. In February 2026, the company introduced its BroadPeak RF Digital Front End System-on-Chip, built on an advanced 5-nanometer CMOS process.

The SoC combines a best-in-class DFE with integrated ADC/DAC, cutting power consumption by up to 40% compared with traditional massive MIMO and remote radio head (RRH) solutions. Its operating frequency range of 400 MHz to 8.5 GHz also makes it the first product in the industry to meet the compliance requirements of the forthcoming 5G-Advanced and 6G standards, addressing both massive MIMO and RRH deployments.

This matters because massive MIMO is one of the primary enablers of 5G, enhancing coverage, capacity, and throughput. As mobile data from AI-driven applications keeps growing, the chip can help operators and device makers build next-generation, high-capacity networks for more personalized, AI-heavy user experiences, aligning with the anticipated 5G-A bands of 6.425-7.125 GHz and 6G mid-bands of 7-8.5 GHz.

As demand for high-volume, low-latency networks grows alongside ever-larger data centers, Broadcom's deep expertise in custom accelerators and its long-standing relationships with leading telecom equipment makers give AVGO a differentiated advantage in the inference era.

AMD's Strategy for Gaining Market Share Through Efficiency

AMD (AMD) has long been the second-largest producer of graphics processing units (GPUs), behind only Nvidia, and the growth and versatility of inference has opened a significant opportunity for the company to gain market share.

Inference is a viable target because inference jobs run continuously, so the efficiency of each execution, and how much energy it consumes, often matters more than maximum peak performance. That is exactly where AMD has directed its efforts.

AMD's ROCm software stack has undergone major improvements over the past few years and now closes much of the gap with CUDA across a wide range of workloads and data-set sizes. Many businesses have chosen ROCm 7, the latest version, to run their production workloads, because lower cost and better energy efficiency improve the profitability of the inference workloads they operate.

One major AI company currently runs more than 60% of its processing workloads on AMD GPUs, and 7 of the top 10 companies in the AI sector use at least one AMD product in their production environments.

Given the size of AMD's revenue relative to Nvidia's, even a small gain of share in the inference market would translate into significant revenue growth for AMD over the next five years.

AMD is also working to build an open ecosystem that gives customers more choice and flexibility by reducing vendor lock-in. Beyond its own products, AMD has partnered with Broadcom and others to form the UALink Consortium, which is developing an open alternative to Nvidia's proprietary NVLink interconnect. If UALink becomes a standard, it will strengthen AMD's position in the data center market.

In 2025, AMD generated record annual revenue of $34.6 billion with gross margins of 50%. That financial strength, together with its growing inference business and an improving software stack, supports the case that AMD stock will outperform if the company continues to execute as planned.

Is Broadcom or AMD a Long-term Buy?

Broadcom's growth prospects keep expanding as hyperscale cloud providers continue buying custom application-specific integrated circuits (ASICs); today, Alphabet accounts for a significant share of Broadcom's ASIC revenue. The OpenAI agreement, with deployments expected to ramp through Q4 2027, provides another multi-year growth opportunity. Risks remain, however: Alphabet is reportedly partnering with MediaTek to develop a variant of its Tensor Processing Unit (TPU).

Alphabet is unlikely to switch to that new silicon for at least two years, but investors should monitor the uncertainty closely. Broadcom trades at roughly 39x forward earnings, which is expensive, yet the stock still has explosive revenue growth ahead. With its ASIC designs and infrastructure already in place, Broadcom can plausibly add roughly $5 billion to $10 billion in annual revenue for three or more years.

Given these risks, the current pullback still appears to be an attractive entry point. Goldman Sachs rates Broadcom a Buy with a $450 price target.

AMD's biggest challenge is its valuation: its price-to-earnings and enterprise value-to-EBITDA (EV/EBITDA) multiples demand near-flawless execution of a very aggressive, long-term growth strategy over the next several years.

The principal risk for AMD is not its ability to service its debt, as its balance sheet is relatively strong, but rather high volatility and the potential for a valuation reset if growth slows or the broader market continues to wobble.

Nevertheless, AMD is a high-quality competitor in a strategically important space, trading below its recent peak primarily on sentiment. Long-term investors can take comfort in a sound revenue growth trend, improving profitability, and a well-capitalized balance sheet as foundations for long-term performance.

Investors who can tolerate higher risk over a longer holding period may find today's technical softness an opportune time to begin building a position in AMD stock.

Disclaimer: For information purposes only. Past performance is not indicative of future results.