Broadcom (AVGO) Q1 2026 Earnings Call Transcript

Image source: The Motley Fool.

DATE

March 4, 2026

CALL PARTICIPANTS

  • President and Chief Executive Officer — Hock Tan
  • Chief Financial Officer — Kirsten Spears
  • Chief Operating Officer — Charlie Kawwas
  • Head of Investor Relations — Ji Yoo

Need a quote from a Motley Fool analyst? Email pr@fool.com

TAKEAWAYS

  • Total Revenue -- $19.3 billion, up 29%, driven by outperformance in AI semiconductors.
  • Adjusted EBITDA -- $13.1 billion, 68% of revenue; a record level, above guidance.
  • Q2 Revenue Guidance -- $22 billion, representing expected acceleration to 47% growth.
  • Semiconductor Revenue -- $12.5 billion, up 52%, with AI semiconductor revenue at $8.4 billion, up 106%.
  • Q2 AI Semiconductor Guidance -- $10.7 billion, projecting 140% growth, driving anticipated $14.8 billion total semiconductor revenue.
  • AI Networking Revenue -- Up 60%; reached one third of AI revenue, expected to rise to 40% of total AI revenue next quarter.
  • Custom Accelerator Business -- Grew 140%, with continued momentum projected; Meta’s MTIA roadmap specifically cited as "alive and well."
  • Component Supply Secured -- "we have fully secured capacity of these components for 2026 through 2028," per management statement.
  • 2027 AI Revenue Outlook -- Management stated, "Today, in fact, we have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027."
  • Non-AI Semiconductor Revenue -- $4.1 billion, flat with previous year; next quarter guidance at $4.1 billion, up 4%.
  • Infrastructure Software Revenue -- $6.8 billion, up 1%; VMware revenue up 13%, with annual recurring revenue up 19% and total contract value bookings of $9.2 billion.
  • Q2 Infrastructure Software Guidance -- $7.2 billion, up 9%.
  • Gross Margin -- 77% consolidated; Semiconductor Solutions gross margin at approximately 68%, Infrastructure Software gross margin at 93%.
  • Operating Income -- $12.8 billion, up 31%, with operating margin at 66.4%.
  • Free Cash Flow -- $8 billion, 41% of revenue.
  • Shareholder Returns -- $3.1 billion in dividends and $7.8 billion in share repurchases; $10.9 billion returned to shareholders this quarter.
  • Additional Share Repurchase Authorization -- Board approved extra $10 billion through 2026 end.
  • Inventory Positioning -- Inventory ended at $3.0 billion, with days on hand increasing to 68 days, up from 58; attributed to preparation for AI growth.
  • Non-GAAP Tax Rate Guidance -- Projected at 16.5% for Q2 and fiscal year 2026, due to global minimum tax and income mix.

SUMMARY

Management explicitly guided for revenue and profitability acceleration in coming quarters, underpinned by material expansion in AI semiconductor deployments among six major customers. Strategic multi-year supply agreements are enabling Broadcom (NASDAQ:AVGO) to support sharply rising demand, with the company stating it has secured component capacity through 2028. Networking innovations, including the industry-first Tomahawk 6 switch and 200G/400G SerDes, are capturing hyperscaler demand as AI networking grows as a share of the AI business. High-visibility, multi-year custom silicon engagements have management projecting over $100 billion in AI chip revenue in 2027, with volume ramp milestones cited across Google, Anthropic, Meta, and new customer OpenAI. Infrastructure software, highlighted by a 13% VMware revenue increase and 19% annual recurring revenue growth, is described as "not disrupted by AI" and positioned as essential to enterprise and private-cloud generative AI environments. Management reaffirmed gross margin stability despite product mix concerns, and committed to returning capital through a newly expanded buyback authorization.

  • The CFO provided new disclosure that Q2 non-GAAP diluted share count is expected to be approximately 4.94 billion, not accounting for potential further buybacks.
  • Hock Tan said, "Our ability to assure supply in these times of constrained capacity in leading-edge wafers, in high bandwidth memory, and substrates ensures the durability of our partnerships."
  • Management pushed back on recent speculation, stating, "Meta's custom accelerator MTIA roadmap is alive and well. We are shipping now."
  • Visibility into customer roadmaps was characterized as "dramatically improved," with Tan stating they expect OpenAI to be deploying at over 1 gigawatt of compute in 2027.
  • AI infrastructure innovation, with direct attached copper for scale-up and Ethernet for both scale-up and scale-out, was positioned as a critical cost and performance differentiator rather than reliance on emerging optical standards.
  • Revenue guidance includes flat sequential gross margin expectations for Q2, in response to investor concerns regarding lower-margin AI rack shipments.

INDUSTRY GLOSSARY

  • XPU: A general term for custom accelerators, including but not limited to TPUs or AI ASICs, designed for both AI training and inference workloads.
  • Tomahawk Switch: Broadcom’s high-performance Ethernet switch silicon, with "Tomahawk 6" and "Tomahawk 7" representing 100 Tbps and 200 Tbps bandwidth, respectively.
  • SerDes: Serializer/Deserializer — interfaces enabling high-speed data transfers across networking infrastructure.
  • COT (Customer Owned Tooling): A business model in which customers design their own chips, leveraging a semiconductor partner only for manufacturing.
  • LLM (Large Language Model): AI models trained on extensive datasets for tasks involving natural language understanding and generation.
  • VCF (VMware Cloud Foundation): VMware’s cloud platform integrating compute, storage, and networking for private and hybrid cloud environments.
  • MTIA: Meta Training and Inference Accelerator — Meta’s custom AI accelerator initiative.
  • Direct Attached Copper: A technology for connecting devices within data center racks using copper cables, offering cost, latency, and power efficiency versus optical alternatives.
  • CPO (Co-Packaged Optics): An emerging technology integrating optical interfaces directly into semiconductor packages, not yet adopted at volume.

Full Conference Call Transcript

Hock Tan: And thank you everyone for joining us today. In our fiscal Q1 2026, total revenue reached $19.3 billion, up 29% year on year, and exceeding our guidance on the back of better than expected growth in AI semiconductors. This top line strength translated into exceptional profitability with Q1 consolidated adjusted EBITDA hitting a record $13.1 billion, which is 68% of revenue. These figures demonstrate that our scale continues to drive significant operating leverage. Now we expect this momentum to accelerate as our custom AI XPUs hit their next phase of deployment among our five customers. Looking ahead to next quarter, Q2 2026, we are guiding for consolidated revenue of approximately $22 billion, which represents 47% year on year growth.

Let me now give you more color on our semiconductor business. In Q1, revenue was a record $12.5 billion as year on year growth accelerated to 52%. This robust growth was driven by AI semiconductor revenue, which grew 106% year on year to $8.4 billion, way above our outlook. In Q2, this momentum accelerates and we expect semiconductor revenue to be $14.8 billion, up 76% year on year. Driving this is AI revenue growth, which will accelerate very sharply to 140% year on year to $10.7 billion. Now our custom accelerator business grew 140% year on year in Q1. This momentum continues in Q2. The ramp of custom AI accelerators across all our five customers is progressing very well.

For Google, we continue our trajectory of growth in 2026 with strong demand for the seventh generation Ironwood TPU. In 2027 and beyond, we expect to see even stronger demand from next generations of TPU. For Anthropic, we are off to a very good start in 2026 for 1 gigawatt of TPU compute. And for 2027, this demand is expected to surge in excess of 3 gigawatts of compute. Our XPU franchise, I should add, extends beyond TPUs. Now contrary to recent analyst reports, Meta's custom accelerator MTIA roadmap is alive and well. We are shipping now. And in fact, for the next generation XPUs, we will scale to multiple gigawatts in 2027 and beyond.

Rounding off for customers four and five, we see strong shipments this year which we expect to more than double in 2027. We also now have a sixth customer. We expect OpenAI to be deploying in volume their first generation XPU in 2027 at over 1 gigawatt of compute capacity. Let me take a second to emphasize our collaboration with these six customers to develop AI XPUs is deep, strategic, and multiyear. We bring to the partnerships with each of them unmatched technology in service, silicon design, process technology, advanced packaging, and networking, to enable each of these customers to achieve optimal performance for their differentiated LLM workloads.

We have the track record to deliver these XPUs in high volumes at an accelerated time to market with very high yields. And beyond technology, we provide multi-year supply agreements as our customers scale up deployment of their compute infrastructure. Our ability to assure supply in these times of constrained capacity in leading-edge wafers, in high bandwidth memory, and substrates ensures the durability of our partnerships. And we have fully secured capacity of these components for 2026 through 2028. Consistent now with the strong outlook for our XPUs, demand for AI networking is accelerating. Q1 AI networking revenue grew 60% year on year and represented one third of total AI revenue.

In Q2, we project AI networking to accelerate a lot more and grow to 40% of total AI revenue. We are clearly gaining share in networking. Let me explain. In scale out, our first-to-market Tomahawk 6 switch at 100 terabits per second, as well as our 200G SerDes, are capturing demand from hyperscalers this year, whether they use XPUs or GPUs. This lead will extend in 2027 with our next generation Tomahawk 7 featuring double the performance. Meanwhile, in scale up, as cluster sizes at our customers expand, we are uniquely positioned to enable these customers to stay on direct attached copper through our 200G SerDes.

As we next step up to 400G SerDes in 2028, our XPU customers will likely continue to stay on direct attached copper. And this is a huge advantage as the alternative of going to optical is more expensive and requires significantly more power. Reflecting the foregoing factors, our visibility in 2027 has dramatically improved. Today, in fact, we have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027. We have also secured the supply chain required to achieve this. Now, turning to non-AI semiconductors. Q1 revenue of $4.1 billion was flat year on year, in line with guidance.

Enterprise networking, broadband, and server storage revenues were up year on year, offset by seasonal decline in wireless. We forecast non-AI semiconductor revenue in Q2 to be approximately $4.1 billion, up 4% from a year ago. Let me now talk about our infrastructure software segment. Q1 infrastructure software revenue of $6.8 billion was in line with our guidance and up 1% year on year. We forecast infrastructure software revenue for Q2 to be approximately $7.2 billion, up 9% year on year. VMware revenue grew 13% year on year. Bookings continued to be strong, and total contract value booked in Q1 exceeded $9.2 billion, sustaining an annual recurring revenue growth of 19% year on year.

Let me reinforce that this growth in our software business reflects our focus and investments in foundational infrastructure. And our infrastructure software is not disrupted by AI. In fact, VMware Cloud Foundation (VCF) is the essential software layer in data centers integrating CPUs, GPUs, storage, and networking into a common high-performance private cloud environment. As the permanent abstraction layer between AI software and physical silicon, VCF cannot be disintermediated or replaced. It allows enterprises, in fact, to scale complex generative AI workloads effectively, with agility that hardware alone cannot provide. We are confident that the growth in generative and agentic AI will create the need for more VMware, not less.

So in summary, let me put it all together for Q2 2026. We expect consolidated revenue growth to accelerate to 47% year on year and reach approximately $22 billion, and we expect adjusted EBITDA to be approximately 68% of revenue. I will now turn the call over to Kirsten.

Kirsten Spears: Thank you, Hock. Let me now provide additional detail on our Q1 financial performance. Consolidated revenue was a record $19.3 billion for the quarter, up 29% from a year ago. Gross margin was 77% of revenue in the quarter. Consolidated operating expenses were $2.0 billion, of which $1.5 billion was R&D. Q1 operating income was a record $12.8 billion, up 31% from a year ago. Operating margin increased 50 basis points year over year to 66.4% on favorable operating leverage. Adjusted EBITDA of $13.1 billion, or 68% of revenue, was above our guidance of 67%. Now let us go into detail for our two segments. Starting with semiconductors.

Revenue for our Semiconductor Solutions segment was a record $12.5 billion, with growth accelerating to 52% year on year driven by AI. Semiconductor revenue represented 65% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was up 30 basis points year on year to approximately 68%. Operating expenses of $1.1 billion reflected increased investment in R&D for leading-edge AI semiconductors and represented 8% of revenue. Semiconductor operating margin of 60% was up 260 basis points year on year, reflecting strong operating leverage. Now moving on to infrastructure software. Revenue for infrastructure software of $6.8 billion was up 1% year on year, and represented 35% of revenue.

Gross margin for infrastructure software was 93% in the quarter, and operating expenses were $979 million in the quarter. Q1 software operating margin was up 190 basis points year on year to 78%. Moving on to cash flow. Free cash flow in the quarter was $8.0 billion and represented 41% of revenue. We spent $250 million on capital expenditures. We ended the first quarter with inventory of $3.0 billion as we continue to secure components to support strong AI demand. Our days of inventory on hand were 68 days in Q1, compared to 58 days in Q4, in anticipation of accelerating AI semiconductor growth. Turning to capital allocation.

In Q1, we paid stockholders $3.1 billion of cash dividends, based on a quarterly common stock cash dividend of $0.65 per share. During the quarter, we repurchased $7.8 billion, or approximately 23 million shares, of common stock. In total in Q1, we returned $10.9 billion to shareholders through dividends and share repurchases. In Q2, we expect the non-GAAP diluted share count to be approximately 4.94 billion shares, excluding the impact of potential share repurchases. We ended the first quarter with $14.2 billion of cash. Today, we are announcing our Board of Directors has authorized an additional $10.0 billion for our share repurchase program, effective through the end of calendar year 2026. Now moving on to guidance.

Our guidance for Q2 is for consolidated revenue of $22.0 billion, up 47% year on year. We forecast semiconductor revenue of approximately $14.8 billion, up 76% year on year. Within this, we expect Q2 AI semiconductor revenue of $10.7 billion, up approximately 140% year on year. We expect infrastructure software revenue of approximately $7.2 billion, up 9% year on year. For your modeling purposes, we expect consolidated gross margin to be flat sequentially at 77%. We expect Q2 adjusted EBITDA to be approximately 68%.

We expect the non-GAAP tax rate for Q2 and fiscal year 2026 to be approximately 16.5% due to the impact of the global minimum tax and the geographic mix of income compared to that of fiscal year 2025. That concludes my prepared remarks. Operator, please open up the call for questions.
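For modeling purposes, the Q2 guidance components above tie out arithmetically. A minimal Python sketch, using only the figures stated on the call (dollar amounts in billions, growth rates as management rounded them):

# Q2 FY2026 guidance components as stated on the call, in $ billions
ai_semis = 10.7       # AI semiconductor revenue, up ~140% year on year
non_ai_semis = 4.1    # non-AI semiconductor revenue, up ~4% year on year
software = 7.2        # infrastructure software revenue, up ~9% year on year

semis_total = ai_semis + non_ai_semis   # ~14.8, matching the $14.8B semiconductor guide
consolidated = semis_total + software   # ~22.0, matching the $22B consolidated guide
ebitda = 0.68 * consolidated            # adjusted EBITDA guided at ~68% of revenue, ~$15.0B

print(round(semis_total, 1), round(consolidated, 1), round(ebitda, 1))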

Operator: We will now open for questions. To withdraw your question, press 1 again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. Our first question will come from the line of Blayne Peter Curtis with Jefferies. Your line is open.

Blayne Peter Curtis: Hey, good afternoon. Thanks for taking my question. It is just a clarification on the greater than $100 billion. I think you said AI chips. I just want to make sure you are clarifying the difference between the ASICs and networking, and I did not know how racks revenue fits in there. And then the question: I think the biggest overhang on the group here is that your AI grew roughly double in the quarter. I think that is kind of what cloud CapEx is growing this year. I am just kind of curious about your perspective. I think given the outlook that you have for 2027, you should be a share gainer.

I am just kind of curious about your perspective on the pessimism among investors that the hyperscalers need to get a return on investment this year or next year or the year after, and how you factor that into your outlook.

Hock Tan: Well, what we see, what we have seen over the last few months and continue to see even more is—and it is really not so much talking about hyperscalers. Our customers, Blayne, are limited to those few players out there. And some of them are hyperscalers. Some of them are non-hyperscalers. But they all have one thing in common, which is to create LLMs, productize it, and generate platforms, be it for enterprise consumption in code assistance or agentic AI or be it for consumer subscription that we know about.

Whatever it is, it is that few prospects, and many of whom are customers now, who are creating this AI in general, whether it is generative AI, agentic AI, but creating a platform. That is our customer. And with respect to each of those guys, we are seeing far stronger and stronger demand for compute capacity for training, which is something they do need constantly. But what is very, very interesting and surprising to us is very much for inference in order to productize their latest LLMs they create and monetize it.

And that inference is driving a substantial amount of compute capacity, which is great for us because these players, these five, six customers of ours, are on the path to creating their own custom accelerators. And beyond that, they have their own design architecture of networking clusters of those custom accelerators. So I think we are going to see demand keep picking up as we have heard announcements in the past six months.

Now to clarify your first part, Blayne, when I say we forecast, we have a line of sight that our revenue in 2027 will be significantly in excess of $100 billion, I am focusing on the fact that these are pretty much all based on chips, whether they are XPUs, whether they are switch chips, DSPs—these are silicon content we are talking about.

Blayne Peter Curtis: Thanks so much.

Operator: One moment for our next question. That will come from the line of Harlan L. Sur with J.P. Morgan. Your line is open.

Harlan L. Sur: Yeah, good afternoon. Thank you for taking my question, and congratulations to the team on the strong results. Hock, there has been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU, TPU design efforts. We call it COT, or customer owned tooling. This is not a new dynamic with ASICs. I think the Broadcom team has been through the COT competitive dynamic before over the thirty years that you have been a leader in the ASIC industry. And very few of these COT initiatives have ever been successful.

Now on AI, some of these COT initiatives are coming to the market now, but it looks like they are at least 2x less performance than your current generation solutions, 2x less complex in terms of chip design complexity, packaging complexity, IP. So maybe just a quick two-part question. Hock, one for you is: given your visibility into next year, do you see these COT science projects taking any meaningful TPU/XPU share from Broadcom? And then maybe the second quick question, for either you or Charlie, is: given that Broadcom's TPU/XPU programs, from a performance, complexity, IP perspective, are 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?

Hock Tan: Well, that is a great question. And, you know, it fits into that. I purposely took the time in my opening remarks to say that when any of our—any, I guess, hyperscaler or LLM developer—tries to become entirely self-sufficient in creating what you call a customer owned tooling, or COT, model, they face tremendous challenges. One is technology—technology as it relates to creating the silicon chips, particularly XPUs, that they need to do the computing, and that is needed to optimize and run training and inference on the workloads they produce out of the LLM. That technology we talked about comes from different dimensions. You need the best silicon design team around.

You need cutting-edge, really cutting-edge SerDes, very advanced packaging, and just as much, you need to understand how to network clusters of them together. We have been doing this for more than twenty years in silicon and, in this particular space today, in generative AI. If you are trying to, as an LLM player, do your own chip, you cannot afford to have a chip that is just good enough. You need the best chip that is around because you are competing against other LLM players. And most of all, you are also competing against NVIDIA, who is by no means letting down their guard. They are producing better and better chips with every passing generation.

So you have to, as an LLM trying to establish your platform in the world, create chips that are better than—competitive with not just NVIDIA, but all the other LLM platform players that you are competing against. And for that, you really need, our belief—and we see that personally—the best partner in silicon with the best technology, IP, and execution around. And very modestly, I would say we are by far way out there. And we will not see competition in COT for many years to come. It will come eventually, but we are still a long way off because the race, which we see, continues.

And one thing I add in there that is particularly unique to us: when you create the silicon, you really have to get it up and running in high volume in production very quickly—time to market. We are very, very experienced in doing that. Anybody can design a chip in a lab that works well. Can you produce 100,000 of those chips quickly, at yields that you can afford? And we do not see too many players in the world that can do that. Charlie?

Harlan L. Sur: Thank you, Charlie.

Operator: One moment for our next question. That will come from the line of Ross Clark Seymore with Deutsche Bank. Your line is open.

Ross Clark Seymore: Hi, thanks for letting me ask a question. Hock, in your script you leaned a little bit more into the networking differentiation than you have in the past. So I guess kind of a short term and a longer term question. The short term is what is driving that up to 40% of the AI revenues? And the longer term question is, is that percentage mix in that $100 billion plus changing now? What sort of leadership do you expect to maintain in that business, whether it is scale out or scale up? And is your leadership position there helping on your XPU side as you can optimize across both the compute and the networking sides?

Hock Tan: Well, let us address the first part of that fairly complex question first, Ross. Yes. In networking, especially with the new generation of GPUs/XPUs that are coming out there, we are running 200 gigabit SerDes out there in terms of bandwidth. And the Tomahawk 6 that we introduced over six months ago—closer to nine months ago—we are the only one out there. And our customers and the hyperscalers want to run with the best networking and with the most bandwidth on their product clusters. So we are seeing huge demand for this, the only 100-terabit-per-second switch out there. So that is driving a lot of demand.

And couple that with running bandwidth on scaling out optical transceivers at 1.6 terabits, we are again the only player out there doing DSP at 1.6 terabits. That combination is driving, I would say, the growth of our networking components even faster than our XPUs are growing, which is already pretty remarkable. So that is what you are seeing. But at some point, I would think these things will settle down, though we are not slowing down the pace because, as I said, next year in 2027, we will launch next generation Tomahawk 7 at 2x performance and we will probably be by far the first out there, and then we will continue to sustain that momentum.

And at the end of the day, to your question, expect as a composition of our total AI revenue in any quarter that AI networking components will be ranging between probably 33% to 40%.

Ross Clark Seymore: Great. Thanks, Hock.

Hock Tan: Thanks.

Operator: One moment for our next question. That will come from the line of Christopher James Muse with Cantor Fitzgerald. Your line is open.

Christopher James Muse: Yes, good afternoon. Thank you for taking the question. I am curious, how are you thinking about the move to disaggregate prefill and decode from the GPU ecosystem and the impact to custom silicon demand? Are you seeing any potential changes in sort of the relative mix between GPUs and custom silicon?

Hock Tan: I am not sure I fully understand your question, CJ. Could you clarify what you mean by disaggregate?

Christopher James Muse: Sure. You know, pushing off workloads to CPUs for prefill and working off of GPUs for decode, and having that disaggregated world. And does that put any pressure in terms of the demand for custom versus going with a full GPU stack?

Hock Tan: Okay. I get what you mean. That “disaggregation” kind of threw me off. What you are really saying is how is the architecture of AI accelerators—be it GPU or XPU—evolving as workloads start to evolve. And that is what we are seeing very much in particular. The one-size-fits-all with general purpose GPU gets you only that far. It can still keep going on because you can still run different workloads. You run mixture of experts—even though you want to run mixture of experts with sparse cores to be very effective, you hear the term—but a GPU is designed for dense matrix multiplication.

So you do it with software kernels, but it is not as effective as if you hard-coded it in silicon and make those XPUs purposely designed to be much more performing for mixture-of-experts workloads. The same applies for inference. And what that drives down to is you start to see designs of XPUs become much more customized for particular workloads of particular LLM customers of ours. And the design starts to depart from what is the traditional standard GPU design. Which is why, as we always indicated before, XPUs will eventually be more the choice simply because it will allow flexibility in making designs that work with particular workloads—one for training even and one for inference.

And as you say, one perhaps would be better at prefill and one better at post-training or reinforcement learning or test-time scaling. You can tweak your XPUs towards the particular kind of LLM workload that you want. And we are seeing that. We are seeing that roadmap in all our five customers.

Operator: One moment for our next question. That will come from the line of Timothy Michael Arcuri with UBS. Your line is open.

Timothy Michael Arcuri: Thanks a lot. I had just a question on sort of the puts and takes on gross margin as you begin to ship these racks. I mean, obviously, it is going to pull the blended margin down, but wondering if there are any guardrails you can give us on this. Seems like the racks are maybe 45%–50% gross margin. So should we think about that pulling gross margin down like 500 basis points, roughly, as these racks begin to ship? And is there some floor to the gross margin below which you would not be willing to do more racks?

Hock Tan: I hate to tell you that you must be a bit hallucinating. Our gross margin is solidly at the number Kirsten reported. Our gross margin will not be affected by more and more AI products ramping up. We have gotten our yields, we have gotten our cost to the point where the model we have in AI will be fairly consistent with the models we have in the rest of the semiconductor business.

Kirsten Spears: I would agree with that. I think on further study, relative to even comments that I did make last quarter, the impact relative to our overall mix is actually not going to be substantial at all. So I would not worry about it.

Hock Tan: Oh, okay.

Timothy Michael Arcuri: Thank you so much.
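For context on this exchange, the blended-margin arithmetic behind Arcuri's hypothetical looks roughly like the sketch below. The 45%–50% rack margin is his assumption, the 15% rack revenue share is a purely illustrative placeholder, and management's stated position is that the actual mix impact is not substantial, with consolidated gross margin guided flat at 77%.

# Illustrative blended gross margin mix math (hypothetical inputs, not company guidance)
base_margin = 0.77    # consolidated gross margin reported in Q1 and guided for Q2
rack_margin = 0.475   # midpoint of the 45%-50% rack gross margin Arcuri hypothesizes
rack_share = 0.15     # assumed share of revenue from rack shipments (illustrative only)

blended = (1 - rack_share) * base_margin + rack_share * rack_margin
dilution_bps = (base_margin - blended) * 10_000
print(f"blended ~{blended:.1%}, dilution ~{dilution_bps:.0f} bps")
# At a 15% rack mix this would imply roughly 440 bps of dilution, far more than the
# flat sequential margin guidance, which is why management pushes back on the premise.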

Operator: One moment for our next question. That will come from the line of Stacy Aaron Rasgon with Bernstein. Your line is open.

Stacy Aaron Rasgon: Hi, guys. Thanks for taking my question. I do not know if this is for Hock or Kirsten, but I wanted to dig in a little more to this substantially more than $100 billion next year. I am trying to just count up the gigawatts. I counted, I do not know, eight or nine. You have three from Anthropic, one from OpenAI, so that is four. You said Meta was multiple, so at least two. That gets you to six. Google, I figured, should be bigger than Meta, so at least three. You know, that is nine. And then you have a few others.

I thought that your content per gigawatt was sort of, you know, call it in the $20 billion per gigawatt range. I guess what I am asking is: is my math around the gigawatts you plan to ship in 2027 correct? And how should I think about your content per gigawatt as that ships? Maybe that is how we get to, quote unquote, substantially more than $100 billion.

Hock Tan: Stacy, you have a very interesting perspective, and I have to admire you for that. But you are right. You can look at it at gigawatts, which is the right way to look at it instead of dollars because that is how we sell our chips. You have to realize, depending on our LLM customer—our six customers now, sorry, not five, six—the dollars per gigawatt varies, sometimes quite dramatically. It does vary. But you are right, it is not far from the dollars you are talking about. And if you look at it by gigawatt in 2027, we are seeing it getting close to 10.

Stacy Aaron Rasgon: Got it. That is very helpful. Thank you.

Hock Tan: Sure.
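For reference, a minimal sketch of the back-of-envelope math Rasgon walks through, using his roughly $20 billion-per-gigawatt content figure and the "close to 10" gigawatts Hock cites for 2027. The per-customer split below is purely illustrative (it mirrors Rasgon's counting, not company disclosure), and Hock cautions that dollars per gigawatt vary by customer, sometimes dramatically.

# Back-of-envelope 2027 AI chip revenue (illustrative; inputs taken from the Q&A)
content_per_gw_usd_bn = 20.0   # Rasgon's estimate of chip content per gigawatt, in $ billions
gigawatts_2027 = {             # hypothetical split mirroring Rasgon's count, ~10 GW total
    "Anthropic": 3,
    "OpenAI": 1,
    "Meta": 2,
    "Google": 3,
    "Others": 1,
}
total_gw = sum(gigawatts_2027.values())
implied_revenue_bn = total_gw * content_per_gw_usd_bn
print(total_gw, implied_revenue_bn)
# 10 GW at a flat $20B per GW implies roughly $200B, comfortably above the ">$100 billion"
# line of sight, consistent with Hock's caveat that dollars per gigawatt vary by customer.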

Operator: Our next question will come from the line of Benjamin Alexander Reitzes with Melius Research. Your line is open.

Benjamin Alexander Reitzes: Hey. Thanks. Hock, great to be speaking with you. Wanted to ask you about your commentary about supply visibility on those four major components through 2028. You know, how did you do it? You are probably the first one to go out through the 2028 time frame. And secondly, after this astounding growth in 2027 for your AI business, do you have enough visibility to grow quite a bit in 2028, based on the supply that you see and that kind of commentary? Thanks a lot.

Hock Tan: The best answer is, yeah, you are right. We anticipated this sharp, accelerated growth—maybe not to the full extent the rate of growth is now showing, but we anticipated a large part of it, and for longer than just six months. We were early in being able to lock up T-glass. For the infamous T-glass you have all heard about, we were very early. We have locked up substrates. We have worked with our good partners on the rest of the stuff we talked about. And so the answer to your question is, some anticipation early and the fact that we have very good partners out there in these key components. What else can I say except that yes?

Charlie, you want to add anything?

Charlie Kawwas: Yeah, just maybe a couple of quick ones. I think you covered that piece really well. I think then the other piece that is really important, as Hock said, is we build custom silicon for six customers. We have very deep strategic multiyear engagement with them. They share with us, because of this custom capability, exactly what they anticipate at least over the next two to three years, sometimes four years. And so because of that, that is exactly why we went and secured all the elements Hock talked about. And when we secure this, it requires investments with these partners, sometimes developing not just more capacity, but the right technology and capacity for that.

So we have to go secure it for multiple years. And you are probably right. We are probably the first one to secure that up to 2028. Or beyond.

Benjamin Alexander Reitzes: And can you grow in 2028 with what you see in supply? Sorry to sneak that in.

Hock Tan: Yes.

Benjamin Alexander Reitzes: Thank you.

Operator: Thank you. Our next question will come from the line of Vivek Arya with Bank of America Securities.

Vivek Arya: Thanks for taking my question. Hock, I just wanted to first clarify the Anthropic project you are doing, the $20 billion or so for a gigawatt this year—how much of that is chips, and how much of that is rack? I just wanted to understand, when you say $100 billion in chips, is there a distinction between chips versus your rack-scale projects? Because just that project is supposed to triple next year. And then my question is, your AI business is transitioning from one large customer, where you had an exclusive partnership, to now multiple customers who are using multiple suppliers.

So how do you get the visibility and the confidence about how your share will progress at these multiple customers because it is a very fragmented set of engagements that they have across a whole range of cloud service providers? What are you doing to ensure that you have solid visibility and the right market share at this fragmented set of customers who are using multiple suppliers?

Hock Tan: Vivek, you have to understand one thing. First, as Charlie correctly put down very nicely, we only have very few customers—to be precise, six. For the volume we are driving, the revenue we are driving, we only have just six. Prior to that, until recently, it was even fewer. And number two, you also have to understand, with the dollars each of them spend, and the criticality of the nature of what they are embarking on—and that is why I threw out this term. Meta has MTIA. That is their custom accelerator program. To them, as to every one of my customers in this space, it is a strategic play. It is not optionality. To them—long term, short term, medium term—it is strategic.

Extremely strategic. They do not stop, and they are very clear, each of them, on where they want to position this custom silicon within the trajectory of their LLM development and the trajectory of how they develop inference for productizing those LLMs. That part, we have very clear visibility. Anything else—GPUs used in the cloud, the hyperscaler cloud business—that is all transactional and optionality. You point out very correctly, it seems very confusing. Trust me, not for us, nor for those customers we have. They are very strategic. They are very targeted, and they know exactly what they are building up and how much capacity they want to build up each year.

And the only thing they think about is can you do it faster? Otherwise, it is very strategic and targeted on a projected roadmap. Anything else you see in the mix is pure—what I call—opportunistic for these guys. The optionality. So it is very clear.

Vivek Arya: And on the clarification, Hock, Anthropic racks versus chips? Thank you.

Hock Tan: I would rather not answer that, but we are okay. As Kirsten said, we are good on our dollars and margin.

Vivek Arya: Thank you.

Operator: Thank you. Our next question will come from the line of Tom O'Malley with Barclays. Your line is open.

Tom O'Malley: Hey, guys. Thanks for taking my questions. I have one for Hock and one for Charlie. I know you are very specific and particular about what you put in the preamble, and you noted that customers are staying at direct attached copper through 400G SerDes. Is there any reason you are pointing that out in particular, especially as a leading pioneer in CPO? And then on Charlie's side, as you are adding more customers here, I would imagine customers who design ASICs with you are going to use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well.

Hock Tan: Okay. No. I am just highlighting the fact that we do—on networking—our technology is really very, very uniquely positioned to help our customers, and more than our customers, even customers using general purpose GPUs, not just XPUs. If you are trying to create LLMs, and creating your own AI data centers and redesigning it—architecting it—you truly want larger and larger domains or clusters, and you really want to connect XPU to XPU directly where you can. And the best way to do that is to use direct attached copper. That is the lowest latency, lowest power, and lowest cost. So you want to keep doing that, especially in scale up, as long as possible.

In scaling out, we are fine with optical. But I am talking about scaling up in a rack—a cluster domain—you really want to use direct attached copper as long as you can. And we are still, based on our technology that Broadcom has, especially on connecting XPU to XPU or even GPU to GPU, able to do it with copper. And we can push the envelope from 100G to 200G to even to 400G. We have SerDes now running 400G that can drive distance on a rack to run copper. All I am trying to say is you do not need to go run into some bright shiny object called CPO.

Even as we are the lead in CPO, CPO will come in its time. Not this year, maybe not next year. But in its time. Charlie?

Charlie Kawwas: Yeah, well said, Hock. And on the question of Ethernet, with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last two decades. If you look at the debut of the back-end networks, as Hock articulated, there was, two years ago, a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out. And the industry at the time, twenty-four months ago, was not clear. We were clear. We were very clear actually about what the answer should be.

And because of the deep engagements with our partners, they made it very clear to all of us and the industry—GPU or XPU—that Ethernet is the scale-out choice. Check mark. Today, everyone is talking about scaling out with Ethernet. Now when it comes to scale up, the same question is playing out, exactly like what happened three, four years ago: what is the right answer for this? And what we are hearing consistently and what we are seeing is the right answer is Ethernet. And as you know, last year, we announced with multiple hyperscalers, and many of our peers in the semiconductor industry, that Ethernet scale up is the right choice. That is what we believe will happen.

Time will tell. But a lot of the XPU designs we are doing, we are being asked to scale up through Ethernet, and we are happy to enable that.

Tom O'Malley: Thank you, both.

Operator: Thank you. Our next question will come from the line of Jim Schneider with Goldman Sachs. Your line is open.

Jim Schneider: Good afternoon and thanks for taking my question. Hock, it was helpful to hear you discuss the progress of your other full custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume that those are mostly targeting inference applications or not? And then could you maybe qualitatively speak to either the performance or cost advantages relative to GPUs that are giving those customers the ability to forecast at such large scale? Thank you.

Hock Tan: Thanks. Most of our customers begin with inference, simply because that tends to be the easiest path to start on—not necessarily from anything else than the fact that when you do inference, it is less compute. But also, then the question is, do you need these general purpose, massive, dense matrix multiplication GPUs when you can do it more efficiently, effectively, with custom inference silicon—XPUs—that do the job better or just as well at much cheaper cost and lower power. And that is what we find these customers starting with. But they are now in training and many of our XPUs are used both in training as well as inference.

And, by the way, they are interchangeable—just as a GPU can be used not just for training, which they are perhaps more perfectly suited to, but they can be used for inference—what we are seeing is our XPUs are used for both. And we are seeing that going on. We are also seeing very rapidly more, for those customers who are much more mature in the progression I talked about in their journey towards complete XPU, that they will start to develop two chips each year, simultaneously—one for training, one for inference—to be specialized. Why?

Because what we are seeing very clearly for these LLM players is you do the training to achieve a higher level of intelligence, smarts, for your LLM. So great, you get yourself a great LLM—state of the art or more. Now you have to productize it, which means inference. Well, you cannot wait to decide until that time, when you have your model going as the best, because if you decide only then to do your inference productization, it will take you a year at least to productize. At which time, somebody else is going to create an LLM better than yours.

So there is a leap of faith here that when you do training to create the next level of superintelligence in your LLM, you have to be investing simultaneously in inference, both in terms of the chip and the capacity. So our visibility is really coming out better and better as we find those six customers get more mature in their progression towards better and better LLMs. So, yeah, that is the trend we are seeing. It is not happening to all our six customers yet, but we are seeing a majority of them headed in that way right now.

Operator: Thank you. One moment for our next question. That will come from the line of Joshua Louis Buchalter with TD Cowen. Your line is open.

Joshua Louis Buchalter: Hey, guys. Thanks for taking my question, and congrats on the results. Appreciate all the details on the expectations for deployment at specific customers. Hoping you could just maybe reflect on how visibility has changed over the last one to two quarters that gave you confidence to give us more details. Then on a specific one, you mentioned greater than a gigawatt for OpenAI in 2027. With that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection, I guess, in 2028. Is that the right way to think about it, and was that sort of always the plan? Thank you.

Hock Tan: Yes. As you all have seen and you all know, in this generative AI race that we are in now—and I should not use the word race; let us call it progression—among the few players we see here. I mean, it is a competition. Each is trying to create an LLM better than the other and more tailored for specific purpose, be they enterprise, be they consumer, be they search. Each one is trying to create it more and more. And all of that requires not just training, which is important to keep improving your LLM models, but inference for productization and monetization of your LLM.

And we are getting—call it the fact that we have been engaged with some of them now for more than a couple of years—we are getting better and better visibility as they have more and more confidence that the XPUs they are working on with us are achieving what they are getting at. As they get a sense that the XPUs they are working on, with the software, with the algorithm they need, are giving them more confidence that this XPU silicon is what they need. And as it works and gets better and better, we get more visibility, as Charlie put it perfectly. Because, at the end of the day, we only have six guys to work on.

And these six guys all, as I said, look at XPUs and AI in a very strategic manner. They do not think one generation at a time. They think multiple generations, multiple years. And in spite of all the noise out there on what is available, they think very long term on how they deploy the experience they develop with us, how they deploy in achieving better and better LLMs that they want to create, and more than that, how to deploy in monetizing. So we are part of their strategic roadmap.

We are not in just optionality of, oh, shall I use a GPU, shall I use it in the cloud because I need to train for six months. No. This is more than that. The investments these guys are making are long term. And it is great to be part of that long-term roadmap, as opposed to a transactional roadmap and the noise that, as I answered in an earlier question, mixes up short term transactions with what is long term strategic positioning of our business and our product. And to sum it all up, I think our business in XPUs is a strategic sustainable play for all the six customers we have today.

Joshua Louis Buchalter: Thank you.

Operator: Thank you. That is all the time we have for Q&A today. I would now like to turn the call back over to Ji Yoo for any closing remarks.

Ji Yoo: Thank you, Sherry. Broadcom currently plans to report its earnings for the second quarter of fiscal year 2026 after the close of market on Wednesday, June 3, 2026. A public webcast of Broadcom's earnings conference call will follow at 2:00 PM Pacific Time. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.

Operator: This concludes today's program. Thank you all for participating. You may now disconnect.

This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.

The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.
