Thursday, February 12, 2026 at 4:30 p.m. ET
Arista Networks, Inc. (NYSE:ANET) delivered record annual and quarterly revenues, strengthened by continued high growth in innovative AI and cloud networking segments. Management raised full-year 2026 revenue guidance to 25% growth, setting an $11.25 billion target and increasing the AI networking revenue goal to $3.25 billion, reflecting accelerating customer adoption in both traditional and emerging cloud sectors. Significant operational investments, such as the 7800R4 spine launch and expansion of the cognitive campus and branch portfolio, positioned the company for diversified customer growth, though supply-related cost headwinds, particularly in memory and silicon procurement, persisted. Capital returns to shareholders increased with common stock repurchases, and a larger customer base was supported by the integration of CloudVision and added strategic partnerships across the AI ecosystem.
This afternoon, Arista Networks, Inc. issued a press release announcing the results for its fiscal fourth quarter ended December 31, 2025. With us on today's call are Jayshree Ullal, Arista chairperson and chief executive officer, and Chantelle Breithaupt, Arista's chief financial officer. If you want a copy of the release, you can access it online on our website.
During the course of this conference call, Arista Networks, Inc. management will make forward-looking statements, including those relating to our financial outlook for the 2026 fiscal year; our longer-term business model and financial outlook for 2026 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; tariffs and trade restrictions; supply chain constraints; component costs; manufacturing output; inventory management and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefits of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements.
These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. This analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP results and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.
Jayshree Ullal: Thank you, Rudy, and thank you everyone for joining us this afternoon for our fourth quarter and full year 2025 earnings call. Well, 2025 has been another defining year for Arista. With the momentum of generative AI in cloud and enterprise, we achieved well beyond our goal, with 28.6% growth driving record revenue of $9,000,000,000, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%. The Arista 2.0 momentum is clear, as we surpassed 150,000,000 cumulative ports shipped in Q4 2025. International growth was a notable milestone, with both Asia and Europe growing north of 40% annually.
As expected, we exceeded our strategic goals of $800,000,000 in campus and branch expansion as well as $1,500,000,000 in AI center networking. Shifting to annual customer sector revenue for 2025: cloud and AI titans contributed significantly at 48%. Enterprise and financials recorded 32%, while AI and specialty providers, which now include Apple, Oracle, and their initiatives, as well as emerging neo clouds, performed strongly at 20%. We had two greater-than-10% customer concentrations in 2025; Customers A and B drove 16% to 20% of our overall business. We cherish these privileged partnerships, spanning 10 to 15 years of collaborative engineering.
With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two, additional 10% customers. In terms of annual 2025 product lines, our core cloud, AI, and data center products, built upon the highly differentiated Arista EOS stack, are successfully deployed across 10 gigabit to 800 gigabit Ethernet speeds, with 1.6 terabit migration imminent. This includes our portfolio of 7000 Series platforms for best-in-class performance, power efficiency, high availability, automation, and agility, for both the front-end and back-end compute and storage, and all of the interconnect zones.
Of course, we interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but we also recognize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, Arm, Broadcom, OpenAI, Pure Storage, and Vast Data, to name a few, that create the modern AI stack of the twenty-first century. Arista is clearly emerging as the gold-standard terabit network to run these intense training and inference models processing tokens at teraflop scale. Arista's core sector drove 65% of revenue. We are confident of our number one position in market share in high-performance switching, according to most major industry analysts.
We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed NetDI, that can run across both our flagship EOS and our open NOS platforms. We saw an excellent uptick in 800 gig adoption in 2025, gaining greater than 100 cumulative customers for our Etherlink products. And we are co-designing several AI rack systems, with 1.6T switching emerging this year. With our increased visibility, we are now doubling our AI networking revenue goal from 2025 to 2026, to $3,250,000,000. Our network adjacencies market comprises routing, replacing routers, and our cognitive AI-driven AVA campus. Our investments in cognitive wired and wireless zero-touch operation, network identity, scale, and segmentation have received several accolades in the industry.
Our open modern stacking with SWAG, switched aggregation group, and our recent VESPA for layer two and layer three wired scale are compelling campus differentiators. Together with our VeloCloud acquisition in July 2025, we are driving a homogeneous, secure client-to-branch-to-campus solution with unified management domains. Looking ahead, we are committed to our aggressive goal of $1,250,000,000 for 2026 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core spine, and peering use cases.
In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spine, with massive 460 terabits per second of capacity to meet the demanding needs of multi-service routing, AI workloads, and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue. Our third and final category is network software and services based on subscription models, such as A-Care, CloudVision, observability, advanced security, and even some branch edge services. We added another 350 CloudVision customers and have deployed an aggregate of 3,000 customers with CloudVision over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%; please note that this does not include perpetual software licenses, which are otherwise included in core or adjacent markets. Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client-to-cloud and AI networking, with a highly differentiated software stack and a uniform CloudVision software foundation. We are proud to power Warner Brothers' streaming distribution network for 47 markets in 21 languages in the pan-European Winter Olympics happening as I speak.
We are now north of 10,000 cumulative customers, and I am particularly impressed with our traction in the $5,000,000 to $10,000,000 customer category as well as the $1,000,000 customer category in 2025. Arista's 2.0 vision resonates with customers who value us leading the transformation from incongruent silos to reliable centers of data. The data can reside in campus centers, data centers, WAN centers, or AI centers, regardless of location. Networking for AI has achieved production scale with an all-Ethernet-based Arista AI center. In 2025, we were a founding member of the Ethernet-based ESUN standard for scale up, as well as completing the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking.
These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN, and classic cloud networking. Our AI accelerated networking portfolio, consisting of three families of EtherLink spine-leaf fabric, is successfully deployed in scale-up, scale-out, and scale-across networks. Network architectures must handle both training and inference of frontier models to mitigate congestion. For training, the key metric is obviously job completion time, the amount of time between submitting a training job to an AI accelerator cluster and the end of the training run. For inference, the key metric is slightly different.
It is time to first token, basically the latency between a user submitting a query and receiving the first response token. Arista has developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, and size of traffic flows, and all the patterns associated with them. Our AI for networking strategy, based on AVA, autonomous virtual assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, or network data lake, we instrument our customers' networks to deliver predictive and prescriptive features for enhanced security, observability, and agentic AI operations.
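[Editor's note: for readers unfamiliar with the two metrics described above, here is a minimal sketch of how they are computed. The function names and timestamp values are illustrative assumptions, not Arista data or tooling.]

```python
# Hypothetical illustration of the two AI workload metrics discussed above.
# Timestamps are in seconds; all figures are made up for illustration.

def job_completion_time(job_admitted: float, run_finished: float) -> float:
    """Training metric: elapsed time from admitting a job to the
    AI accelerator cluster until the end of the training run."""
    return run_finished - job_admitted

def time_to_first_token(query_submitted: float, first_token: float) -> float:
    """Inference metric: latency from a user submitting a query
    until the first response token arrives."""
    return first_token - query_submitted

print(job_completion_time(0.0, 5400.0))   # a 90-minute training run
print(time_to_first_token(10.0, 10.35))   # roughly 350 ms to first token
```

Network congestion inflates both numbers, which is why the call frames congestion mitigation as the architectural goal.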
Coupled with Arista validated designs for simulation, digital twin, and validation functionality, Arista platforms are perfectly optimized and suited for network as a service. Our global relevance with customers and channels is increasing. In 2025 alone, we conducted three large customer events across three continents, Asia, Europe, and the United States, and many smaller ones, of course. We touched 4,000 to 5,000 strategic customers and partners in the enterprise. While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by the industry's highest net promoter score of 93 and lowest security vulnerabilities.
We now see the pace of acceptance and adoption accelerating in the enterprise customer base. Our leadership team, including our newly appointed co-presidents, Kenneth Duda and Todd Nightingale, has driven strategic and cohesive execution. Tyson Lamoreaux, our newest senior vice president, who joined us with deep cloud operator experience, has ignited our hypergrowth across our AI and cloud titan customers. Exiting 2025, we are at approximately 5,200 employees, including the recent VeloCloud acquisition. I am incredibly proud of the entire Arista team and thank all our employees for their dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista Way principles of innovation, culture, and customer intimacy.
Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand, with a massive and growing TAM of $100+ billion. And so, despite all the news of mounting supply chain allocation and the rising cost of memory and silicon fabrication, we increased our 2026 guidance to 25% annual growth, accelerating now to $11,250,000,000. And with that happy news, I turn it over to Chantelle, our CFO.
Chantelle Breithaupt: Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details. To start off, total revenues in Q4 were $2,490,000,000, up 28.9% year over year and above the upper end of our guidance of $2,300,000,000 to $2,400,000,000. It was great to see that all geographies achieved strong growth within the quarter. Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, reflecting normalization following some nonrecurring VeloCloud service renewals in the prior quarter. International revenues for the quarter came in at $528,300,000, or 21.2% of total revenue, up from 20.2% last quarter.
This quarter-over-quarter increase was driven by a stronger contribution from our large global customers across our international markets. The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62% to 63% and down from 64.2% in the prior year. This year-over-year decrease is due to a higher mix of sales to our cloud and AI titan customers in the quarter. Operating expenses for the quarter were $397,100,000, or 16% of revenue, up from $383,300,000 last quarter. R&D spending came in at $272,000,000, or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation, with fiscal year 2025 R&D spend at approximately 11% of revenue.
Sales and marketing expense was $98,300,000, or 4% of revenue, down from $109,500,000 last quarter. FY 2025 closed the year with sales and marketing at 4.5% of revenue, representative of the highly efficient Arista go-to-market model. Our G&A cost came in at $26,300,000, or 1.1% of revenue, up from $22,400,000 last quarter, reflecting continued investment in systems and processes to scale Arista 2.0. Fiscal year 2025 G&A expense held at 1% of revenue. Our operating income for the quarter was $1,200,000,000, or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4,300,000,000, or 48.2% of revenue. Other income and expense for the quarter was a favorable $102,000,000, and our effective tax rate was 18.4%.
This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1,050,000,000 or 42% of revenue. It is exciting to see Arista delivering over $1,000,000,000 in net income for the first time. Congratulations to the Arista team on this impressive achievement. Our diluted share number was 1,276,000,000 shares, resulting in a diluted earnings per share for the quarter of $0.82, up 24.2% from the prior year. For fiscal year 2025, we are pleased to have delivered a diluted earnings per share of $2.98, a 28.4% increase year over year. Now turning to the balance sheet.
Cash, cash equivalents, and marketable securities ended the quarter at approximately $10,740,000,000. In the quarter, we repurchased $620,100,000 of our common stock at an average price of $127.84 per share. Within fiscal 2025, we repurchased $1,600,000,000 of our common stock at an average price of $100.63 per share. Of the $1,500,000,000 repurchase program approved in May 2025, $817,900,000 remain available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the fourth quarter. We generated approximately $1,260,000,000 of cash from operations in the period.
This result was an outcome of strong earnings performance, with an increase in deferred revenue offset by an increase in accounts receivable driven by higher shipments and end-of-quarter service renewals. DSOs came in at 70 days, up 9 days from Q3, driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 last quarter. Inventory increased marginally to $2,250,000,000, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $6,800,000,000, up from $4,800,000,000 at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases of chips for new products and AI deployments.
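[Editor's note: the working-capital metrics cited here follow the standard textbook definitions. A small sketch for reference; the function names and input figures are illustrative placeholders, not Arista's actual balances.]

```python
# Standard definitions of the working-capital metrics discussed above.
# Inputs are illustrative placeholders (in billions of dollars).

def dso(accounts_receivable: float, quarterly_revenue: float, days: int = 91) -> float:
    """Days sales outstanding: how long revenue sits in receivables."""
    return accounts_receivable / quarterly_revenue * days

def inventory_turns(annualized_cogs: float, avg_inventory: float) -> float:
    """How many times inventory is sold through per year."""
    return annualized_cogs / avg_inventory

print(round(dso(1.92, 2.49)))               # ~70 days on a $2.49B revenue quarter
print(round(inventory_turns(3.4, 2.25), 1)) # ~1.5 turns on $2.25B of inventory
```

Higher shipments late in a quarter raise receivables before cash is collected, which is why DSO rose even in a strong cash-generation quarter.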
We will continue to have some variability in future quarters, due to the combination of demand for our new products, component pricing, such as the supply constraint on DDR4 memory, and the lead times from our key suppliers. Our total deferred revenue balance was $5,400,000,000, up from $4,700,000,000 in the prior quarter. In Q4, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $469,000,000 versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances.
As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis independent of underlying business drivers, reflecting the timing of inventory receipts and payments. Accounts payable days were 66 days, up from 55 days in Q3. Capital expenditures for the quarter were $37,000,000. In October 2024, we began initial construction work to build expanded facilities in Santa Clara, and we incurred approximately $100,000,000 in CapEx during fiscal year 2025 for the project. As we moved through 2025, we gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11,250,000,000.
We maintain our 2026 campus revenue goal of $1,250,000,000 and raise our AI centers goal from $2,750,000,000 to $3,250,000,000.
For gross margin, we reiterate the fiscal year range of 62% to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon. In terms of spending, we expect to continue to invest in innovation, sales, and scaling the business to ensure our status as a leading pure-play networking company. With our increased revenue guidance, we are now confident in raising the operating margin outlook to approximately 46% in 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected to return to the usual historical rate of 21.5%, up from the seasonally lower rate of 18.4% experienced in Q4 2025.
With all of this as a backdrop, our guidance for the first quarter is as follows: revenues of approximately $2,600,000,000, gross margin between 62% and 63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1,275,000,000 diluted shares. In closing, at our September Analyst Day, our theme was building momentum, and we are doing just that. In the campus, WAN, data, and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead. Now back to you, Rudy, for the Q&A.
Jayshree Ullal: Thank you, Chantelle.
Operator: We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I would like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.
Operator: We will now begin the Q&A portion of the Arista earnings call. To ask a question, please press star followed by the number one on your telephone keypad. If you would like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Meta Marshall with Morgan Stanley. Please go ahead.
Meta Marshall: Great. And congratulations on the quarter. I guess in terms of the commentary you had, Jayshree, on the one or two additional 10% customers, digging more into that, what are the puts and takes? Is it bottlenecks in terms of their building? What would make or break whether those become two new 10% customers? Thank you.
Jayshree Ullal: Thank you, Meta, for the good wishes. Obviously, if I did not have confidence, I would not dare to say that, would I? But there are always variables. Some of it may be sitting in deferred, so there are acceptance criteria that we have to meet, and there is also timing associated with meeting those criteria. Some of it is demand that is still underway, and in this age of supply chain allocation and inflation, we have to be sure we can ship. So we do not know whether it is exactly 10%, high single digits, or low double digits; a lot of variables will decide that final number. But, certainly, the demand is there.
Meta Marshall: Great. Thank you.
Jayshree Ullal: Thank you.
Operator: Our next question will come from the line of Samik Chatterjee with JPMorgan. Please go ahead.
Samik Chatterjee: And Jayshree, congrats on the quarter and the outlook. I do not want to say that 25% growth is not impressive, but since you are guiding to 30% growth for Q1, maybe you could help me understand what is leading to the somewhat cautious visibility for the rest of the year. Is it the one to two new customers and their ramps that you are more cautious about, or is it the availability of supply in some components, such as memory, that gives you a bit more nervousness about visibility for the remainder of the year? If you could help us understand the drivers there.
Jayshree Ullal: Thank you, Samik. First, I do not think I am being cautious. I think I went all out to give you a high dose of reality, but I understand your views on caution given all the capital numbers you see from customers. An important thing to understand is that we do not track to the CapEx. The first thing that happens with that CapEx is they have to build the data centers, get the power, and get all of the GPUs and accelerators. The network lags a little.
So demand is going to be very good, but whether the shipments exactly fall into 2026 or 2027, Todd can clarify when they really fall in. There are a lot of variables there. That is one issue. The second, as I said, is that a large amount of these are new products and new use cases, highly tied to AI, and customers are still in their first innings. So, again, I am giving you the greatest visibility I can, fairly early in the year, on the reality of what we can ship, not what the demand might be. It might be multiyear demand that ships over multiple years.
So let us hope it continues, but, of course, you must understand that we are also facing the law of large numbers: 25% on a base of now $9,000,000,000, when we started last year at $8,250,000,000. It is a really, really early and good start.
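[Editor's note: as a quick sanity check on the guidance arithmetic quoted on the call. The figures come from the transcript; the script itself is purely illustrative.]

```python
# Guidance math from the call: 25% growth on the $9B 2025 base.
# Figures in billions of dollars, as stated on the call.
revenue_2025 = 9.00
growth_2026 = 0.25

guide_2026 = revenue_2025 * (1 + growth_2026)
print(f"${guide_2026:.2f}B")  # 25% on the $9B base yields the $11.25B target
```

The "law of large numbers" point is that the same 25% rate now requires $2.25B of incremental revenue, far more than a 25% rate required on earlier, smaller bases.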
Jayshree Ullal: Thank you.
Operator: Our next question will come from the line of David Vogt with UBS. Please go ahead.
David Vogt: Great. Thanks, guys, for taking my question. Maybe, Chantelle and Jayshree, can you help quantify both the revenue impact and the potential gross margin impact embedded in your guide from the memory dynamics and the constraints? I know last quarter, and you even mentioned it this quarter, supply chain does have some constraints. Jayshree, you just described the real outlook that you see. Can you help parameterize what you think could hold you back, if that is the way to phrase it, and give us a sense of what the upside could be in a perfect world, if you could share that?
Jayshree Ullal: I am going to give some general commentary, Chantelle, if you do not mind adding to it. Our peers in the industry have been facing this probably longer than we have, because the server industry saw it first, being more memory intensive. Add to that the increases we are expecting from silicon fabrication, where all the chips are made, as you know, essentially by one company, Taiwan Semiconductor. Arista has taken a very thoughtful approach, being aware of this throughout 2025 and, frankly, absorbing a lot of the cost we were incurring in 2025. However, in 2026, the situation has worsened significantly.
We are having to smile and take it at just about any price we can get. And the prices are horrendous; they are an order of magnitude higher. So, clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this, and I know that memory is now the new gold for the AI and automotive sectors.
But it is not going to be easy; it is going to favor those who plan and those who can spend the money for it.
Chantelle Breithaupt: Yes. The only other thing I would add to your question, David, and thank you for it, is that we are comfortable in the guide; that is why we have the guide and why we raised the numbers that we did. So we are comfortable that we have a path there within the numbers we provided. We are pleased to hold the range of 62% to 64% despite this kind of cost pressure coming into it. This has been our guide since September at our Analyst Day, so we are pleased to hold that guide and find ways to mitigate along the journey.
Now whether it ends up being 62.5 versus 63.5 in the guide in that range, that is where we will continue to update you, but the range we are comfortable with.
David Vogt: Understood. Thanks, guys.
Jayshree Ullal: Thank you, David.
Operator: Our next question comes from the line of Aaron Christopher Rakers with Wells Fargo. Please go ahead.
Aaron Christopher Rakers: Yes. Congrats as well on the quarter and the guide. I guess when we think about the $3,250,000,000 guide for the AI contribution this year, I am curious, Jayshree, how much you are factoring in, if any, from the scale-up networking opportunity, or whether that is still more of a 2027 story. And also, can you unpack the non-AI, non-campus business? Ex the AI and ex the campus contribution, it appears that your guide is still pretty muted, low single-digit growth on non-AI. Just curious how you see that.
Jayshree Ullal: Okay. Well, a rising tide lifts all boats, but some go higher and some go lower. But to answer your specific question, what was it on?
Aaron Christopher Rakers: How much scale up?
Jayshree Ullal: Oh, how much scale up? We have consistently described that today's configurations are mostly a combination of scale out and scale up, largely based on 800 gig and smaller radix. Now that the ESUN specification is well underway, and Ken Duda, I think the spec will be done within a year, or this year for sure. So Ken and Hugh Holbrook are actively involved in that; we need a good, solid spec. Otherwise, we would be shipping proprietary products like some in the world do today. And so we will tie our scale-up commitment greatly to the availability of new products and a new ESUN spec, which we expect at the earliest to be Q4 this year.
And therefore, while we will be in some trials, with Andy Bechtolsheim and the team working on a lot of active AI racks with scale up in mind, the real production level will be in 2027, centered around not just 800 gig but 1.6T.
Chantelle Breithaupt: Thank you, Aaron.
Operator: Our next question will come from the line of Amit Jawaharlaz Daryanani with Evercore ISI. Please go ahead.
Amit Jawaharlaz Daryanani: Yep. Thanks a lot, and congrats from my end as well for some really good numbers here. Jayshree, if I think some of these model builders like Anthropic that I think you folks have talked about, they are starting to build these multibillion dollar clusters on their own now. Can you just talk about your ability to participate in some of these buildouts as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well as they build out TP or training clusters.
Love to just understand how that kind of business scales for you folks. Thank you.
Jayshree Ullal: Yeah. No. Amit, that is a very thoughtful question, and I think you are absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. Initially, we were largely working with one or two model builders and one or two accelerators, NVIDIA and AMD, and OpenAI was the primarily dominant one. But today, we see that there are really multiple layers in the cake, where you have the GPU accelerators and, of course, power as the most difficult thing to get. Arista needs to deal with multiple domains and model builders appropriately, whether it is Gemini, xAI, Anthropic Claude, or OpenAI, with many more coming.
These models, and their multiprotocol, algorithmic nature, are something we have to make sure we build for correctly. That is one. And then to your second point, you are absolutely right. The biggest thing is that the model builders are no longer in silos in one data center. You are going to see them across multiple colos, multiple locations, and multiple partnerships with cloud titan customers that they have historically not worked with. So I think you will see more Copilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers.
We also expect to work with our cloud titans, and bringing the cloud and AI together.
Amit Jawaharlaz Daryanani: Thank you.
Jayshree Ullal: Thank you, Amit.
Operator: Our next question comes from the line of George Charles Notter with Wolfe Research. Please go ahead.
George Charles Notter: Hi, guys. Thanks very much. I was just curious about the product deferred revenue and how you see that coming off the balance sheet ultimately. Obviously, it has just been stacking up here quarter after quarter after quarter. So a few questions here. Does that come off in big chunks that we will see in different quarters in the future? Does it come off more gradually? Does it continue to build? What does the profile look like for that product deferred coming off the balance sheet and pulling through the P&L? Then also, I am curious about how much product deferred you have in the full-year revenue guidance of 25% growth. Thanks a lot.
Chantelle Breithaupt: Hey, George. Thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new product, new customers, new use cases. The great new use case is AI. The acceptance criteria for that for the larger deployments is 12 to 18 months. Some can be as short as six months, so there is wide variety that goes in. Deferred has balances coming in and out every quarter. We do not guide deferred, and we do not say product specific. What I can tell you in your question is that there will be times where there are larger deployments that feel a little lumpier as we go through.
But, again, it is a net release of a balance, so it depends on what comes into the balance in that same quarter.
George Charles Notter: Got it. Okay. Any sense for what is in the full-year guide, then? I assume not much. Is that fair to say?
Jayshree Ullal: It is super hard, George. It depends on when the acceptance criteria are met. If it happens in December, that is one situation; if it happens in Q2, Q3, or Q4, that is a different one. It is something we really have to work through with the customer. So thank you, and sorry that we are not able to be clairvoyant on that.
George Charles Notter: Makes sense. Thank you.
Jayshree Ullal: Thank you. Thank you.
Operator: Our next question comes from the line of Benjamin Reitzes with Melius Research. Please go ahead.
Benjamin Reitzes: Hey, thanks a lot, and my congrats to you guys; this execution and guide is really something. I wanted to ask about two things. First, could you talk a little bit more about your neo cloud momentum and what that is looking like in terms of materiality? And then, if you do not mind, touch on AMD. With the launch, we are hearing about you getting a lot of networking attached to the 450-type product, their new chips. Wondering if that is a catalyst or not as you go through the year. Thanks so much.
Jayshree Ullal: Yes. So, Ben, as you can imagine, the specialty cloud providers have historically been a cacophony of many types of providers. It used to be content providers and tier two cloud providers, but AI is now clearly one of the main impetuses driving that section. It is a suite of customers, some of whom have real financial strength and are looking to invest, increase, and pivot to AI. The rate at which they pivot to AI will greatly define how well we do there. They are not yet titans, but they want to be, or could be, titans; that is the way to look at it.
And we are going to invest with them; these are healthy customers. It is nothing like the dot-com heroes. We feel good about that. Then there is a set of neo clouds that we watch more carefully, because some of them are oil money converted into AI, or crypto money converted into AI. There, we are going to be much more careful, because while some of those neo clouds are looking at Arista as the preferred partner, we will also be looking at the health of the customer, or they may just be a one-time deployment. We do not know the exact nature of their business, and those will be smaller. They do not contribute in large dollars, but they are becoming increasingly plentiful in quantity, even if not yet in dollars. So I think you are seeing a dichotomy of two, or really three, types in that category: the classic CDN and security specialty providers and tier two clouds; the AI specialty providers, who are going to lean in and invest; and then the neo clouds in different geographies.
Benjamin Reitzes: And the AMD?
Jayshree Ullal: Yes. The AMD question. A year ago, I think I said this to you, but I will repeat it. A year ago, it was pretty much 99% NVIDIA. Today, when we look at our deployments, we see about 20%, maybe a little more, 20 to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred because they are building best-of-breed building blocks for the NIC, for the network, the I/O, and they want open standards as opposed to full-on vertical stack from one vendor.
So you are right to point out AMD. In particular, it is a joy to work with Lisa and Forrest and the whole team, and we do very well in that multivendor, open configuration.
Operator: Our next question will come from the line of Timothy Long with Barclays. Please go ahead.
Timothy Long: Thank you. Appreciate all the color. Jayshree, maybe we could touch a little bit on scale-across. It has obviously gotten a lot of attention, particularly at the optics layer, from others in the industry. Obviously, you have been in DCI, which is a similar type of technology. But I am curious what you think about Arista's participation in more of these next-gen scale-across networks. Is this something that would be good for, like, a Blue Box type of product, or would that be more in scale-up? If you could give a little color there, that would be great.
Jayshree Ullal: Right. So we thought most of our participation today would be scale-out. But what we are finding, due to the distributed nature of where these clusters go and where customers can get the power, is that scale-out or scale-across is all about how much data you can move: the bisectional bandwidth growth and the throughput. As the workloads become more and more complex, you have to make them more and more distributed, because you just cannot fit them in one data center, whether from a power, bandwidth, throughput, or capacity standpoint. These GPUs are also trying to minimize collective degradation, so as you scale up or out, the communication patterns become very much of a bottleneck.
And one way to solve it is to extend this across data centers, both through fiber and, as you rightly pointed out, very high injection-bandwidth DCI routing. And then there is the sustained real-world utilization you need across all of this. For all these reasons, we are pleasantly surprised by the role of coherent long-haul optics, which we do not build, but we have worked very closely in the past with companies that do, and they are seeing the lift. The 7800 spine chassis is the flagship platform and preferred choice, designed by our engineering team over several years for this robust configuration.
So it is not just Blue Box there; it is much, much more of a full-on Arista flagship box, with EOS and all of the virtual output queuing and buffering, to interconnect regional centers with extremely high levels of routing and high availability too. This really leans into everything Arista stands for, coming together in a universal AI spine.
Timothy Long: Okay. Excellent. Thank you, Jayshree.
Jayshree Ullal: Thank you.
Operator: Our next question will come from the line of Karl Ackerman with BNP Paribas. Please go ahead.
Karl Ackerman: Yes, thank you. Agentic AI should support an uptick in conventional server CPUs, where your switches have high share within data centers. So regarding your upwardly revised outlook of 25% growth for this year, could you speak to the demand you are seeing for front-end high-speed switching products that address agentic AI? Thank you.
Jayshree Ullal: Yes, Karl. Let us just go back in time; it is not that long ago. Three years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end, and we pretty much characterized our AI as back end only, just to be pure about it. Three years later, I am telling you we might do north of $3 billion this year and growing. That number definitely includes the front end, as it is tied to the back-end GPU clusters, and it is an all-Ethernet, all-AI system for agentic AI applications.
Now, a lot of the agentic AI applications are mostly running with some of our largest cloud AI and specialty providers. But I do not rule out other possibilities, and you can see this in our numbers. With north of 8,800 gig customers, much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, and automation of software. I do not think any of us believe that AI is eating software; AI is definitely enabling better software. We are certainly seeing that, and Ken sees it as well, in our adoption of that.
So the rise of agentic AI will only increase not just the GPU, but all gradations of XPU that can be used in the back end and front end.
Jayshree Ullal: Thank you, Karl.
Operator: Our next question comes from the line of Simon Matthew Leopold with Raymond James. Please go ahead.
Simon Matthew Leopold: Thank you very much for taking the question. I wanted to come back to what is going on in the memory market. There are two aspects to this. One, I am wondering how much of a tool price hikes have been, meaning you raising your prices to customers. And two, within the substantial purchase commitments you have, is there a significant memory component, such that you effectively pre-purchased memory at prices much lower than today's spot market? Thank you.
Jayshree Ullal: Thank you. I wish I could tell you we did purchase all the memory that we needed. No, we did not. But while our peers in the industry have already done multiple price hikes, especially those in the server market or with memory-intensive switches, we have clearly been absorbing it. Memory is in our purchase commitments, but so is everything else; the entire silicon portfolio is in our purchase commitments. Given some of the supply chain reactions, Todd and I have been reviewing this, and we do believe there will be a one-time increase on select, especially memory-intensive, SKUs to deal with it; we cannot absorb it if prices keep going up the way they have in January and February. And I would tell you that all the purchase commitments in Chantelle's current commitments are not enough. We need more memory.
Simon Matthew Leopold: Thank you.
Operator: Our next question will come from the line of James Edward Fish with Piper Sandler. Please go ahead.
James Edward Fish: Ladies, great quarter and great end of the year. Jayshree, are hyperscalers getting nervous at all in ordering ahead? What is your sense of potential demand pull-in here, including for your own Blue Box initiative? And, Chantelle, for you, going back to George's question: I know it is difficult to answer, but are you anticipating that product deferred revenue will continue to grow through the year? Or is it too difficult to predict, since customers could just say, we accept, ship them all now, and you end up with a big quarter but product deferred down?
Jayshree Ullal: I am going to let Chantelle answer this difficult question over and over again. Go ahead, Chantelle.
Chantelle Breithaupt: Thank you, James. For deferred, generally, we do not guide it, but to give you more insight: back to George's question, there will be certain deployments that get accepted and released. The part that is difficult, James, is what comes into the balance, and I cannot guide that. That would be a wild guess at what is going to go in, which is not prudent from my perspective. So we will continue to mention what is in it, we will continue to guide you through the balances, and we will talk about the movement in the script. But that is probably as much as I can responsibly tell you looking forward.
Jayshree Ullal: James, this is one of those times where, no matter how many different ways you ask the question, the answer does not change.
James Edward Fish: It is okay. Well, insanity is doing the same thing over and over again. Yes. I know. I know.
Jayshree Ullal: On the hyperscalers, are they getting nervous? I do not think so. You have seen what a strong business they have, how much cash they put out, and how successful they are. But I do think they are working more closely with us. Typically, we had three to six months of visibility; we are now getting better visibility than that.
Operator: Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.
Tal Liani: I almost have the same question I asked you last quarter, because you increased the guidance. Let me explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And it is very simple to dissect your numbers: if I remove campus and I remove cloud, and you provide these two numbers for both 2025 and 2026, then the rest of the business, which is 60% of the business, you guide to grow zero. In previous years, by my estimate, it grew anywhere from 10% to 30%. So the question is, why are you guiding this way, with 60% of the business not growing? Is it because...
Jayshree Ullal: Can I pause you there? Because I know you like to dissect our math several different ways and come up with conclusions. We are not guiding that our business is going to be flat, or that we are not going to grow here or grow there. But, generally, when something is very fast-paced and growing, other things grow less. And as to exactly whether it would be flat, or grow double digits or single digits, Tal, it is February. I do not know what the rest of the year will be. Okay?
Tal Liani: But that is the question. The question is, are there allocations here? Meaning, let us say you have only a set number of memory slots, so you allocate it to cloud and the rest of the business does not get it. Or is it just conservatism and a lack of ability to forecast? It is either...
Jayshree Ullal: It is neither of the above. We do not allocate to our customers; it is first in, first served. In fact, the enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first. But memory allocation may put us in a situation where demand is greater than our ability to supply. We do not know; it is too early in the year. We were confident enough to guide, six months after our Analyst Day, to a higher number, but we do not know what the next four quarters will look like to the precision you are asking for.
Tal Liani: Got it. Thank you.
Chantelle Breithaupt: Thank you.
Operator: Our next question comes from the line of Atif Malik with Citi. Please go ahead.
Adrienne Colby: Hi. It is Adrienne Colby for Atif. Thank you for taking my question. I was hoping to ask for an update on Arista's four large AI customers. I know the fourth customer you talked about was a bit slower to ramp to 100,000 GPUs. I am wondering if you can update us on their progress, and perhaps what is next for the other customers that have already crossed that threshold. And lastly, is there any indication that the fifth customer, which ran into funding challenges, might come back to you?
Jayshree Ullal: Okay, Adrienne, I will give you some update. I am not sure I have precise updates, but all four customers are deploying AI with Ethernet; that is the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there, clearly migrating beyond pilots and production into other centers, with power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it is still below 100,000 GPUs at this time, but I fully expect them to get there this year, and then we shall see how they get beyond that.
Operator: Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.
Michael Ng: Hey, good afternoon. Thank you for the question. I have one and one follow-up. First, I was wondering if you could talk a little bit about the new customer segmentations that you unveiled, with Cloud and AI, and AI and Specialty. What is the philosophy around that? And does that signal more opportunity in places like Oracle and the neo clouds? And then second, with cloud and AI at 48% of revenue, and customers A and B, I think, combined at 36%, you have 12% left over. Is that a hyperscale customer? Does it imply that you have a new hyperscaler approaching 10%? Because, obviously, we thought that the next biggest one would have been Oracle, but that has moved out of cloud now. So thoughts there would be great. Thank you.
Jayshree Ullal: Well, first of all, my math is 26 to 16, so it is 42. So I do not have 12%. Unless you had 58, it is really only 6%. So on the cloud and AI titans, the way we classify that is significantly large-scale customers with greater than a million servers, greater than 100,000 GPUs, an R&D focus on models, and sometimes even their own XPUs. This can, of course, change; some others may come into it. But it is a very select set of customers, less than five or about five. That is the way to think of it.
On the change to the specialty cloud category: we are noticing that some customers are really focused solely on AI with some cloud, as opposed to cloud with some AI. So when a customer is heavily AI-centric, especially with Oracle's AI, Acceleron, and the multitenant partnerships they have created, they naturally have a dual personality. Some of it is OCI, the Oracle Cloud, but some of it is fully AI-based. The shift in their strategy made us shift the category and bifurcate the two.
Michael Ng: Thank you, Jayshree.
Jayshree Ullal: Thank you.
Todd Nightingale: Regina, we have time for one last question.
Operator: Our final question will come from the line of Ryan Boyer Koontz with Needham and Company. Please go ahead.
Ryan Boyer Koontz: Great. Thanks for squeezing me in. Jayshree, in your prepared remarks, you talked about your telemetry capabilities. I am wondering if you could expand on that and discuss where you are seeing differentiation, and in what sorts of use cases your telemetry capabilities really give you the competitive upper hand. Thank you.
Jayshree Ullal: Yeah. I will say some of it, and I think Ken, who has been designing and working on this, will say even more. That is Kenneth Duda, our president and CTO. Telemetry is at the heart of both our EOS software stack and CloudVision for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning, constantly keeping track of all of our switches; it is not just a pretty management tool. At the same time, our cloud and AI customers are seeking some of that visibility too, so we have developed some deeper AI capabilities for telemetry as well.
Over to you, Ken, for some more detail.
Kenneth Duda: Yeah, thanks for that question. Look, the EOS architecture is based on state orientation. This is the idea that we capture the state of the network and stream that state out from the system database on the switches into whatever telemetry system can receive it. We are extending that capability for AI with a combination of in-network data sources, related to flow control, RDMA counters, buffering, and congestion counters, and host-level information, including what is going on in the RDMA stack on the host, what is going on with collectives, latencies, and any flow control or buffering problems in the host NIC. We then pull all of that information together in CloudVision and give the operator a unified view of what is happening in the network and in the host. This greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when different systems are collecting them.
Jayshree Ullal: Great job, Ken.
Ryan Boyer Koontz: That is great. I cannot wait for that product. Really helpful. Thank you.
Operator: Thank you. This concludes Arista Networks, Inc. fourth quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access on the investor section of our website.
Operator: Thank you for joining us today and for your interest in Arista.
Operator: Thank you for joining, ladies and gentlemen. This concludes today's call.
This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
The Motley Fool has positions in and recommends Arista Networks. The Motley Fool has a disclosure policy.