Astera Labs (ALAB) Q1 2026 Earnings Transcript

Source: The Motley Fool

Date

Tuesday, May 5, 2026 at 4:30 p.m. ET

Call participants

  • Chief Executive Officer and Co‑Founder — Jitendra Mohan
  • President and Chief Operating Officer and Co‑Founder — Sanjay Gajendra
  • Chief Financial Officer — Desmond Lynch


Takeaways

  • Revenue -- $308.4 million, up 14% sequentially and 93% year over year, driven by growth across both signal conditioning and fabric switch product portfolios.
  • PCIe Gen 6 revenue -- Contributed more than one-third of total company revenue, reflecting broad adoption in both AI fabric and signal conditioning segments.
  • Scorpio product family -- Began shipping initial volumes of the Scorpio X Series; management expects Scorpio to become the company's largest product line by year-end, with X Series revenue surpassing the P Series.
  • Non‑GAAP gross margin -- 76.4%, up 70 basis points from the previous quarter due to a lower mix of hardware sales within signal conditioning.
  • Non‑GAAP operating expenses -- $123.9 million, including R&D expenses of $96.2 million, sales and marketing expenses of $12 million, and G&A expenses of $15.7 million.
  • Non‑GAAP operating margin -- 36.2%, reflecting continued profitability while investing for sustained revenue growth.
  • Non‑GAAP EPS -- $0.61 per diluted share, with 181.2 million fully diluted shares outstanding.
  • Cash, cash equivalents, & marketable securities -- $1.18 billion at quarter end, flat from Q4 as $74.6 million operating cash flow was offset by acquisition payments.
  • Fiscal Q2 2026 revenue outlook (period ending June 30, 2026) -- Expected between $355 million and $365 million, a 15%-18% sequential increase driven by further adoption of Scorpio, Aries, and Torus products.
  • Fiscal Q2 non‑GAAP gross margin guidance -- Approximately 73%, including a 200 basis point non‑cash impact from a one-time customer agreement.
  • Fiscal Q2 non‑GAAP operating expense guidance -- Projected at $128 million to $131 million.
  • AI platform design wins -- Initial volume shipments of Scorpio X 320‑lane fabric switch are ramping, with expanded hyperscaler design activity and two additional major hyperscalers expected to begin receiving Scorpio P Series late 2026.
  • Optical product progress -- Qualification process progressing at a major AI platform provider for the optical fiber coupler, with volume shipments targeted for 2027; XScale Photonics acquisition fully integrated and contributing to design pipeline.
  • Leo memory controller momentum -- Scheduled for deployment with Microsoft Azure M-series virtual machines and secured a new custom KV cache–oriented application design win, with related shipments expected in 2027.
  • UALink roadmap -- UALink-based switch products planned for initial deployments in 2027, positioning the company for future rack-scale connectivity cycles.
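The headline figures above are internally consistent. A minimal sketch in Python (using only the numbers stated in these takeaways and in the CFO's prepared remarks; dollar amounts in millions) shows how the reported operating margin and EPS follow from revenue, gross margin, operating expenses, interest income, and the tax rate:

```python
# Q1 FY2026 figures as stated (non-GAAP, $ millions).
revenue = 308.4
gross_margin = 0.764          # 76.4%
opex = 123.9                  # 96.2 R&D + 12.0 S&M + 15.7 G&A
interest_income = 11.6        # per the CFO's remarks
tax_rate = 0.11               # non-GAAP tax rate, per the CFO
diluted_shares = 181.2        # millions, fully diluted

gross_profit = revenue * gross_margin
operating_income = gross_profit - opex
operating_margin = operating_income / revenue          # ~36.2%, matching the takeaway
pretax_income = operating_income + interest_income
net_income = pretax_income * (1 - tax_rate)
eps = net_income / diluted_shares                      # ~$0.61, matching the takeaway

print(f"operating margin: {operating_margin:.1%}, EPS: ${eps:.2f}")
```

The stated 36.2% operating margin and $0.61 diluted EPS both fall out of this arithmetic to within rounding.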

Summary

The quarterly results highlight a significant acceleration in top-line and margin expansion, with sequential and annual revenue growth indicating strong customer demand across key product areas. Product and market developments include production shipments of the Scorpio X Series, ongoing adoption of PCIe Gen 6, and the full integration of the XScale Photonics acquisition, all contributing to diversified growth opportunities. Management emphasized a rapidly broadening pipeline, including optical, memory, and custom solution wins, positioning the company for increased dollar content per accelerator and new hyperscaler engagements in the upcoming quarters.

  • Management stated, "Scorpio will become our largest product line by the end of the year," signaling a material shift in mix and future revenue drivers.
  • CEO Jitendra Mohan said, "We have now shipped millions of PCIe Gen 6 ports to date," demonstrating broad-based deployment and customer traction across the PCIe portfolio.
  • The outlook anticipates fiscal Q2 earnings per share in the range of $0.68 to $0.70 on an expected 184 million diluted shares, with a non‑GAAP tax rate of approximately 12%.
  • Investments in design centers and expanded supply chain were cited to support both near-term ramps and supply assurance into 2027.
  • Volume shipments of new optical connectivity products and initial UALink-based switch deployments are expected to begin in 2027.
  • Enhanced Cosmos software integration is now enabling hardware-accelerated features and performance improvements directly in customer AI platforms.
  • Portfolio expansion was attributed to successful design-ins with major AI ecosystem leaders for both standard and custom-developed connectivity solutions.

Industry glossary

  • PCIe: Peripheral Component Interconnect Express, a high-speed interface standard for connecting various hardware components within data centers and servers, critical for AI and high-performance computing infrastructure.
  • Scorpio X Series / P Series: Astera Labs' families of AI fabric switches, supporting high lane counts and advanced hardware-accelerated features for AI network scaling (X for scale-up, P for diverse system topologies).
  • NPO: Near-Package Optics, integrating optical interconnects close to semiconductor packages to enable high-bandwidth, low-latency connectivity within rack-scale computing.
  • CPO: Co-Packaged Optics, technology embedding optical modules within the same package as networking silicon to increase bandwidth and reduce energy consumption versus traditional pluggables.
  • KV cache: Key-Value Cache, a memory architecture used to accelerate AI inferencing by offloading and optimizing specific memory functions.
  • Aries: Astera Labs' signal conditioning product line, supporting generational PCIe improvements for scale-out and scale-up connectivity.
  • Torus: Astera Labs' line of Ethernet AEC (Active Electrical Cable) modules, extending connectivity reach for both AI and general compute platforms.
  • UALink: An industry consortium specification focusing on open, high-bandwidth AI fabric switching for large-scale compute systems, with Astera Labs participating in switch development.
  • Cosmos software: Astera Labs’ proprietary software stack, enabling advanced diagnostics, telemetry, and performance tuning of its connectivity products within customer systems.

Full Conference Call Transcript

With us today are Jitendra Mohan, Chief Executive Officer and Co‑Founder; Sanjay Gajendra, President and Chief Operating Officer and Co‑Founder; and Desmond Lynch, Chief Financial Officer. Before we get started, I would like to remind everyone that certain comments made in this call today may include forward‑looking statements regarding, among other things, expected future financial results, strategies and plans, future operations, and the markets in which we operate.

These forward‑looking statements reflect management's current beliefs, expectations, and assumptions about future events which are inherently subject to risks and uncertainties that are discussed in detail in today's earnings release and in the periodic reports and filings we file from time to time with the SEC, including the risks set forth in our most recent Annual Report on Form 10‑K. It is not possible for the company's management to predict all risks and uncertainties that could have an impact on these forward‑looking statements or the extent to which any factor or combination of factors may cause actual results to differ materially from those contained in any forward‑looking statement.

In light of these risks, uncertainties, and assumptions, all results, events, or circumstances reflected in the forward‑looking statements discussed during this call may not occur and actual results could differ materially from those anticipated or implied. All of our statements are made based on information available to management as of today and the company undertakes no obligation to update such statements after the date of this call except as required by law. Also during the call, we will refer to certain non‑GAAP financial measures which we consider to be an important measure of the company's performance. For example, the overview of our Q1 financial results and Q2 financial guidance are on a non‑GAAP basis.

These non‑GAAP financial measures are provided in addition to, and not as a substitute for, financial results prepared in accordance with U.S. GAAP. A discussion of why we use non‑GAAP financial measures, which differ from our GAAP results primarily in stock‑based compensation, acquisition‑related costs, and the related income tax effects, and reconciliations between our GAAP and non‑GAAP financial measures and financial outlook are available in the earnings release we issued today, which can be accessed through the Investor Relations portion of our website. With that, I would like to turn the call over to Jitendra Mohan, CEO of Astera Labs.

Jitendra Mohan: Thank you, Leslie. Good afternoon, everyone, and thanks for joining our first quarter conference call for fiscal year 2026. Today, I will update you on AI infrastructure market trends, our Q1 results, and recent announcements. I will then turn the call over to Sanjay to discuss Astera Labs' growth profile. I would also like to welcome Des, our CFO, who is joining this call for the first time. Des will cover our Q1 financials and Q2 guidance. Since our last earnings call, AI infrastructure spending has clearly accelerated. Hyperscalers, AI labs, and sovereign entities are signaling the industry buildout is still in its early stages, underpinned by strong monetization and ROI.

We expect these strong secular trends to be a tailwind for Astera Labs' growth over the long term. Astera Labs delivered strong results in Q1 with revenue and non‑GAAP EPS above our outlook. Revenue for the quarter was $308 million, up 14% from the prior quarter and up 93% versus Q1 of last year. Revenue growth was broad‑based, spanning our signal conditioning and fabric switch product portfolios as we continue to diversify our business profile with new design wins across multiple customers and product categories. Our PCIe 6 business across both AI fabric and signal conditioning was strong in Q1, with revenue expanding to more than one‑third of our total revenue.

We have now shipped millions of PCIe Gen 6 ports to date, demonstrating the robustness and maturity of our PCIe portfolio. Torus smart cable modules for Ethernet AECs continue to perform well as new program designs shift into volume while others ramp to mature levels across GPU, XPU, and general‑purpose systems. On the scale‑up fabric front, our initial design wins with Scorpio X Series in smaller radix configurations shifted from pre‑production shipments to initial volume ramp during the first quarter. Building on this momentum, today we announced the expansion of our Scorpio product line of AI fabric switches for both scale‑up and scale‑out use cases.

Scorpio X Series now supports up to 320 lanes for high‑radix scale‑up networking and Scorpio P Series PCIe 6 portfolio now spans 32 to 320 lanes for diverse system topologies, making it the broadest in the industry. Our new flagship Scorpio X Series 320‑lane has been purpose built to maximize AI economics by leveraging hardware‑accelerated hypercast and in‑network compute engines to boost collective operations by up to 2x. In‑network compute offloads critical accelerator‑to‑accelerator communication and computation directly onto the switch, dramatically reducing the networking overhead during large‑scale training and inference.

These hardware capabilities are delivered through enhancements to our Cosmos software, which can now integrate deeper into our customers' software stacks, providing not only diagnostics and telemetry but also directly improving AI platform performance. These advanced hardware and software capabilities are a result of Astera Labs' deep system‑level understanding of AI architectures and close customer collaborations, creating a durable competitive moat. We are excited to report that we are now shipping initial volumes of our new 320‑lane Scorpio X, with production volumes ramping in 2026. Scorpio X Series is also seeing widening interest and design activity with hyperscalers, edge AI inference providers, and enterprise infrastructure builders to address high‑bandwidth AI clustering use cases.

Scorpio P Series continues to grow through 2026, and we expect initial shipments to at least two additional major hyperscalers towards the end of 2026, with broader deployment in 2027. On the optical front, we made good progress during the quarter as we continue to work through the qualification process at a large AI platform provider with our ultra‑high‑precision optical fiber coupler product, which we expect to ship in volume starting in 2027. We are actively expanding our volume manufacturing capabilities to support the ramp of both scale‑out and scale‑up NPO applications.

Beyond the early commercial traction of our merchant connectors, our high‑density fiber coupler technology will be a critical piece of our long‑term optical roadmap for NPO and CPO applications. Finally, our Leo memory controller is on track for an early ramp of CXL‑attached memory with Microsoft Azure M‑series virtual machines, and during the quarter, we captured a new custom design win for a KV cache–oriented application, with shipments expected in 2027. As we look to 2026, robust demand reflects secular AI infrastructure spending, deep customer partnerships, and expansion towards higher‑value solutions within our portfolio. This trend is quickly increasing our silicon dollar content opportunity beyond $1,000 per XPU and positions Astera Labs to outperform our end‑market growth rates.

As a result, we expect strong revenue growth to continue through 2026 and into 2027, driven by the proliferation of AI fabrics and the industry's transition to PCIe 6, 800‑gig, and 1.6T Ethernet connectivity. Based on the momentum we are seeing in 2026, we are strategically investing to drive strong continued growth. Our acquisition of XScale Photonics has created immediate design opportunities, and our design center is fully integrated and working with customers on new programs. We have expanded our product portfolio, increased dollar content per accelerator, and diversified our customer base with additional design‑ins.

We are making progress within large market opportunities including optical engines and interconnects, UALink fabrics, and custom solutions for NVLink and AI inferencing. Most of all, I am proud of the stellar team we have built through worldwide hiring and thoughtful acquisitions, the progress we have made, and the results we are delivering together. With that, let me turn the call over to our President and COO, Sanjay Gajendra, to outline our vision for growth over the next several years.

Sanjay Gajendra: Thanks, Jitendra, and good afternoon, everyone. Today, I will provide an update on our recent execution, followed by an overview of the meaningful market opportunities that will fuel Astera Labs' growth over the next several years. Astera Labs' mission is to deliver a purpose‑built intelligent connectivity platform with a portfolio of standard, custom, and platform‑level solutions across copper and optical interconnects for rack‑scale AI infrastructure deployments. As AI deployments advance to production at scale with a focus on operational efficiency, infrastructure teams face a new set of constraints: multitrillion‑parameter models, agentic workflows, and multistep reasoning distributed across heterogeneous compute infrastructure, to name a few.

The industry needs connectivity and solutions purpose built to address these workloads: higher radix to simplify topologies, intelligent fabric capabilities to reduce communication overhead, open and platform‑specific optimization, and data‑center‑grade diagnostics to maintain uptime when a single fault can cost millions of dollars in idle compute. Let me now walk through our approach to address these evolving needs and our future strategy. Starting with our standard products, we continue to see strong momentum across both AI fabric and signal conditioning portfolios. We strengthened our mission‑critical position with the introduction of our flagship Scorpio X Series 320‑lane scale‑up fabric switch and the overall expansion of our Scorpio switch portfolio.

The Scorpio X Series 320‑lane high‑radix AI fabric switch replaces multiple legacy switches to enable large scale‑up cluster sizes in a single hop and reduces overall latency. Several new features, such as in‑network compute, reduce time‑to‑first‑token and improve tokens‑per‑watt performance. The newly expanded Scorpio P Series PCIe switch portfolio now spans from 32 lanes to 320 lanes to enable diverse accelerator optionality and system topologies. Our AI fabric portfolio is poised to expand further into 2027 with the introduction of UALink‑based products for AI scale‑up platforms. In early April, the UALink Consortium published a new specification which defines in‑network compute, chiplets, manageability, and 200G performance.

UALink 2.0 delivers these advancements with an open, vendor‑neutral approach and confirms that scale‑up switching is not simply hardware, but an AI‑aware fabric actively helping the system compute and drive performance. This evolution plays into Astera Labs' strengths, as demonstrated by the industry‑leading feature set that is being deployed through our Scorpio portfolio expansion today. The maturity of the ecosystem is also accelerating, with OEMs and suppliers working tightly to deploy initial programs in 2027. On the signal conditioning portfolio, our Aries products will expand to support PCIe 7 and our Torus portfolio into 1.6T Ethernet, positioning us at the forefront of the next connectivity upgrade cycle.

Turning to our optical business, Astera Labs' signal connectivity business is driven by the rapid shift of AI systems towards rack‑scale architectures and higher compute capabilities, where scaling performance increasingly depends on high‑bandwidth, high‑radix, low‑latency interconnects. These requirements will expand our AI connectivity opportunities across both copper and optical interconnects. Astera Labs is well positioned to lead this transition by extending its proven value‑chain approach from copper into optics. Over the past couple of years, we have been systematically investing to broaden our internal capabilities across advanced analog and mixed‑signal design, DSP, electronic ICs, photonic ICs, and optical packaging, while also deepening our supply‑chain relationships. Together, these capabilities will enable high‑volume deployment of a complete scale‑up optical engine.

We are focused on three areas pertaining to scale‑up optics: 1) high‑density detachable, reflowable fiber‑attach solutions using the core technology from our XScale acquisition—we expect to ship these connectors in volume starting in 2027; 2) chipsets in support of NPO that will enable multi‑rack AI clusters starting in 2027; and 3) eventually fully optically enabled Scorpio X fabric switches with CPO supporting larger domains, higher egress densities, and bandwidth. Next, let me talk about our custom solutions business that also continues to make meaningful progress as we work to develop new products and close on new designs.

Once again, tight collaboration with hyperscaler customers coupled with a diverse set of foundational technology and operational capabilities have been essential to our initial success. These opportunities represent a new multibillion‑dollar market opportunity for Astera Labs. First, we are engaging with multiple customers to enable NVIDIA NVLink Fusion's scale‑up architecture for hybrid racks. Our strong historical execution delivering intelligent connectivity solutions for NVIDIA‑based systems positions us well to develop and design within these new custom programs. Second, we are seeing new custom solution opportunities within the memory space for KV cache applications.

We are happy to report that we have won a new design leveraging a customized version of our Leo CXL controller to maximize performance within these AI use cases. Overall, we are pleased with the initial traction we have seen on the custom solutions front and have conviction that this opportunity set will continue to broaden and become a meaningful business for Astera Labs over the next few years. Finally, we continue to demonstrate solid momentum with our platform business as we ultimately look to expand beyond add‑in cards and smart cable modules to enable broader rack‑scale solutions for customers.

As we have grown from an I/O component supplier to an AI fabric solution provider over the past couple of years, customers are looking for Astera Labs to bring additional value to the AI rack at the system level. In conclusion, Astera Labs is at a key inflection point in the company's journey as we begin to ship production volumes of our scale‑up AI fabrics. We are also making great strides towards broadening our business across new product categories, including optical and custom solutions, as our partners look for us to deliver more value in next‑generation systems. Therefore, we will continue to strategically and thoughtfully invest as we position Astera Labs to deliver growth rates above our end‑market benchmarks over the long term.

With that, I will turn the call over to our CFO, Desmond Lynch, who will discuss our Q1 financial results and our Q2 outlook.

Desmond Lynch: Thank you, Sanjay, and good afternoon, everyone. I am pleased to be joining you today for my first earnings call as CFO of Astera Labs. I look forward to partnering with Jitendra, Sanjay, and the rest of the leadership team as we continue to drive long‑term value for our shareholders. Today, I will begin by reviewing our Q1 financial results and will then discuss our Q2 guidance, both presented on a non‑GAAP basis. Revenue in Q1 2026 was $308.4 million, up 14% versus the previous quarter and up 93% year over year. We saw revenue growth across our signal conditioning and switch fabric portfolios, supporting both scale‑up and scale‑out connectivity for AI fabric and reach‑extension applications.

Our Scorpio product family performed well in Q1, driven by strong demand for PCIe Gen 6 switching applications and continued expansion of designs across various platforms. During the quarter, Scorpio X Series products began shipping in initial production volumes. Looking ahead, we expect Scorpio X Series shipments to increase in Q2 along with initial shipments of our new Scorpio X 320‑lane and then ramp to full volume production in 2026. Aries revenue grew on strong early adoption of our PCIe 6 solutions for both scale‑out and scale‑up signal conditioning. In total, PCIe Gen 6 revenue across AI fabric and signal conditioning contributed more than one‑third of total company revenue in the quarter.

Torus also delivered solid results driven by broad adoption of AEC to extend reach in both AI and general‑purpose compute platforms. Non‑GAAP gross margin for the first quarter was 76.4%, up 70 basis points sequentially, primarily driven by a lower mix of hardware sales across our signal conditioning portfolio. Non‑GAAP operating expenses for the first quarter were $123.9 million, reflecting continued R&D investment to support our expanding product roadmap, including a full quarter of our XScale acquisition and a partial quarter of our newly formed Israel Design Center. Within Q1 non‑GAAP operating expenses, R&D expenses were $96.2 million, sales and marketing expenses were $12 million, and general and administrative expenses were $15.7 million.

Non‑GAAP operating margin for the first quarter was 36.2%. We will continue to invest strategically to drive above‑industry revenue growth over the long term while maintaining strong and durable profitability. For the first quarter, interest income was $11.6 million, our non‑GAAP tax rate was 11%, and non‑GAAP fully diluted shares outstanding were 181.2 million shares. Non‑GAAP diluted earnings per share for the quarter were $0.61. We ended the quarter with cash, cash equivalents, and marketable securities totaling $1.18 billion, flat versus Q4, as cash from operations of $74.6 million was offset by cash paid for acquisitions.

Now turning to our outlook for the second quarter, we expect revenue to be between $355 million and $365 million, up 15% to 18% sequentially, driven by continued strength across our AI fabric and signal conditioning portfolios. Aries revenue growth is expected to be driven by continued strong adoption of PCIe 6 across AI platforms, supporting both scale‑up and scale‑out connectivity. Torus growth is expected to be driven by increased volumes for AI scale‑out connectivity. And in AI fabric, we expect robust growth driven by the continued early‑stage ramp of our Scorpio X Series products for large‑scale XPU clustering applications as well as continued growth in our PCIe solutions in customized GPU platforms.

We expect second‑quarter non‑GAAP gross margin to be approximately 73%. This outlook includes an estimated 200 basis point non‑cash impact related to a recently executed one‑time agreement with one of our customers. We expect second‑quarter non‑GAAP operating expenses to be between $128 million and $131 million. Interest income is expected to be approximately $11 million and we expect a non‑GAAP tax rate to be approximately 12%. We expect our Q2 share count to be 184 million diluted shares outstanding. Overall, we are expecting non‑GAAP fully diluted earnings per share to be between $0.68 and $0.70. This concludes our prepared remarks, and once again, we appreciate everyone joining the call.
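Taken together, the guidance items above imply the stated EPS range. A quick sketch in Python (amounts in millions; using guidance midpoints, which are my assumption rather than figures management gave) shows the arithmetic:

```python
# Q2 FY2026 guidance as stated (non-GAAP, $ millions), evaluated at midpoints.
revenue_mid = (355 + 365) / 2          # revenue guidance midpoint
gross_margin = 0.73                    # ~73%, incl. the 200 bps one-time non-cash impact
opex_mid = (128 + 131) / 2             # operating expense guidance midpoint
interest_income = 11.0
tax_rate = 0.12                        # non-GAAP tax rate
diluted_shares = 184.0                 # millions

operating_income = revenue_mid * gross_margin - opex_mid
eps = (operating_income + interest_income) * (1 - tax_rate) / diluted_shares
print(f"implied midpoint EPS: ${eps:.2f}")
```

The midpoint works out to roughly the center of the guided $0.68 to $0.70 range; without the one-time 200 basis point gross margin impact, implied EPS would be a few cents higher.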

I will now turn the call back to our operator to begin Q&A. Operator?

Operator: Thank you. At this time, I would like to remind everyone in order to ask a question, press star then the number one on your telephone keypad. We ask that you please limit yourself to one question to allow everyone an opportunity to ask a question. If time permits, we may queue again for follow‑up questions. We will now open the call for questions. We will take our first question from Harlan Sur at JPMorgan.

Harlan Sur: Good afternoon. Thanks for taking my questions, and great job on the execution by the team. Your customers went through a compute workload inflection from training to inference in the second half of last year, and they are essentially very focused now on monetization. We saw that as inferencing workflows evolved from one‑shot to reasoning to agentic, this created new silicon opportunities. It created new storage tiers. It created more demand for high‑performance CPUs. Obviously, storage and CPUs communicate via PCIe, so this is right in the sweet spot of your technology and product leadership. That is one example.

Your CXL solutions targeted at KV cache applications may be another example, but can you help us understand how the transition to more inferencing‑based workloads, especially agentic‑based workloads, has potentially helped to create new opportunities for the team and potentially expand your SAM opportunity?

Jitendra Mohan: Harlan, thank you. You point out very correctly that inferencing has created a lot of focus in the industry and a lot of additional opportunities. The good news is that at Astera Labs, we have been focused on these AI applications from the start. We supported training workloads when they were the mainstream, and we are helping inferencing workloads equally well. The KV cache offload is a great opportunity; as we mentioned earlier, we picked up a new design for a custom KV cache offload application. That is really a key part of AI inferencing.

I also want to draw your attention to the newly introduced Scorpio X 320‑lane family that supports in‑network compute and hypercast. Both of these are extremely important technologies to reduce the networking overhead and deliver additional performance for training as well as inferencing. And not only that, we enable these hardware‑accelerated modes through our Cosmos software which now not only gives our customers the ability to do diagnostics and telemetry, but allows them to uniquely improve the performance of their system for their inferencing workload using these unique capabilities that we have worked in tight collaboration with our customers.

Operator: We will move to our next question from Blayne Curtis at Jefferies.

Blayne Curtis: Hey, guys. Good afternoon, and I will echo the congrats on the nice results. Maybe you can, in terms of the Scorpio ramp—I know last quarter you talked about it being 20% of revenue. It is a big ramp. I am assuming that is the biggest driver into June. I was wondering if you can kind of frame just how big that is. And then I am curious, particularly this 320‑lane product that is ramping—what are the milestones, and what is left to do? You have sampled it, but to get that to production in an AI server, I am just kind of curious what is left there.

Desmond Lynch: Hi, Blayne. It is Des. Thanks for your question. We have been very pleased with the performance of our Scorpio product family. It has certainly been a large driver for growth in the first half of the year. We continue to expect to see Scorpio P continuing to ramp driven by scale‑out opportunities. And then Scorpio X—this is really a greenfield opportunity for us associated with scale‑up connectivity. The small solutions are ramping today, and we do expect to see the layering in of the high‑radix configurations in the second half of the year.

Given the size of the opportunity and the associated dollar content, we would expect to see that Scorpio will become our largest product line by the end of the year, which is strong performance for the product line that was only a small percent of total company revenue last year. And as we go throughout the year, I would expect to see X Series revenue exceeding P Series. But overall, we are very pleased with the performance of the Scorpio product family and the outlook of the business. Then into your second point about other milestones—

Jitendra Mohan: We are already shipping, as Des mentioned, the newly introduced Scorpio X family, and you will be able to see and touch and feel this at Computex where we will be demonstrating this live in our booth.

Operator: We will move next to Joe Moore at Morgan Stanley.

Joe Moore: Great. Thank you. You talked quite a bit about your optical strategy. Can you talk about the timeframe where you see optical scale‑up becoming more relevant? And do you have the building blocks that you need to progress from copper to optical in that space, or do you need tuck‑in type technologies, and do you need to invest a lot more? Just a general sense of what it is going to take to transition from copper to optical over the next several years.

Sanjay Gajendra: Thanks for the question. For the last couple of years, we have been building all the foundational pieces required for optical enablement: the mixed‑signal design, the electronic ICs, and, through the XScale acquisition, the pluggable connector and PIC technology. In general, I want to say we have made tremendous progress in preparation for the optical opportunities ahead of us. In terms of timeline, we believe the NPO‑based opportunities, or near‑package optics, will be the first to ramp, and that will start happening in 2027.

We will also be ramping our pluggable connector technologies for AEC, mostly for scale‑out, next year, 2027, with more of the main deployments for CPO happening in the 2028 timeframe. So in general, for us, between the components that we are building that go inside the NPO, the detachable connector technology for folks that have their own CPO solutions, as well as our own Scorpio X devices that will come in to support both NPO variants and CPO variants, we believe it is all coming together nicely for us.

One key consideration, of course, that we have been working on is the supply chain and getting all of the commitments in place so that we can not only provide the technology that is required for NPO and CPO, but also make sure that we are able to ship to revenue. Overall, there is quite a bit of work and progress that we have done enabling us to start ramping in 2027.

Operator: We will take our next question from Ross Seymore at Deutsche Bank.

Ross Seymore: Congrats on the strong results and guide. I just want to talk about a small part of your business today, but something that sounds like it could grow a little faster than we thought before, and that is specifically your Leo product line. Given the resurgence of CPU demand, and memory being such a large cost and bottleneck these days, how has the demand trajectory and growth potential changed in your view, particularly your ability to do the pooling and sharing on the memory side with CXL in general?

Jitendra Mohan: We are definitely seeing increased traction for CXL, not only for the general‑purpose compute applications where we started, but also for inferencing as we touched upon earlier. Staying with general‑purpose compute first, we are seeing additional demand from our customers. We are on track for deploying this with Microsoft Azure for their M‑series instances at the data center. That is in private beta now, expected to go into general availability end of the year. We see additional customers also following suit for this particular high‑memory‑type application. In addition, we are also excited by the new KV cache offload or AI inferencing opportunities. Some of our customers have already designed us in.

In fact, we picked up our second design win—a custom application for CXL—earlier this quarter. We are working with our customer, which is an additional new hyperscaler, on at‑scale performance tests and expect that one to ship revenue in 2027.

Operator: We will go next to Tore Svanberg at Stifel.

Tore Svanberg: Yes, thank you. Congrats on the record quarter, and Des, welcome on board. I wanted to follow up on what you said about Scorpio mix as we approach the end of the year, especially in relation to Aries. Because obviously Aries is now ramping in PCIe Gen 6. Next year, obviously, there is going to be a lot of mixed networking topologies. So I understand Scorpio will be the biggest product by the end of the year. How should we think about 2027 between Aries and Scorpio? Because there are significant drivers for both.

Desmond Lynch: Hey, Tore. Thanks for the question. Yes, we have been very pleased with the growth rate of our Scorpio product family, as I mentioned earlier—really excited about the continued growth opportunity ahead of us. That said, we still expect to see strong growth within the Aries product line. We expect to continue to grow our leadership position there. We expect to see strong growth given the PCIe 6 portfolio. It is just the fact that Scorpio will continue to be our largest and fastest‑growing business within the company.

Operator: Next, we will move to Ananda Baruah at Loop Capital.

Ananda Baruah: Yeah, good afternoon, guys. Thanks for taking the questions, and congrats on the great execution here. I guess the question would be, what is a good way—particularly with all the additional context you have given around Scorpio X and Scorpio P lanes progressing through the back half of ’26—as we move forward post ’26, and clusters get bigger, and presumably high‑radix switches have more ports, should we expect Scorpio X and Scorpio P switches to continue to increase the lane count? And if so, is there any useful anecdotal way to think about how that may occur? Should we just think that can continue in some perpetuity?

Jitendra Mohan: Thanks for the question. We can talk for an hour just on that topic, but let me say this. The AI fabric switches have become a very important part of our overall strategy, and we are investing heavily not only in the current generation that we have announced, but also upcoming devices. We are going to continue to focus on PCI Express because that is a large part of the business today, but we are also working on UALink products that will form the basis of the next generation of these devices.

In terms of the lane count, we work very closely with our customers to understand what their deployment profile is going to look like because it is really important to target the right lane counts and rate for these devices. If you do not, then the cluster sizes get limited, and if you over‑index, then you come up with a solution that is not competitive. Fortunately, we have very good partnerships with our customers and they are telling us what the deployment looks like. I also want to add that as the cluster sizes increase, it is not only important to have a switch; it is also important to have the right media types for the deployment.

So for our family of switches, we will continue to support copper connectivity as we have so far. As Sanjay mentioned earlier, increasingly we will enable optical connectivity as well, starting with NPO with the next generation of switches and then going to CPO. As a switch company, it gives us a perfect opportunity to deploy optical solutions, and that is something that we will completely leverage to make sure that we have end‑to‑end connectivity with our switches, including copper, NPO, and CPO.

Operator: We will take our next question from Natalia Winkler at UBS.

Natalia Winkler: Thank you for taking my question, and congratulations on the results. I was wondering if you can add a little bit more color on the NVLink Fusion opportunity for you guys. Specifically, how do you see it from the standpoint of portfolio—where it would be most interesting for you—and also from the standpoint of the competitive landscape given some of the partnerships that NVIDIA has for NVLink Fusion as well.

Sanjay Gajendra: Thanks for the question. In general, if you look at our business, you can broadly divide that into three categories: standard products, custom solutions, and the module/solution business. Clearly, an area that we see tremendous opportunity for us going forward is the custom solutions under which we are developing the NVLink Fusion–type devices. This is proving to be pretty interesting. We have several very deep engagements for an initial design win in collaboration with NVIDIA and a hyperscaler. That project is going well, and we do expect that to start contributing revenue in 2027, as some of the GPUs that are designed for this kind of use case—which is called a hybrid rack situation—come to market.

In a hybrid rack, the GPU or the XPU still talks native protocols, which could be protocols like PCIe or UALink and others, but when they need to leverage and cross over and talk to an NVLink‑type ecosystem, then they would need a product that is based on NVLink Fusion that we are developing. In short, we are very deep in engagement from a silicon development standpoint, so we do expect that this will start providing some meaningful revenue in 2027 and then grow from there. On the competitive situation, this is an ecosystem that NVIDIA is creating with NVLink Fusion.

There are others, but for us, the main thing is that we have been engaged with real customers and real applications, and to that end, we will continue to focus on that and do what we need to do, and not get distracted by any competitive press releases.

Operator: We will go to our next question from Sebastien Cyrus Naji at William Blair.

Sebastien Cyrus Naji: Congrats on strong results. My question is on the Scorpio business and maybe a little bit of a follow‑up to one of the prior questions. With your announcement of the new 320‑lane Scorpio switches for both the X and P Series, how should we be thinking about ASPs for the higher‑radix solutions? Is it right to think that your dollar content is correlated directly to the lane count, or is there another way to think about your dollar content? Any details there?

Sanjay Gajendra: In general, the bigger the switch, the higher the ASP—that is the way the industry works. But also please keep in mind that these switches are more like AI fabric‑class devices, which are a lot more than just the number of lanes. We talked about in‑network compute, we talked about hypercast, and we talked about several features that we have that are unique and critical for deploying AI clusters—whether for training or, more and more, for inference applications where things like latency become super important. So when it comes to ASPs, it is a combination of what features are enabled and not just based on lane count.

We do see our content continue to increase, and to that end we are expecting—and going forward with the design wins we have—over $1,000 worth of content per accelerator, and that is significant and growing rapidly for us. Considering the path that we have taken so far—from offering retimers to now offering complete AI fabric, and with the future products with optically enabled switches and so on—you can imagine that this content would grow from a dollars‑per‑accelerator standpoint.

Operator: We will go next to Quinn Bolton at Needham.

Quinn Bolton: Hey, guys. Let me offer my congratulations as well. You mentioned the KV cache offload custom design. I am wondering if you might be able to put any sort of numbers around it in terms of dollar content per CPU or dollar content per gigabyte or terabyte of memory that is attached. Is there a way we can think about how to size that opportunity?

Sanjay Gajendra: These are going into new inference applications. There are multiple use cases and platforms that we see for this. In that context, this would be a significant opportunity for us to execute and deliver on. In terms of exact dollar association, it is probably a little bit early because some of the platforms and architectures are being finalized. But in general, for us, inference and KV cache is a significant opportunity. We have the IP not just for memory, but for things like KV cache acceleration as part of our portfolio right now. We will increasingly develop products that provide more function and capability to ensure that memory is available for KV cache use cases.

I will also say that the ASPs will continue to be pretty meaningful when you think about the cost of the memory. In other words, the cost of these controllers will always pale in comparison to the amount of money that people are paying for the memory itself. So these are not ASP‑challenged, and we will continue to make sure that we extract the most value out of these products.

Operator: We will move to our next question from Karl Ackerman at BNP Paribas.

Analyst: Hi, this is Sam Feldman on for Karl Ackerman. Thanks for taking my question. You mentioned near‑package optics as a solution to CPO. From Astera Labs' point of view, do you believe customers view XPO as a viable option for extending pluggable optics? And does Astera Labs plan to participate in the XPO MSA?

Jitendra Mohan: That is a great question. We work very closely with our customers to understand what solutions they are looking for. XPO is a pluggable technology that has come about recently, and we will certainly participate in that. But not all of our customers at the moment are looking to intercept XPO. The customers that are looking to intercept with NPO, we will certainly support, because it gives you a way to have very high egress density without the limitations of faceplate density. The customers that want us to work directly on CPO, we absolutely will work with them. As Sanjay mentioned earlier, we are engaged in that opportunity. That should ship in 2027.

And for customers that are looking to do XPO, we will engage with them as well. Right now, our focus has been on NPO and CPO so far.

Operator: We will take our next question from Suji Desilva at ROTH Capital.

Suji Desilva: Hi, Jitendra, Sanjay, and welcome, Des. Just a bigger‑picture question. You mentioned the word “custom” quite a bit on this call, more than in the past. When you first got going, Hopper was there and Aries was fairly standard. Are we past the point, or evolving to the point, where standard products are not as applicable because each platform is different? Should we think of all products as having some customization, or where is the line there?

Sanjay Gajendra: I am glad you asked the question. If you think about infrastructure and AI use cases, they all are unique between platforms and between customers. Having said that, if you look at the software‑defined architecture we have with our products—even our standard products like Aries, Torus, Scorpio, and so on—they provide a ton of customization that customers leverage through the Cosmos interface. Cosmos allows them to not only monitor, but also customize, and now with the new devices we announced today, they can do a lot more from a performance and key offload feature‑enablement standpoint. So customization has been our story through software‑defined architecture and offered through our standard products.

But when we talk about our business, the business model is different. We are developing a product for a given customer under a business model that includes NREs and other ways of paying for the development and, of course, the product revenue that comes when the product starts shipping. As we are getting into bigger devices—whether it is for fabric‑class or other connectivity technology that goes beyond what we have done so far—having the custom solution portfolio is important. We are approaching that with our customers by also offering a variety of foundational technology that we have been building for the last couple of years. We see custom being an important growth driver for us.

At the same time, please think about our business in a way where the standard products continue to be a very important part of our overall portfolio. We will do custom, but we will be very systematic about it. We will not take any opportunity that comes our way because sometimes the custom business can be so unique to one customer, with a lot of risk and margin implications. We will be systematic and thoughtful about the opportunities that we pursue on the custom side.

Operator: We will go next to Mehdi Hosseini at Susquehanna.

Analyst: Hi, this is Bashan filling in for Mehdi. Congrats on the quarter, and welcome, Des. I wanted to follow up on UALink. Can you share an update on the adoption process and the timeline for UALink‑based switches? And what do you expect the dollar content to be? How should we think about the difference between PCIe switching pricing and the UALink pricing?

Jitendra Mohan: Within the last three months or so, we have had a couple of announcements from our hyperscaler customers on what the intercept is. Both Amazon and AMD have said that their ASIC and GPU will launch sometime in 2027, and we will certainly be prepared to intercept that launch with our UALink switch. In terms of the comparison of a UALink switch to PCI Express, a couple of things to state: as we go into this new generation of devices, both the complexity and the speed of these devices are going up, sometimes in lane count, other times in radix.

The value that we are able to charge for these devices will be substantially higher than what we are able to do for PCI Express switches. The media attach also tends to change. We may go from a majority copper PCIe to a blend of copper and NPO with the next‑generation switches. That also gives us a meaningfully large opportunity in terms of revenue and the TAM that we are able to address, finally leading up to CPO, which is a really rich opportunity with a very large TAM that we are able to address, all because we have the platform in the form of Scorpio X switches.

Operator: We will move next to Tore Svanberg at Stifel for a quick follow‑up on capacity.

Tore Svanberg: Yes, just a quick follow‑up on capacity. Your inventory days, I think, came in at 75 days—

Desmond Lynch: Hi, Tore. It is Des here. Based upon our current view of demand, we do have supply in place through the end of the year, and we are very comfortable with our inventory holdings here. Like others within the industry, we continue to see pockets of supply challenges, but we have done a really nice job of diversifying our back‑end supply chain, and we have been able to make sure that we have sufficient supply in place to meet our revenue commitments. So no concerns just now, and we continue to work with our supply chain partners on supply going into 2027.
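For readers following along, the "inventory days" figure Tore cites is conventionally computed as days inventory outstanding (DIO): inventory divided by the period's cost of goods sold, scaled to the number of days in the period. A minimal sketch, with hypothetical dollar amounts that are illustrative only and not taken from Astera Labs' filings:

```python
def days_inventory_outstanding(inventory: float, cogs_quarter: float, days: int = 90) -> float:
    """DIO: inventory divided by quarterly cost of goods sold, scaled to days."""
    return inventory / cogs_quarter * days

# Hypothetical example: $60M of inventory against $72M of quarterly COGS
# works out to 75 days of inventory on hand.
dio = days_inventory_outstanding(60e6, 72e6)
print(round(dio))  # 75
```

Variants of the metric use average inventory over the period or a 365-day annualized base; the quarterly form above matches how a single quarter's balance is usually quoted on a call.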

Operator: And that concludes the question and answer session. I will turn the call back over to Leslie Green for closing remarks.

Leslie Green: Thank you, Audra, and thank you, everyone, for your participation and questions. Please do refer to our Investor Relations website for information regarding upcoming financial conferences and events. Thanks so much.

Operator: And this concludes today’s conference call. Thank you for your participation. You may now disconnect.

Should you buy stock in Astera Labs right now?

Before you buy stock in Astera Labs, consider this:

The Motley Fool Stock Advisor analyst team just identified what they believe are the 10 best stocks for investors to buy now… and Astera Labs wasn’t one of them. The 10 stocks that made the cut could produce monster returns in the coming years.

Consider when Netflix made this list on December 17, 2004... if you invested $1,000 at the time of our recommendation, you’d have $490,864!* Or when Nvidia made this list on April 15, 2005... if you invested $1,000 at the time of our recommendation, you’d have $1,216,789!*

Now, it’s worth noting Stock Advisor’s total average return is 963% — a market-crushing outperformance compared to 201% for the S&P 500. Don't miss the latest top 10 list, available with Stock Advisor, and join an investing community built by individual investors for individual investors.

See the 10 stocks »

*Stock Advisor returns as of May 5, 2026.

This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.

The Motley Fool recommends Astera Labs. The Motley Fool has a disclosure policy.

Disclaimer: For information purposes only. Past performance is not indicative of future results.