Rambus (RMBS) Q1 2026 Earnings Transcript

Source: The Motley Fool

Date

April 27, 2026

Call participants

  • Chief Executive Officer — Luc Seraphin
  • Interim Chief Financial Officer — John Allen

Takeaways

  • Total Revenue -- $180.2 million, matching guidance, with all business units contributing.
  • Product Revenue -- $88 million, reflecting 15% year-over-year growth, primarily driven by DDR5 product strength.
  • Royalty Revenue -- $69.6 million, versus licensing billings of $70.8 million, due to timing differences in revenue recognition and billing.
  • Contract and Other Revenue -- $22.6 million, mainly generated by silicon IP activities.
  • Operating Costs -- $104.6 million, inclusive of cost of goods sold; operating expenses at $69.9 million, increasing sequentially due to payroll-related taxes on equity vesting.
  • Non-GAAP Net Income -- $69.3 million, applying a 16% flat tax rate for non-GAAP pretax income.
  • Operating Cash Flow -- $83 million, resulting in an increase of $24 million in cash, cash equivalents, and marketable securities to a balance of $786 million.
  • Free Cash Flow -- $66.3 million, after $17 million in capital expenditures and $38 million in taxes on equity vesting.
  • Inventory -- $14 million increase during the quarter, with management indicating continued strategic inventory build to support growth and mitigate supply chain risks.
  • Q2 2026 Revenue Guidance -- $192 million to $198 million; product revenue guidance at $95 million to $101 million, reflecting a sequential midpoint increase of 11% over Q1.
  • Q2 2026 Earnings Per Share Guidance -- Non-GAAP EPS guided between $0.65 and $0.73.
  • New Product Launch -- Announced chipset for JEDEC-standard LPDDR5X SOCAMM2 server modules, introducing new voltage regulators and SPD Hub; expected minimal financial impact in 2026 due to low volumes and content.
  • Silicon IP Momentum -- Continued Tier 1 customer design wins and portfolio expansion, including launch of industry’s fastest HBM4E controller and new Ultra Ethernet network security engine.
  • Companion Chips Revenue -- “Low double-digit percent of our total product revenue during the first quarter,” with expectations for a similar or slightly higher percentage contribution in Q2.
  • MRDIMM Outlook -- Management values current serviceable addressable market (SAM) at approximately $600 million, with ramp timing linked to platform launches from Intel and AMD, and material revenue expected in 2027 and beyond.
  • Market Share Position -- Exited 2025 with “mid-40% share”; no indication of share erosion into 2026.
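
The sequential midpoint figure in the product revenue guidance above can be reproduced with a quick back-of-envelope check. This is our own arithmetic on the quoted figures, not a calculation from the release itself:

```python
# Sanity check on the Q2 product revenue guidance quoted in the takeaways.
# All figures are in $ millions and come from the quoted guidance.

q1_product = 88.0              # Q1 product revenue
q2_low, q2_high = 95.0, 101.0  # Q2 product revenue guidance range

midpoint = (q2_low + q2_high) / 2          # midpoint of the guided range
growth = midpoint / q1_product - 1         # sequential growth vs. Q1

print(f"Q2 midpoint: ${midpoint:.0f}M, sequential growth: {growth:.0%}")
# Q2 midpoint: $98M, sequential growth: 11%
```

The $98 million midpoint against $88 million of Q1 product revenue works out to roughly 11%, matching the guided sequential increase.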


Risks

  • Luc Seraphin said, “we're watching the situation with supply, especially on the back end. Certainly, since last quarter, the situation has not improved. We're working with our suppliers, but the lead times are long, and there is tension on the back end.”
  • Management indicated that supply chain tightness and platform launch timing could constrain product revenue growth, noting “we don't see the situation as materially different than what we saw in Q1. But from a supply standpoint, things have not improved.”

Summary

Rambus (NASDAQ:RMBS) reported results that met guidance, driven by 15% year-over-year product revenue growth and continued strength in its IP and licensing businesses. Inventories were strategically increased to address supply chain constraints expected to remain tight into 2027, and management highlighted ongoing supply chain normalization following the prior quarter's operational disruption. While the recently introduced LPDDR5X SOCAMM2 chipset is considered strategically important, its immediate revenue impact will be negligible in 2026, with expectations for significant contributions only as next-generation architectures mature. The company reiterated a sequential revenue growth outlook into the second half, with new companion chip products and expanding silicon IP traction among hyperscalers cited as supportive factors.

  • Management expects MRDIMM revenue to scale with forthcoming Intel and AMD platform launches, anticipating initial volumes late 2026 and broader ramp in 2027.
  • Increasing CPU-to-GPU ratios and the expansion of heterogeneous memory architectures in AI data centers may amplify demand across the company’s DDR5, MRDIMM, and LPDDR product lines.
  • Share gains realized in the past year are projected to continue, with no sign of erosion as the market transitions to new DDR5 generations.
  • Tier 1 customer wins for silicon IP—including PCIe retimer, switch IP, and HBM4E controller—drive expectations for 10%-15% annual growth in that segment.
  • Strong cash flow and balance sheet flexibility are enabling proactive inventory strategies to capture demand and manage supply risk.

Industry glossary

  • MRDIMM: Multiplexed Rank Dual In-line Memory Module—a high-capacity, high-bandwidth server memory module supporting next-generation data center performance requirements.
  • SOCAMM2: Small Outline Compression Attached Memory Module 2—a form factor enabling LPDDR-based memory for server-class applications, featuring compact architecture and integrated voltage regulation.
  • RCD: Registering Clock Driver—a chip used in server DIMMs to buffer command and address signals for improved performance and scalability.
  • HBM4E: High Bandwidth Memory Fourth Generation, Extended—a high-speed memory interface standard optimized for AI accelerators and advanced compute workloads.
  • SPD Hub: Serial Presence Detect Hub—an integrated component on memory modules that provides module configuration and health information to the system.

Full Conference Call Transcript

John Allen: Thank you, operator, and welcome to the Rambus First Quarter 2026 Results Conference Call. I am John Allen, Interim Chief Financial Officer at Rambus. And on the call with me today is Luc Seraphin, our CEO. The press release for the results that we will be discussing today has been filed with the SEC on Form 8-K. We are webcasting this call along with the slides that we will reference during portions of today's call. A replay of this call can be accessed on our website beginning today at 5:00 p.m. Pacific Time.

Our discussion today will contain forward-looking statements, including our expectations regarding projected financial results, financial prospects, market growth, demand for our solutions, other market factors, including reflections of the geopolitical and macroeconomic environment and the effects of ASC 606 and reported revenue, among other items. These statements are subject to risks and uncertainties that may be discussed during this call and are more fully described in the documents we file with the SEC, including our 8-Ks, 10-Qs and 10-Ks. These forward-looking statements may differ materially from our actual results, and we are under no obligation to update these statements.

In an effort to provide greater clarity in the financials, we are using both GAAP and non-GAAP financial presentations in both our press release and on this call. A reconciliation of these non-GAAP financials to the most directly comparable GAAP measures has been included in our press release, in our slide presentation and on our website at rambus.com on the Investor Relations page under Financial Releases. In addition, we will continue to provide operational metrics such as licensing billings to give our investors better insight into our operational performance. The order of our call today will be as follows. Luc will start with an overview of the business.

I will discuss our financial results, and then we will end with Q&A. I will now turn the call over to Luc to provide an overview of the quarter. Luc?

Luc Seraphin: Good afternoon, everyone, and thank you for joining us. We opened 2026 with a strong first quarter, meeting our financial targets and broadening our portfolio to address the accelerating demands of AI. The quarter reflects solid momentum as we execute against our road map to support long-term profitable growth for the company. This is an exciting time for Rambus, and we are well positioned to capitalize on the market trends in the data center and AI. For decades, we have developed foundational technologies and solutions across a wide range of memory and interconnects. That heritage positions us well as systems become more diverse, memory dependent and performance-driven.

To give more context, there are several market and technology trends playing out across the data center and AI that continue to work in our favor. As AI adoption accelerates and inference use cases expand, workloads are becoming more persistent and context-rich and performance is increasingly defined by how efficiently data can be stored, accessed, moved and secured. To support these workloads, AI infrastructure is becoming more complex and heterogeneous, combining a mix of traditional and AI server platforms to support orchestration, data management and real-time execution at scale.

At the same time, the expansion of inference and particularly agentic AI with continuous reasoning and multistep workflows is driving more always-on activity and placing even greater demands on memory capacity, bandwidth, latency and power efficiency. Together, these trends are driving new memory and connectivity architectures to support purpose-built solutions across a wider range of use cases and form factors. This increases our opportunities for richer content and broader adoption of our industry-leading IP, reinforcing our position for sustainable long-term growth. Now let me turn to our quarterly results, starting with our chip business. Our performance reflects strong execution and ongoing leadership in our core DDR5 RCD chips.

We delivered product revenue of $88 million in Q1, in line with our guidance and up 15% year-over-year. Looking ahead, we expect to deliver double-digit product revenue growth in the second quarter. We continue to see increasing customer adoption of new products and remain well positioned to support the ramp of next-generation platforms as they enter the market. We continue to execute on our strategy of delivering comprehensive industry-leading chip solutions to address growing customer and market requirements. As I mentioned in my opening remarks, we recently expanded our product portfolio with the introduction of our chipset for JEDEC-standard LPDDR5X SOCAMM2 modules, building on the same signal and power integrity expertise we have applied across multiple generations of DDR.

This chipset is the first offering in our road map of LPDDR-based server module solutions and includes new voltage regulators as well as the SPD Hub to support reliable, power-efficient server class operation. As part of that road map, we are actively working with industry partners on the definition and development of LPDDR6-based SOCAMM2 solutions, which would offer a natural upgrade path for future generation AI platforms. As AI server architectures diversify to address varying performance, power efficiency and form factor requirements, some platforms are now leveraging LPDDR-based memory. While LP memory offers attractive power characteristics, it was originally designed for mobile environments with very short signal paths and tight power margins, making reliable deployment in server systems inherently challenging.

The SOCAMM2 addresses these limitations through a compact CPU proximate module architecture with optimized signal routing and localized power management to enable LPDDR modules to operate in server environments. The Rambus SOCAMM2 chipset enables power-efficient, reliable operation of up to 9.6 gigabit per second in a compact module form factor. As LP-based server modules scale to higher speeds and bandwidth in future generations, they will require increasingly sophisticated interface power and control functionality. This progression is similar to what we have seen in DDR-based server modules and reinforces our opportunity to extend our road map of high-value chip content across memory types in the future.

As I mentioned previously, the ongoing expansion of AI is driving demand for a broader range of memory types and form factor. To meet these needs, we continue to build on our leadership solutions in DDR5, including chipsets for RDIMM and MRDIMM and selectively expand our road map of novel solutions as they begin to play a complementary role in heterogeneous systems. With active engagements across customers and ecosystem partners, we are helping shape next-generation server modules, reinforcing the opportunity for richer chip content and sustained growth. Turning now to silicon IP. We saw strong customer traction in the first quarter with continued design wins at Tier 1 companies and growing engagement across our portfolio.

We remain focused on delivering industry-leading premium IP that enables differentiated solutions for AI in the data center, including accelerators and networking chips across a wide range of architectures. There's increasing momentum for custom silicon in AI, especially among hyperscalers as they tailor hardware to their own software stacks and deployment needs, optimizing for performance, power efficiency and total cost at scale. This is driving an accelerating pace of design and expanding demand for value-added IP to support memory bandwidth, advanced connectivity and security. During the quarter, we saw growing traction for our value-added PCIe retimer and switch IP to support increasingly complex AI systems across scale-up and scale-out environments.

We also expanded our memory IP portfolio with the introduction of the industry's fastest HBM4E controller, setting a new benchmark for AI accelerator memory throughput. In addition, we launched a new network security engine designed for Ultra Ethernet to protect distributed AI clusters. All of these IP offerings are in great demand and further strengthen our position as a critical enabler of next-generation compute and connectivity solutions for AI infrastructure. In summary, we executed well in the first quarter. We delivered solid results and expanded our offerings for both chips and IP to extend our leadership in our core markets. As we look ahead, Rambus is well positioned to capitalize on the megatrends in data center and AI.

Our sustained technology leadership, disciplined execution and increasing traction across our portfolio of leadership products will continue to fuel our results. With that, we expect strong growth in 2026, and I'm confident in our long-term trajectory. As always, I want to thank our customers, partners and employees for their continued trust and support. Now I'll turn the call over to John to walk through the financials. John?

John Allen: Thank you, Luc. I'd like to begin with a summary of our financial results for the first quarter on Slide 3. We delivered first quarter revenue and earnings in line with our guidance with solid contributions from each of our diversified businesses. We also continued our strong track record of cash generation. This performance reflects the continued strength in our business model. Our strong balance sheet and disciplined capital allocation enable us to invest in growth initiatives while returning value to shareholders. Let me now provide you a summary of our non-GAAP income statement on Slide 5. Revenue for the first quarter was $180.2 million, which was in line with our expectations.

Royalty revenue was $69.6 million, while licensing billings were $70.8 million. The difference between licensing billings and royalty revenue mainly relates to timing as we do not always recognize revenue in the same quarter as we bill our customers. Product revenue was $88 million, representing 15% year-over-year growth, driven by continued strength in DDR5 products and ramping new project contributions. Contract and other revenue was $22.6 million, consisting predominantly of silicon IP. As a reminder, only a portion of our silicon IP revenue is reflected in contract and other revenue and the remaining portion is reported in royalty revenue as well as in licensing billings. Total operating costs, including cost of goods sold for the quarter were $104.6 million.

Operating expenses of $69.9 million were up sequentially due to seasonal payroll-related taxes in connection with equity vesting. Interest and other income for the quarter was $6.9 million. Using an assumed flat tax rate of 16% for non-GAAP pretax income, non-GAAP net income for the quarter was $69.3 million. Now let me turn to the balance sheet details on Slide 6. We ended the quarter with cash, cash equivalents and marketable securities totaling $786 million, up $24 million from Q4 2025 with strong operating cash of $83 million, partially offset by $38 million in taxes paid on equity vesting and $17 million in capital expenditures.
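As a sanity check, the reported non-GAAP net income can be reproduced from the other figures in this section. The arithmetic below is our own reconstruction from the quoted numbers, not a reconciliation from the company:

```python
# Back-of-envelope reconstruction of Q1 non-GAAP net income from the
# figures given in the prepared remarks. All figures in $ millions.

revenue = 180.2          # total Q1 revenue
operating_costs = 104.6  # total operating costs, including cost of goods sold
interest_income = 6.9    # interest and other income
tax_rate = 0.16          # assumed flat non-GAAP tax rate per the remarks

pretax_income = revenue - operating_costs + interest_income
net_income = pretax_income * (1 - tax_rate)

print(f"Pre-tax income: ${pretax_income:.1f}M")  # $82.5M
print(f"Net income:     ${net_income:.1f}M")     # $69.3M, matching the reported figure
```

Revenue less total operating costs plus interest income gives $82.5 million of pre-tax income, which at the 16% flat rate yields the reported $69.3 million.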

We increased our inventory balance by $14 million during the quarter and expect to continue building inventory strategically in the second quarter. Our strong balance sheet gives us the flexibility to increase inventory to support our product revenue growth and manage through potential supply chain constraints. First quarter depreciation expense was $8.5 million. Free cash flow in the quarter was $66.3 million. Let me now review our non-GAAP outlook for the second quarter on Slide 7. As a reminder, the forward-looking guidance reflects our best estimates at this time, and our actual results could differ materially from what I'm about to review.

In addition to the non-GAAP financial outlook under ASC 606, we also provide information on licensing billings, which is an operational metric that reflects amounts invoiced to our licensing customers during the period adjusted for certain differences. We expect revenue in the second quarter to be between $192 million and $198 million. We expect product revenue to be between $95 million and $101 million, a sequential increase of 11% at the midpoint of guidance. We expect royalty revenue to be between $72 million and $78 million and licensing billings between $76 million and $82 million. We expect Q2 non-GAAP total operating costs, which includes cost of sales, to be between $110 million and $114 million.

We expect Q2 capital expenditures to be approximately $14 million. Non-GAAP operating results for the second quarter are expected to be between a profit of $78 million and $88 million. For non-GAAP interest and other income and expense, we expect $7 million of interest income. We expect non-GAAP tax expenses to be between $13.6 million and $15.2 million in Q2. We expect Q2 share count to be 110 million diluted shares outstanding. Overall, we anticipate Q2 non-GAAP earnings per share to range between $0.65 and $0.73. Let me finish with a summary on Slide 8. In closing, we delivered solid results in line with our objectives, driving ongoing profitability and cash generation.
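The guided EPS range follows from the other Q2 guidance inputs in this paragraph. The sketch below is our own derivation from the quoted figures, assuming the flat 16% non-GAAP tax rate applies to operating profit plus interest income:

```python
# Deriving the guided Q2 non-GAAP EPS range from the other guidance inputs.
# All figures in $ millions except EPS; share count in millions of shares.

op_low, op_high = 78.0, 88.0  # guided non-GAAP operating profit range
interest = 7.0                # guided interest income
tax_rate = 0.16               # assumed flat non-GAAP tax rate
shares = 110.0                # guided diluted share count

def eps(op_profit):
    pretax = op_profit + interest
    return pretax * (1 - tax_rate) / shares

print(f"EPS range: ${eps(op_low):.2f} to ${eps(op_high):.2f}")
# EPS range: $0.65 to $0.73
```

The same pretax figures ($85 million to $95 million) at 16% also reproduce the guided tax expense of $13.6 million to $15.2 million, so the guidance components are internally consistent.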

Our diversified portfolio remains a core strength with each of the businesses contributing meaningfully to our performance. Our patent licensing business continues to deliver consistent, predictable performance, supported by the long-term agreements we have in place. Our silicon IP business is well positioned, driven by critical interconnect and security technologies, addressing the accelerating demand for AI solutions. Our product business grew 15% year-over-year and is poised for sequential growth in the second quarter. We remain focused on delivering long-term shareholder value with year-over-year revenue growth in 2026. Before I open the call up to Q&A, I would like to thank our employees for their continued teamwork and execution.

With that, I'll turn the call back to our operator to begin Q&A. Can we have our first question?

Operator: [Operator Instructions] Your first question comes from the line of Kevin Garrigan with Jefferies.

Kevin Garrigan: Can you just help us think about your product revenue into the June quarter? So last quarter, you discussed the low double-digit revenue impact from the onetime OSAT issue. And I think we may have been expecting a larger sequential increase for June just kind of given the strong -- how strong demand has been. So can you just walk us through the drivers for the June quarter product revenue and why the recovery might be a little bit more measured?

Luc Seraphin: Thank you, Kevin. Yes, sure. So the first thing I would say is that the issue that we have talked about in the prior call is behind us. Everything has been resolved. And it's a question now for us to restabilize the supply chain, which we are doing, and we see a normalization of that supply chain. So it is behind us. And the revenue for Q2 is guided at 11% over Q1. So that's the right trajectory. And we continue to expect to grow sequentially after that in an environment where our footprint continues to be very strong. I mentioned in the earlier call that it was older generation of DDR5.

The market is transitioning from Gen 2 to Gen 3, which is a good catalyst for us. So I would say we met or we guide to double-digit in the second quarter. We met what we said we would meet on the operational strain in Q1 and we will continue to grow sequentially quarters after that. We don't see any issue with the demand, and we don't see any more issues with the quality issue that we had in Q1. So we feel quite confident for the rest of the year as the market moves from Gen 2 to Gen 3.

Kevin Garrigan: Okay. Great. And then just as a follow-up on your LPDDR5 SOCAMM2 server module chipset. When would you expect to start seeing revenue from this chipset? And what kind of milestones should we watch to gauge traction?

Luc Seraphin: I would see this as having a very good strategic impact at this point in time. The financial impact in the short run this year is going to be very minimal just because the volumes are very small for this type of solutions. As a reminder, it only addresses a very small portion of the AI workloads. So volumes are small. The content is small as well. But it's strategically -- so I wouldn't put it in the model for 2026, but it's strategically very, very important because there is a trend to look at LPDDR in the server environment in the long run.

LPDDR still has issues to address the server requirements, but it also has traction and it has benefits. So we see this as a stepping stone for us. It builds on the fact that over the last few years, we have developed our product line as chipsets. So we have the whole chipset for the SOCAMM2. We have our own teams for power management development, and these are the 2 new chips that we are proposing for this solution. So we see this as a stepping stone. It allows us to engage with other AI players in the industry. And we are working on next generation as well.

But I don't think that the financial impact is going to be significant this year, just given the volumes.

Operator: Your next question comes from the line of Tristan Gerra with Baird.

Tristan Gerra: A quarter ago, you highlighted shortages and sounded a little bit maybe not cautious, but muted on the growth opportunity and you provided a fairly muted data center unit forecast. How are shortages for component potentially impacting your revenue this year? What are you seeing that's different now than a quarter ago? And given the outlook for DRAM to remain very tight next year, how should we look at your product revenue growth and specifically your RCD growth with excluding the new product layers that will be adding on to that from a year-over-year growth standpoint. So in other words, would you expect the same type of growth next year, year-over-year versus this year?

And I understand you're not guiding for next year, but just wanted to get a bit more color on what you see on the market that potentially could put constraint on your growth. And clearly, that's an issue for a lot of other companies as well.

Luc Seraphin: Yes. Thank you, Tristan. First of all, let me say a few words about the demand. We do see demand continue to grow for standard servers, which is good for us with agentic AI in particular. We expect the server market to grow faster this year than last year. We model it at low double-digit growth because despite the excitement around AI, there's also a large portion of the server market that is not AI related. But we do see demand growing on the server side, which is really a good catalyst for us. But as we said last quarter, we're watching the situation with supply, especially on the back end. Certainly, since last quarter, the situation has not improved.

We're working with our suppliers, but the lead times are long, and there is tension on the back end. So we take this into account when we forecast our business. This is one factor. The other factor that affects or that comes into play when we forecast is the timing of launch of new platforms in the market. As you know, it's been the case in the past for us, the launch of our new products depends on the launch of new platforms in the market, and that's a dependency that we have. So we don't see the situation as materially different than what we saw in Q1. But from a supply standpoint, things have not improved.

And we expect the supply situation to be tight going into 2027 as well when we talk to industry players.

Tristan Gerra: Okay. That's useful. And then as my follow-up question, any additional color on the MRDIMM opportunity? I know you've talked in the past about some very initial shipments late this year, specifically with inferencing. Any additional color as to where it could be in terms of revenue in '27? I think you've talked in the past about your expectation that you probably fully realized the $600 million TAM for MRDIMM by '28. So what should we be looking at for next year kind of in between? And what's really driving that? What's going to be driving the demand? Is it going to be mostly inferencing?

And any additional color you may have beyond what you've said in the past on customer interest for this technology and where it's going to ramp?

Luc Seraphin: Thank you, Tristan. First, we continue to make progress in the launch of these products and the interaction with our customers on this MRDIMM. We're excited by the opportunity for the reasons we've always talked about, larger capacity, larger bandwidth in the same ecosystem. So the adoption is easier. The main, I would say, factor affecting the ramp of our MRDIMM is going to be the timing of the launch of the platforms from Intel and AMD in particular, where they do have this capability attached in the next-generation platform. So we continue to see the ramp starting in 2027 in earnest. And a SAM at this point in time, which we still value at about $600 million.

As I keep saying, the SAM, once the products are in the market and we get feedback and the market gives us feedback, we're going to have a much better view of that SAM. But at this point in time, this is the right number to keep in mind.

Operator: Your next question comes from the line of Aaron Rakers with Wells Fargo.

Aaron Rakers: I guess kind of just building off that last question first. When you kind of think about the $600 million incremental opportunity around MRDIMM, I can appreciate that there's a lot of unknown variables at this point. But I'm just curious, as you rolled up that expectation, what assumption are you making in terms of attach rate on AMD Venice and Diamond Rapids at this point? And how might that evolve? I mean I would assume that you're being rather conservative on that attach rate at this point. And then also on that, how do you see CXL starting to play out?

Luc Seraphin: At this point in time, we model a low attach rate. As I said, until my experience is until the product is in the market, it's hard to make those models more significant. There are a lot of variables coming into play. As we just said, the most important one is the timing of rollout of these platforms in the market. There's also the whole situation with DRAM pricing and the prices of modules and how our customers' customers are going to make the decisions between the combination of modules they want to have in the current memory cycle environment. So we model, I would say, a conservative percentage for MRDIMM at this point in time.

But ramp will start when the platforms ramp in the market, and that's when we're going to have a better view.

Aaron Rakers: And any thoughts on CXL?

Luc Seraphin: Sorry, I missed the second part of your question. Sorry, Aaron. CXL, we do have very good traction on our IP business. We are not planning to launch a semiconductor product at this point in time. We do have this on our shelves, if you wish, as we designed one a couple of years ago. But we do see the -- with agentic AI, we do see demand for standard DIMMs and MRDIMMs as being the main benefactors of that. And that's where we will continue to focus our attention.

Aaron Rakers: Yes. And then one final quick one. When we -- when you guys talk about the opportunity to grow sequentially in the product revenue into the back half of the calendar year, I'm curious if you were asked about seasonality in the second half versus first half, if there's anything that changes your views maybe relative to the last couple of years. And I think you've seen some decent growth second half versus first half.

Luc Seraphin: Yes. Thanks, Aaron. That's a good observation. We actually do see the second half shaping up slightly differently than the first half, better growth in the second half. A lot of times, it had to do with the launch of new platforms that typically hit the market if they are on time in the second half of the year, and that's where you have more products there. But even if you look at the first half of this year at the midpoint of our guidance for Q2, and you look at the first half of last year, we're still growing close to 18%.

So the first half, despite our issue in Q1 is still much higher than the first half of last year. And we believe the second half is going to show growth. We do see some seasonality. And typically, our second half is stronger than our first half.

Operator: Your next question comes from the line of Gary Mobley with Loop Capital.

Gary Mobley: If I take the sum of your license billing in your contract and other revenue in the first half of this year for the results in the guide and compare that to the same period last year, it looks like you're generating some abnormally strong growth. Is that due to any sort of variance in the patent licensing? Or should I take this to mean that your silicon IP business might actually be running north of $150 million annually right now?

Luc Seraphin: So -- thanks, Gary. We can see some quarter-to-quarter variations in these 2 categories just for the nature of the business. I would say that underlying this, we see very good traction on our silicon IP business. Actually, AI has an impact on our silicon IP business, which is also very positive as people who develop custom solutions for AI are looking for new interfaces and new security solutions like the ones I mentioned in the prepared remarks. So we do have very good traction on the silicon IP business, and we continue to expect this business to grow 10% to 15% a year based on that.

Our other business, our patent licensing business, it can also be changing from quarter-to-quarter. We do renew agreements on a regular basis. And sometimes these agreements are structured in different ways depending on the customers and what they want to do. So we have some strong quarters, some quarters that are not too good. But on average, this business continues to be stable at $200 million, $210 million. So I would say I would not pay too much attention on the quarterly split on these revenues, but the fundamentals are really, really good.

What I would add is that our patent licensing business, our silicon IP business, and our product business all benefit from what's happening in the memory subsystem area. They all benefit from AI and from the move from AI training to AI inference, and that gives strength to our results. And when we have a challenge like we had last quarter on the product line, we have these two other product lines that allow us to meet our numbers.

Gary Mobley: Okay. As my follow-up, I wanted to ask about CPU roles in AI-optimized servers. There's been a lot more noise recently indicating a higher ratio of CPUs to GPUs in AI-optimized servers, driven by agentic workloads, and you sort of hinted at that. To put this into a question: if we move to a point in time where we might see a 1:1 CPU-to-GPU ratio, does this alter your view on the growth rate of the SAM for your product revenue, or on its size?

Luc Seraphin: We are excited about where the market is evolving with agentic AI and inference. If you look at the types of software and hardware architectures that inference requires, you clearly see that the ratio between CPUs and GPUs is changing, and changing in favor of CPUs. Overall, that's a very good thing for us; it comes from the nature of inference and agentic AI. Is it going to be one-to-one? That's very difficult to say at this point in time. Everyone is now trying to optimize the memory subsystems.

Everyone is trying to use HBM where it's really good, LPDDR where it's really good, and DDR and MRDIMMs where they're really good. And I would say that DDR and MRDIMMs will continue to be the workhorse of these AI inference solutions. But the fact that all of these memories, HBM, DDR, LPDDR, are starting to coexist is really good. They each address a different part of the AI workload, and this plays to our strengths because this is what we've been doing forever at Rambus. The move to AI inference and agentic AI will change the ratio in favor of CPUs, and that's good for us.

Operator: Your next question comes from the line of Sebastien Naji with William Blair.

Sebastien Cyrus Naji: For my first question, I wanted to ask about the new SOCAMM products that you announced last week. Could you comment on what Rambus' dollar content looks like for each SOCAMM module, across the different voltage regulators and the SPD Hub? Any unit economics you can give us?

Luc Seraphin: Given the current competitive environment, I'd stay away from giving pricing on these things. But from Rambus' standpoint, the content on a SOCAMM is three voltage regulators and an SPD Hub, so the content is minimal. This is what I was saying earlier on one of the questions. I do believe this is strategically important for us because, in the long run, LPDDR may play a larger role in the data center, especially in next-generation LPDDR solutions. But from a content standpoint it stays minimal, and the volume stays minimal. I would leave it there.

Sebastien Cyrus Naji: Okay. That's fair. And turning back to RDIMMs: could we get an update on the progress you're seeing with companion chips? How much revenue came from those companion chips in Q1? And, relatedly, how important is it for your customers to have all of these DIMM components bundled together from one provider, versus having to put them together from different providers?

Luc Seraphin: Yes. John, go ahead.

John Allen: Sure. The newer products, Sebastien, contributed a low-double-digit percentage of our total product revenue during the first quarter. We would expect roughly the same in the second quarter as we see some growth in the overall revenue contribution from that part of our business.

Luc Seraphin: Yes. What I would add is that this is steady growth quarter over quarter. You saw this in 2025: every quarter, we had a slightly higher percentage. We continue to do that, and we expect to keep doing so through the second half of the year. We expect to exit the year with perhaps a mid-double-digit percentage of product revenue coming from our new chips. Now, to your other question: it is becoming more and more important for customers to have the whole chipset from one supplier, especially as the performance requirements increase.

The reason has to do with interoperability: making sure that all of these chips on a module work well together at very high speed, in a very harsh environment, is becoming more and more difficult to achieve. That's why our customers ask us for the whole solution and to help them through these generational changes.

Operator: Your next question comes from the line of Kevin Cassidy with Rosenblatt Securities.

Kevin Cassidy: During the quarter, as you were building inventory, were there any orders that you had to leave on the table because you didn't have the inventory, or maybe some upside surprise?

Luc Seraphin: No, we've not been in that situation. But there are a few market dynamics that we have to anticipate. One is, as I said earlier, we do see supply tightening, especially on the back end, so we want to make sure that if that situation continues, we have enough supply for our customers. The second thing is that there are fast transitions between generations. You remember, we were talking about Generation 1 moving to Generation 2; we indicated on the last call that Generation 3 is ramping very, very fast.

So we want to make sure that on these new generations of products, we also have enough inventory, because the ramps on the customer side can be quite steep, and we just don't want to miss them.

Kevin Cassidy: Okay, I understand. And even as you're using your balance sheet to build more inventory: when Intel reported, they said they were even able to ship some previously written-down inventory. It seems the demand for CPUs, and also DRAM, is so strong that maybe older generations will get a bit of a revival. Is anything like that possible? Or does it sound like everything is shifting to Gen 3 very quickly?

Luc Seraphin: From a demand standpoint, the bulk of the demand for DDR products is certainly shifting to Gen 3. But what you're describing, using inventory of older products to serve demand, is something we continuously do and evaluate. That's part of our inventory management processes.

Operator: Your next question comes from the line of Mehdi Hosseini with SIG.

Bastien Faucon-Morin: This is Bastien filling in for Mehdi. My first question is on the LPDDR SOCAMM2 chipset. Would you mind clarifying the content of the chipset? It seems the solution consists of one SPD Hub and three voltage regulators. Do you expect to add any PMIC content there? And what does the pricing of the SPD Hub and voltage regulators look like relative to the DDR5 DIMM chipset? And I have a follow-up.

Luc Seraphin: Sure. Yes, on the SOCAMM solution, we have one SPD Hub and two types of voltage regulators, three voltage regulators in total: one 12-amp regulator and two 3-amp regulators. So that's the content, and as I said, the content is minimal. You asked about a PMIC; there is no power management IC per se. That function is performed by the voltage regulators in this generation of product. And that's why we say it's very strategic for us.

The way we look at this is that when LPDDR6 is available, LP memory will offer even more speed and even more power capability, and it will possibly require more complex chips for power management, and we will work on those. One can also imagine that as the market evolves, it will probably need the equivalent of RCDs in the long run. This is exactly in our strategy, and that's why I'm talking about a stepping stone. We want to make sure that we are early in these new technologies. They do not cannibalize the old technologies; they are complementary to them.

And in the long run, they have the potential to grow quite nicely, and they build on strengths we have in signal integrity and power integrity. In the short run, for SOCAMM2 and LPDDR5X, as I said, the volumes and the dollar content are going to be very low, but this is a very interesting and strategic stepping stone for us in that area.
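To summarize the per-module content described above in one place, here is a hypothetical sketch. The part list (one SPD Hub, one 12-amp and two 3-amp voltage regulators) is paraphrased from the call; the part names are illustrative, and pricing is omitted because management declined to disclose it.

```python
# Hypothetical per-module view of the Rambus SOCAMM2 chipset content as
# described on the call: one SPD Hub plus three voltage regulators
# (one 12 A part and two 3 A parts). Pricing intentionally omitted.
from collections import Counter

socamm2_parts = [
    "SPD Hub",
    "voltage regulator (12 A)",
    "voltage regulator (3 A)",
    "voltage regulator (3 A)",
]

counts = Counter(socamm2_parts)
num_vrs = sum(
    n for part, n in counts.items() if part.startswith("voltage regulator")
)

print(dict(counts))                                  # per-module part counts
print("Total voltage regulators per module:", num_vrs)
```

Compared with a DDR5 RDIMM chipset (RCD, SPD Hub, PMIC, companion chips), this is a noticeably smaller bill of materials, which is consistent with management calling the content "minimal."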

Bastien Faucon-Morin: That's really helpful. And my second question is on DDR5: how should we think about the timing of the ramps of Gen 4 and Gen 5 as they go to higher-volume manufacturing?

Luc Seraphin: Gen 4 is going to start to ramp this year, but Gen 4 is a niche generation, if you wish. It doesn't have the same traction as Gen 1, Gen 2, Gen 3, or Gen 5. I think everyone is now waiting for Gen 5. We're going to start shipping Gen 5 products toward the end of the year. But just as with MRDIMM, Gen 5 is completely dependent on the timing of the ramps of the next-generation Intel and AMD platforms; this is where it will be adopted.

That's why we see initial volumes this year, but the bulk of the volume, just as with MRDIMM, is going to start in 2027.

Operator: Your next question comes from the line of Mark Lipacis with Evercore ISI.

Mark Lipacis: A question on the DIMM attach rate. Is it different for CPUs used to perform orchestration in agentic AI, versus CPUs used in standard servers, versus CPUs that might be put next to the GPUs, XPUs, and custom ASICs? Should we think about the attach rates differently for these...

Luc Seraphin: It's a very good question, and a very difficult one, Mark. The way we look at it is, for inference and agentic AI, the functions performed by these CPUs are closer to standard CPU workloads. I think the highest attach rate you would find is really close to the GPU-plus-HBM platforms; that's where you have the heaviest load, if you wish, for these CPUs. That's how I would compare it at this point in time. If you take a DGX box with GPUs and HBM, the CPUs there are the ones that use the most memory in terms of capacity and bandwidth.

When you go to inference, it's probably a little less, but it's difficult for us to model that at this point in time.

Mark Lipacis: Sorry, I guess my phone dropped; I don't know if my question came through. Luc, I was wondering, should we think about the DIMM attach rate differently for CPUs used in orchestration for agentic AI, versus CPUs used in standard servers, versus CPUs used for inferencing that get put next to the GPUs, ASICs, and XPUs? Is there a different DIMM density there?

Luc Seraphin: It's a very good question, Mark, but a very difficult one to answer. The way we look at it at this point in time is that the highest use of memory capacity and bandwidth really resides close to the GPUs, the GPU-HBM clusters, if you wish. That's where you have the most need for very high capacity and very high bandwidth, which, on average, could be higher than what we find in inference and other solutions. But we have not modeled that at this point; it's hard to model.

But in aggregate, we see the fact that inference is being added to training as very good traction for the use of standard DIMMs and MRDIMMs in general. The attach rate is difficult to model at this point in time.

Mark Lipacis: Got you. Okay, that's fair enough. And then the tightness in the back end that you're noticing: do you know, or can you explain, what the cause of that is? Is it because a lot of the back end happens in Southeast Asia and they procure a lot of energy from the Middle East? Or is it capacity? Is it more that the whole industry is in a great recovery period and capacity utilization rates are really ticking up? Do you have a sense of the cause of the tightness in the back end?

Luc Seraphin: There are a couple of reasons. One is that demand, especially in the data center, has become very high recently, so there's increased demand there. The second reason is that a lot of semiconductor suppliers have moved their back-end supply chains away from China to other countries in Asia, and that has put a strain on the total capacity of these back-end suppliers. So it's a combination of the two. We've not seen an effect of the war yet. There are discussions about some basic inputs, like gas, that are going to be affected, but we don't see this yet.

The main reason at this point in time is increased demand, especially in the data center, combined with semiconductor companies moving their supply chains out of China.

Mark Lipacis: Okay, that's really helpful. And the last question, if I may: as you think about your market share this year, are you of the view that you are a share gainer, or that you keep share flattish, or down? What is your view on your ability to gain share?

Luc Seraphin: Yes. We continued to gain share from '24 to '25; we exited '25 at a mid-40% share. There's no indication that we won't continue on that trajectory. This year, the market is really at a high level, transitioning from Gen 2 to Gen 3, and our footprint in Gen 3 is really, really good as well. So there's no sign of any erosion of our share. If we add the other components, we'll grow faster than the market, because we're also adding content to what we ship. So again, we're very pleased with where we were in 2025.

As you know, Mark, we tend to talk about share on a yearly basis. It can fluctuate from quarter to quarter, but we don't see any sign of erosion of our share going into 2026.

Operator: At this time, there are no further questions. This concludes the question-and-answer session. I would now like to turn the conference back over to the company.

Luc Seraphin: Thank you to everyone who has joined us today, for your continued interest and time. We look forward to speaking with you again soon. Have a good day.

Operator: Thank you. This now concludes today's conference.


This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
