Tuesday, April 7, 2026 at 5 p.m. ET
Aehr Test Systems (NASDAQ:AEHR) reported a sharp acceleration in bookings to $37.2 million, driving a record effective backlog of $50.9 million, while quarterly revenue fell 44%, principally due to shipment delays in FOX systems and WaferPaks for wafer-level burn-in. A $14 million AI customer order, a major new silicon photonics win, and a key package-level burn-in production award from its lead hyperscale customer demonstrate broad adoption momentum across strategic end-markets. The company fully utilized $40 million in at-the-market equity financing this fiscal year to strengthen its cash position, supporting ongoing manufacturing capacity expansion and R&D initiatives. Management expects to return to non-GAAP profitability in the upcoming quarter and forecasts fiscal 2026 revenue and bookings near the high end of prior guidance, underpinned by a strong pipeline of AI, photonics, power semiconductor, and memory engagements.
Gayn Erickson: Thanks, Jim. Good afternoon, everyone, and welcome to our third quarter fiscal '26 earnings conference call. I'll start with an update on the key markets driving our business and strong demand we're seeing, particularly from AI and data center infrastructure. Chris will then review our financial results, and we'll open up the call for questions. We're very pleased with the strong momentum in our business across multiple market segments highlighted by more than $37 million in quarterly bookings and a book-to-bill ratio exceeding 3.5x.
Our effective backlog, which includes the backlog of $38.7 million at the end of the fiscal third quarter plus additional bookings received since the end of the quarter, is now over $50 million, a new company record. After generating approximately $20 million in bookings in our fiscal first half, we're already at 2.5x that in second half bookings and now expect to come in on the high side of the $60 million to $80 million second half bookings range I mentioned last quarter. Demand continues to accelerate across both package-level and wafer-level burn-in, driven by increasing semiconductor complexity, power requirements and deployment in mission-critical AI, networking, automotive and industrial applications.
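For readers following the arithmetic, the headline metrics above can be reproduced from figures cited elsewhere on the call (Q3 revenue and the post-quarter bookings figure come from the CFO's remarks). A quick sketch:

```python
# Reproducing the quarter's headline metrics from figures cited on the call.
q3_bookings = 37.2            # $M, fiscal Q3 bookings
q3_revenue = 10.3             # $M, fiscal Q3 revenue
quarter_end_backlog = 38.7    # $M, backlog at end of fiscal Q3
post_quarter_bookings = 12.2  # $M, bookings in first 5 weeks of Q4

book_to_bill = q3_bookings / q3_revenue                       # ~3.61x, "exceeding 3.5x"
effective_backlog = quarter_end_backlog + post_quarter_bookings  # $50.9M record

print(f"Book-to-bill: {book_to_bill:.2f}x")
print(f"Effective backlog: ${effective_backlog:.1f}M")
```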
As devices become more advanced, the need for comprehensive test and burn-in is becoming essential to ensure reliability and performance. This is driving growing adoption of our solutions across multiple markets. So let me start with wafer-level burn-in. During the quarter, we continued to make progress in growing our installed base and expanding to new customers with our wafer-level burn-in solutions. AI wafer-level burn-in is really hot right now, I guess, pun intended. We received a $14 million follow-on production order from our lead wafer-level AI accelerator processor customer for multiple new fully automated FOX-XP wafer-level burn-in systems to be used in data center training and inference applications.
The order included multiple additional FOX-XP wafer-level test and burn-in systems, each configured to test nine 300-millimeter wafers in parallel along with a set of Aehr's proprietary FOX WaferPak full wafer contactors and a fully integrated FOX WaferPak auto-aligner with each system to enable hands-free operation in high-volume production. In addition, the order included multiple additional FOX WaferPak auto-aligners to upgrade the customer's existing installed base of FOX-XP systems to full automation. Aehr is the first company to successfully demonstrate and ship a wafer-level burn-in solution for AI processors.
Our FOX-XP systems configured for very high power, high current AI processors began shipping last year and provide the highest power-per-wafer capability available in the market, delivering up to thousands of amperes of current per wafer. This order further expands our installed base of FOX-XP systems and adds full automation across the production lines, highlighting the growing importance of wafer-level burn-in to ensure the long-term reliability of today's very high power, high current AI processors. We're also actively engaged with multiple additional AI processor companies on benchmark evaluations and expect to make meaningful progress with those opportunities.
Our benchmark evaluation program with a top-tier AI processor supplier continues to make good progress, but it's taking longer than we originally expected. This was due to a technical misunderstanding on the clock configurations, which created some challenges with the initial WaferPak designs. While we wish we had been able to catch this earlier, we're taking device data now on their wafers with the current WaferPak design and redesigning the WaferPaks to meet the new requirements. We expect to continue to provide them with additional data on this WaferPak design as well as the improved one over the next several months.
We have several other companies, ranging from suppliers of data center-focused AI accelerator processors to edge AI processors and CPUs, that are providing us with information on their devices and road maps and asking about our wafer-level burn-in capabilities and recommendations for burn-in of their next-generation devices. There is significant interest in doing wafer-level burn-in for devices that are expected to be put into advanced packages, such as TSMC's CoWoS-based packages, that include other dies such as HBM DRAM stacks, other compute AI processors and photonic or electrical-based transceiver chipsets.
Weeding out bad devices before they're packaged together with these other devices is significantly cheaper than the yield loss if they are burned in at package level and the entire multichip package is thrown away. For burn-in of silicon photonics devices, we recently announced a major new customer win: a new silicon photonics customer with an initial order for multiple high-power FOX-XP wafer-level burn-in systems for devices aimed at the hyperscale data center optical interconnect market. This customer is developing advanced silicon photonics-based transceivers for data center networking and optical I/O applications to address the rapidly accelerating demand for high-speed fiber optic communication links in hyperscale AI and cloud data centers.
These multiple systems are for both engineering qualification and high-volume production and include a FOX-XP wafer-level burn-in system configured to test nine wafers in parallel, a fully integrated WaferPak auto-aligner, multiple FOX-NP wafer-level burn-in systems and multiple full sets of FOX WaferPak full wafer contactors for production, engineering and new product introduction. These systems are all scheduled to ship in this fiscal fourth quarter ending May 29, '26. They've also provided a forecast for multiple additional XP production systems over the next year as they ramp capacity to support next-generation hyperscale data center deployments.
We believe this win positions Aehr to participate in what could be a significant multiyear expansion of silicon photonics production driven by the growth of fiber optic interconnects and hyperscale AI data centers. Additionally, we received a follow-on order from our lead silicon photonics customer for both the new high-power FOX-XP wafer-level system and an upgrade of an existing system to our latest high-power fully automated configuration. We now have fully integrated our systems and aligners with their autonomous-guided robots that carry around the 300-millimeter FOUP so the customer can operate in a fully lights-out hands-free operation. They, too, have given us a forecast for additional production systems as they ramp into next calendar year.
As data center architectures scale to support AI, cloud computing and high-performance networking, fiber optic interconnects offer significant advantages over copper wiring, including higher data rates, lower power consumption, longer reach, improved thermal performance and reduced electromagnetic interference. These advantages are driving rapid adoption of silicon photonics transceivers across hyperscale and enterprise data centers worldwide and increasing demand for cost-effective, production-proven burn-in solutions that can ensure device quality and long-term reliability at volume. Aehr is the market leader in wafer-level burn-in for silicon photonics transceivers with a large installed base at leading global semiconductor and photonics companies.
Our FOX-XP platform enables high-parallelism, high-temperature and high-power wafer-level burn-in, allowing customers to stabilize their devices, a critical manufacturing process step for the laser diode emitters in these devices, as well as to identify early life failures before packaging to significantly reduce the cost of test. In gallium nitride and silicon carbide power semiconductors, we've been working with our lead GaN production customer on a significant number of new devices aimed at multiple markets that include automotive, intermediate bus conversion, data center and electrical infrastructure. This continues to be a great partnership, and we continue to work on and believe we have solved the key challenges with full wafer burn-in of GaN-on-silicon devices.
Wafer-level burn-in of their GaN devices for both qualification and production burn-in is an extremely valuable capability that is critical to their road map and plan, and we're both very excited to see them meet their growth projections. We continue to see GaN and silicon carbide power semiconductors as critical to the electrification of the world's infrastructure, in addition to key market opportunities such as data center power delivery, electric vehicles and charging infrastructure. We won a new customer in silicon carbide this quarter with a company in Taiwan focused on the Asian, and particularly the greater China, EV market. They placed an order for a small configured FOX-XP system for qualification and production.
Key elements of their decision included our ability to demonstrate all the capabilities they needed with our systems in Fremont, California, as well as the feedback they received from customers who have data and confidence in Aehr's wafer-level burn-in systems used for testing and burn-in of silicon carbide wafers across a large number of silicon carbide suppliers. We see an uptick in activity and forecasts from the silicon carbide players. This makes sense as we see major OEM EV suppliers in Japan and Germany roll out a number of new EVs later this year.
These EV suppliers understand the value and need for wafer-level burn-in of these silicon carbide devices before they're put into modules containing many devices in parallel for the EV engine drive inverters. This is well understood in the industry, and Aehr is seen by a significant number of EV suppliers as the market leader and proven solution for wafer-level burn-in of silicon carbide devices used in EV inverters. We're still conservative about forecasts from customers. And while we have plenty of capacity and believe we have the world's most cost-effective and highest-performance wafer-level burn-in solution on the market, we're not counting on significant revenue from this segment returning yet.
However, it could still be a very good performing segment for us next year. We'll see. Now let me talk about wafer-level burn-in for memory. Our engagement with a key memory supplier continues to progress with additional wafer testing just this last week. We've been able to achieve the correlation they're asking for and are now in discussions about test system specifications needed for their next-generation flash memories and in particular, their high bandwidth flash devices.
We hope to close on this in the next few months, which would lead to a development agreement to supply systems and WaferPaks to them after a 12- to 18-month development of our new memory-optimized blades for our FOX-XP and NP multi-wafer test and burn-in platform. But we're also now in discussions with other key memory suppliers that also produce high bandwidth memory, the new DRAM standard being used in AI GPUs, in addition to standard DRAM and flash memories. The HBM memories, as I referred to, are embedded into multichip packages with advanced substrates such as the CoWoS packaging from TSMC.
NVIDIA's road map is aggressively pushing toward higher-capacity, faster HBM standards to address the memory wall in AI training and inference. The upcoming road map transitions from HBM3E to HBM4E in 2026 and then from HBM4E to HBM5 in the following years, with capacity per GPU expected to increase from 80 gigabytes in the A100 class to over a terabyte in the Rubin Ultra by 2027, per SemiAnalysis. We are seeing the added potential for HBM insertions with our FOX multi-wafer test and burn-in system road map that extends to flash, high-bandwidth flash, DRAM and HBM memories.
This is a key focus for Aehr this year to drive to an agreement to work with these customers in the development of the enhancements needed to extend our FOX systems to these markets. This is a market that we believe could drive orders in fiscal '27 with ramps in fiscal '28. Now turning to package-level burn-in. Let me start by highlighting that we're trying to change our own vocabulary from packaged part burn-in to package level burn-in. This may seem subtle, but to give a little background, traditionally, there was one semiconductor integrated circuit per single package.
The package was used to protect the die from elements and wire out to a standard pattern of pins or pads that allowed easy handling and assembly onto a printed circuit board. This pattern or pitch between pins is much, much larger than the pitch on the individual die. So contacting the devices is very different for us between our package-level and wafer-level solutions. Historically, about 20 years ago, there was a package concept called multichip packages where multiple individual die were wire bonded into a single package. This was driven at the time for size and performance. Typically, this was much more expensive and generally, this faded out in time to other smaller package sizes.
Recently, in the last handful of years, there have been 3 major drivers of the need for new multichip packages, but this has been called advanced packaging or modules rather than MCPs. One driver, which is the biggest one, is that the multi-decade trend we refer to as Moore's Law has come to an end. This law held that the number of transistors doubled every 1.5 to 2 years while die size stayed the same, so costs stayed flat or decreased. This allowed higher and higher performance, smaller die, and therefore lower-cost die to be made via process improvements or die shrinks.
This drove the industry for 40 years or so until around 2010, plus or minus, when shrinks started to slow materially. Then, as several applications such as AI processors, extremely high-density memory such as flash and DRAM, and power semiconductors were being driven by massive markets such as data center, AI and electric vehicles, the extremely high value of and need for multiple devices in the same package came to fruition. This time, it was functionality and feasibility that drove it.
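For context on how powerful the compounding described above was, a quick sketch of the doubling cadence over the roughly 40-year era referenced:

```python
# Moore's Law compounding: doubling transistor count every 2 years at
# constant die size compounds to roughly a million-fold increase over
# 40 years. (Illustrative math only; the 2-year period is the upper
# bound of the "1.5 to 2 years" cadence cited on the call.)
years = 40
doubling_period = 2  # years per doubling (assumed)
growth = 2 ** (years / doubling_period)
print(f"Transistor count multiplier over {years} years: ~{growth:,.0f}x")  # ~1,048,576x
```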
We now refer to these devices in 2 camps, really 3 camps: wafer level, die level and package level, where package level includes both single-die packages and also multichip modules or advanced multi-die packages, such as those found in AI GPUs with HBM DRAM stacks, multi-stack flash SSDs and also multi-die silicon carbide modules for EV inverters and charging infrastructure. I hope this helps as we talk through this and makes the difference between wafer level and package level more clear. You may catch me still saying packaged part at times, as old habits are hard to break, but we'll try to refer to these as package level from now on. Okay.
During the quarter, we announced a key production win with our lead package-level hyperscale customer. This customer is a premier large-scale data center provider and selected Aehr for production burn-in of their next-generation significantly-higher-power AI processor with an initial production order of our high-power Sonoma systems. This next-generation AI ASIC is expected to move to production later this year and is believed to be even higher volumes than the first device that this customer is ramping our Sonoma systems on right now. We also expect a significant near-term follow-on order from this customer for package-level burn-in systems to support their high-volume manufacturing of their custom AI processors today, the current one used in data center training and inference.
They are forecasting a substantial expansion of Sonoma systems purchases beginning in the second half of calendar 2026 and continuing into '27. We believe it's likely that there are overlapping ramps between the current and next-generation devices, which should significantly expand both our installed base and long-term consumable opportunity with this customer. We're also engaged with multiple potential customers for package-level qualification test of AI accelerators, ASICs, network processors and edge AI processors for automotive and robotics. These engagements also represent opportunities to move to production burn-in over time. And interestingly, about half of these have also expressed interest in wafer-level burn-in in addition to our package-level burn-in solutions.
Yesterday afternoon, in fact, we received an order from a brand-new customer for Sonoma to be used for reliability qualification of their new AI processor, but they may also do production burn-in with this device, which they can do with the exact same platform using Sonoma. This momentum reinforces our leadership in high-power burn-in for AI processors. The broader demand environment remains very strong. Industry forecasts indicate that hyperscale data center capacity is expected to nearly triple by 2030, driven by both new builds and upgrades to existing infrastructure. This is driving substantial growth in high-performance semiconductors and in turn, demand for advanced burn-in solutions.
As we've noted before, as our installed base of systems continues to grow, our consumables, which includes our WaferPak full-wafer contactors for wafer-level and our burn-in board and modules for package-level burn-in, can continue to grow beyond our systems. While this year has been lighter in terms of consumable sales, particularly WaferPaks, we believe it's an outlier. Some customers had bought systems ahead of the need and have grown into capacity, and this seems to be running its course. We believe, over time, our consumables business will consistently be at 30% or more of our total revenue, and our margins will increase as sales of these value-add consumables grow. To support growing demand, we're continuing to scale manufacturing capacity.
In addition to our Fremont expansion, this quarter, we'll begin shipping Sonoma systems from one of our current contract manufacturers, adding capacity of more than 20 additional Sonoma systems per month. This meaningfully increases our ability to support future growth. With expanding AI infrastructure deployments and our recent manufacturing capacity enhancement, we believe we're well positioned to support significant growth both in our wafer-level and package-level burn-in systems as customers ramp production. With strong second half bookings so far and a strong funnel of additional orders expected this quarter, we believe we're well positioned to exit the fiscal year ending May 29 with a strong backlog and deliver significant revenue growth in fiscal '27.
We currently expect full year fiscal '26 revenue to be on the high side of the $45 million to $50 million range provided last quarter. We also expect our bookings for the second half of the fiscal year to be on the high side of the $60 million to $80 million range provided last quarter. More broadly, we believe we have a clear path to sustain long-term growth as our installed base expands across AI, silicon photonics, power semiconductors, memory and other high-performance applications. As semiconductor performance and reliability requirements continue to rise, burn-in is becoming increasingly critical across a growing set of applications.
We believe Aehr is uniquely positioned as the only provider offering both wafer-level and package-level burn-in solutions at scale. With that, I'll turn it over to Chris.
Chris Siu: Thank you, Gayn, and good afternoon, everyone. I'll begin with bookings and backlog and walk through our third quarter financial performance, cash position, outlook and investor activity. The company recognized bookings of $37.2 million in the third quarter of fiscal 2026, significantly higher than the $6.2 million in the second quarter, as we received multiple purchase orders for FOX systems, WaferPaks and several auto-aligners from different customers for AI, silicon photonics and silicon carbide applications. At the end of the quarter, our backlog was $38.7 million. During the first 5 weeks of the fourth quarter, we received an additional $12.2 million in bookings.
This increase was driven primarily by a major new silicon photonics customer for wafer-level burn-in with an initial order for multiple FOX systems for both engineering qualification and high-volume production, which we recently announced. With these recent bookings, our effective backlog, which includes our quarter-end backlog plus additional bookings received since the end of the third quarter, has now grown to a record $50.9 million, providing strong visibility for the remainder of fiscal 2026 and positioning us for significant growth in fiscal 2027. Our strong bookings include increased demand for both wafer-level and package-level burn-in solutions.
We believe this reflects the proven value of these differentiated solutions, which are increasingly integral to the production and reliability strategies of our customers in the AI, data center and other key markets we serve. Turning to our Q3 performance. While we did not provide quarterly guidance, our third quarter revenue of $10.3 million was in line with internal expectations, which had anticipated the delayed orders. Q3 revenue was slightly below consensus and down 44% from $18.3 million in the prior year period. The decline was primarily driven by lower shipments of FOX systems and WaferPaks for our wafer-level burn-in business, partially offset by stronger demand for our Sonoma systems and BIMs from our hyperscale customer.
Contactor revenues, which include WaferPaks for our wafer-level burn-in business and BIMs and BIPs for our package-level burn-in business, totaled $3 million, representing 29% of total revenue in the third quarter. This compares to $5.9 million, or 32% of revenue, in Q3 last year. Non-GAAP gross margin for the third quarter was 36.5% compared to 42.7% a year ago. The year-over-year decline reflects lower overall sales volume and a less favorable product mix, as last year's quarter included a higher proportion of high-margin WaferPak revenue. Non-GAAP operating expenses in the third quarter were $6.3 million, flat with Q3 last year. We continue to invest significant resources in our AI benchmark and memory projects.
During the quarter, we recorded an income tax benefit of $0.8 million, resulting in an effective tax rate of 19.9%. Non-GAAP net loss for the third quarter, which excludes the impact of stock-based compensation and acquisition-related adjustments, was $1.5 million or a loss of $0.05 per diluted share compared to net income of $2 million or $0.07 per diluted share in the third quarter of fiscal 2025. Non-GAAP net loss for the third quarter exceeded consensus by $0.02. Turning to cash flow. We used $3.7 million in operating cash during the third quarter. We ended the quarter with $37.1 million in cash, cash equivalents and restricted cash, up from $31 million at the end of Q2.
The increase was primarily due to proceeds from our at-the-market, or ATM, equity program. During the third quarter of fiscal 2026, we raised $10.5 million in gross proceeds through the sale of about 269,000 shares. Since the end of Q3, we raised another $19.5 million gross proceeds through the sale of about 477,000 shares. And with the $9.9 million we raised in Q2, we have now fully utilized $40 million available under the ATM and have sold over 1.13 million shares at an average price of $35.38.
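For those tracking the ATM program, the figures above can be cross-checked; note the share counts as given are rounded, so the implied average price lands within a few cents of the $35.38 stated:

```python
# Cross-checking the ATM program figures cited on the call.
q2_proceeds = 9.9        # $M raised in Q2
q3_proceeds = 10.5       # $M raised in Q3 (about 269,000 shares)
post_q3_proceeds = 19.5  # $M raised since the end of Q3 (about 477,000 shares)

total_proceeds = q2_proceeds + q3_proceeds + post_q3_proceeds  # ~$39.9M of the $40M program
total_shares_m = 1.13    # million shares sold in total, per the call (rounded)
avg_price = total_proceeds / total_shares_m

print(f"Total gross proceeds: ${total_proceeds:.1f}M")
print(f"Implied average price: ${avg_price:.2f}")  # ~$35.31 vs. the $35.38 stated, a rounding gap
```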
We also announced this afternoon that we'll be changing our fiscal year from the last Friday of May to the last Friday of June effective after our fiscal year ends on May 29, 2026. Our new fiscal year 2027 will begin on June 27, 2026, and end on June 25, 2027, continuing with the 4-4-5 calendar. As a result, we will have 1 month of financial results from May 30, 2026, to June 26, 2026, which will be reported as a transition period when we file our quarterly Form 10-Q in the first quarter ending September 25, 2026.
We believe our new fiscal year will align more closely with the reporting periods of our customers and our peers in the semiconductor test equipment industry. Moving to our outlook. For the full year fiscal 2026 ending on May 29, 2026, we currently expect total revenue to be on the high side of the $45 million to $50 million range provided last quarter and non-GAAP net loss per diluted share to be between negative $0.13 and negative $0.09 for the full fiscal year. We expect our gross margin to improve as our manufacturing activity increases to support higher sales volume and better absorb our fixed costs.
We also expect to return to profitability on a non-GAAP basis in the fourth quarter of fiscal 2026. Lastly, looking at the Investor Relations calendar. Aehr Test will be participating in 2 investor conferences over the next couple of months. We'll be meeting with investors at the Craig Hallum Institutional Investor Conference taking place in Minneapolis on May 28, and we'll be presenting and meeting with investors on June 2 at the William Blair 46th Annual Growth Conference taking place in Chicago. We hope to see some of you at these conferences. That concludes our prepared remarks. We're now happy to take your questions. Operator, please go ahead.
Operator: [Operator Instructions] Our first question comes from Mark Shooter with William Blair.
Mark Shooter: You have Mark Shooter on here for Jed Dorsheimer. Congrats on all the progress, especially with the hyperscaler. I'm curious how you guys are looking at this internally? And what percentage of GPUs or ASICs or XPUs do you think are burnt in today? And how do you guys size the vector space?
Gayn Erickson: That's really a good question, and I think we're still getting our arms around it a little bit here. I would say that we've been a little bit surprised at how many devices are not yet doing production burn-in. One of the things that we mentioned strategically when we purchased Incal, what, 18 months ago or so: Incal does a type of burn-in they were known for called qualification reliability burn-in, which all processors go through, in fact, all semiconductors. It's what determines their lifetime reliability specs and that they will last long enough, et cetera. So it's sort of a onetime deal you do with a large number of devices to do the statistics on it.
Then certain devices go through a screening in production to weed out infant mortalities because the failure rate is higher than the market will bear, okay? So Incal was doing this with a large number of AI customers. But prior to the acquisition, Incal wasn't doing any production burn-in. When we acquired them, because of the capacity we have in terms of people and infrastructure, we've been able to capture this large hyperscaler and are engaged with multiple others. But one of the things that I've been surprised at is how many of the, I guess, particularly, the ASIC suppliers don't do production burn-in yet or are just talking about doing it.
And that goes for a lot of different devices that are out there, from edge, robotics, ASICs and network processors, and even -- I always want to be careful with GPU because everybody associates GPU only with NVIDIA. But still today, not all devices are burnt in. And so there are certain ones that are and certain ones that aren't. And even within a company, some of their products are burnt in and others aren't. However, the common theme is they're all moving to burn-in. The data is out now that there are solutions like Sonoma or the wafer-level burn-in of our FOX system that can cost-effectively do it.
And so now there's a very viable alternative to doing it at the system level or the rack level. We've said in the past that many of these guys would actually build it all the way to the rack, and then at the system integrator, they would burn it in for a week or 2 and weed out the infant mortality before shipping it, or in some cases, with the ASIC suppliers, they just shipped it into their data centers and dealt with the fallout. So it's growing.
I'm trying to think if I can put a percentage on it. On ASICs, it might be, by unit or by SKU, I don't know if it's 20%, maybe it's 5% -- so most ASICs are not burnt in. I would say on the AI accelerators that are out there, across the wide variety, maybe half. But then what's happening is the processors are getting higher power from generation to generation and breaking all the tools that are out there.
So even with the tools that were out there, and I'm not giving any inside information whatsoever, just what's classically understood: NVIDIA's processors of a couple generations ago compared to their current ones draw substantially less power, so the current ones would require new tools. And the ones that they and others are working on a year or more out -- again, just what's publicly available -- break the current tools. And so there's a continuous road map. And so even within our Sonoma platform, we're continuing to add capabilities. One of the key features we have is the ability to adapt it and add higher and higher current and power as you go forward.
How many times do you hear a CEO say we're in the early innings? But this really is still at the beginning phases. And over time, people will be buying a lot more burn-in systems, both to cover a larger percentage of total devices and simply to handle the sheer quantity.
Mark Shooter: I appreciate all the color, Gayn. That's very helpful. To zero in a bit on your hyperscaler customer, can you bring us a little into the room a bit here and what was the decision process to go with package-level, right, not packaged part anymore, it's package-level versus wafer-level? And do you see a transition potentially with this customer to move to wafer level? And if you get a new customer, is there -- do you think that they'll make the same decision? Or is there a track towards wafer level? Like try to help us out with that.
Gayn Erickson: Okay. So to be fair, 2, 3 years ago, if you would have asked me, we said -- I've said this before, can you do wafer-level burn-in of AI processors, I think we would have said absolutely not. We didn't have the power and the system, and the belief was that there weren't the test modes that we now understand there are to be able to do it. And now as we've gone from customer to customer across a wide variety, there's commonalities about it that allow us to be able to confidently tell them we can do wafer-level burn-in.
So prior to that, it was whether you did package-level burn-in or not, or did it at, say, the rack level, okay? So people's first step is, do I do burn-in; then they're going to default to thinking, I'm going to do it at the package level. But then what we're seeing, and I mentioned this before, we have customers -- I don't want to get too carried away here, but take the last 2 customers that were in, in the last 2 weeks. Alberto is our package-level burn-in VP, and Vernon really runs kind of the wafer-level side of things.
The customer will come in and say, I want to talk about package level, and about halfway through the tour, they're like, what is that? We talk about wafer level. They're like, whoa, whoa, whoa, how do I do that? And so we kind of joke about it around here. But the reality is, we don't care which side you go to. We have both. Specifically, on the hyperscaler -- and I've said this out loud before -- the first device they ran with us, it's not their first device, but the first one they went to production on, is on Sonoma.
Their second device, they just awarded us with production for that one and are planning the ramp of that with us right now. They are already on the road map talking about the third device, and they've asked us about the DFT to specifically put into the third device because they would like to consider that for wafer level on our FOX systems. So I think that's sort of a progression that we will see. And I would actually imagine large customers that have multiple different product lines, some they would do wafer level on and some they might do package level on.
It becomes particularly valuable when you have a package that has multiple processors in it and all the HBM memory, right? In those particular ones, the CoWoS substrate is more expensive than the silicon itself -- the processor -- which sounds crazy. So they would be very interested in doing wafer-level burn-in to screen out the die before they have to throw away everything else. So I think there's a progression over time where people will move towards wafer level on the things they can, and default to package level where they can't.
Operator: Next question comes from Christian Schwab with Craig-Hallum.
Christian Schwab: Thanks for a tremendous amount of detail regarding the different target markets and your success in each one of them. The most common question I receive is, is there a way to gauge the opportunity over a multiyear time frame? Obviously, you gave guidance for this year in support of substantial growth the following year, with bookings in hand and others to come. But if you've had enough time to give it some thought, what is the range of potential outcomes over a multiyear time frame that you guys could do across your target markets and a potential entry into the memory market down the road?
Gayn Erickson: So the short answer is we have. The long answer is we're just really cautious about trying to get too carried away with our projections. But the numbers are very significant, particularly now that there's a memory kind of angle on this thing, too.
If you look at the dollar spend that people are going to do on, whether you call it compute or AI -- or if you look at the compute capability going into training and inference in data centers, inference at the edge, automotive, robotics, the number of different applications and the ways people are using and deploying it -- the amount of silicon wafers is staggering, and it's why people talk about these enormous dollars. Those devices -- a processor has always been burnt in. It feels like I'm contradicting what I said earlier.
It's widely known that Intel and AMD, the primary processor suppliers of the world, burnt in every one of their processors, and always have, right? When the first GPUs were coming out, those were used for graphics; they were not burnt in. And the initial players in AI are all at foundries, and they're out looking for burn-in capability. There were no burn-in systems in the foundry and OSAT models, and so people weren't spending on it. They spent enormous amounts of money on test, and it's growing. And they're going to be spending a significant portion of their test budget on burn-in going forward. I hear it constantly from the customers rotating through.
So the TAMs are multi-hundreds of millions of dollars for package-level burn-in. For wafer-level burn-in, if you say it displaces package level, it's even higher. The average price per unit time of wafer level is actually more expensive than package level, but the yield pays for all of it. And so it's cheaper for the customer to spend more money, and so the TAMs are larger there. If you look at the memory side of things -- the memory spend of the number of fabs that are coming out in the next 5 years, and what percentage of budget is their test budget -- these are big numbers.
And so the total spend on burn-in is probably measured in multiple billions of dollars per year within the next couple of years. And the question is, well, then wait a minute, how come you guys aren't $500 million? And the answer is we think we have a very good opportunity to significantly grow our package-level and wafer-level business across the biggest segments that are driving burn-in, and that's one of the reasons we're leading with putting infrastructure and capacity in place -- to be able to have the conversations we're having with these customers that are throwing out some really big numbers.
And somebody warned me, you're getting carried away here, but it's an awesome place to be. And it's not only Silicon Carbide for EVs, where lots of people are wondering whether EVs are ever going to make it. As you guys know the history, people got ahead of themselves, and I was even saying it: come on, you guys, we're not all going to be driving EVs. But the TAMs in these segments are significantly larger than anything we ever talked about on the power semiconductor side.
Operator: The next question comes from Max Michaelis with Lake Street Capital Markets.
Maxwell Michaelis: First, I want to start out here. When you look at the demand environment for package-level and wafer-level, the demand seems strong on both sides of the business. But to me, it looks like wafer-level is outpacing on the demand side and maybe the order side. Let me know if I'm wrong there, but is there anything else you can add as well?
Gayn Erickson: The challenge with our business, and for all of our shareholders, is we know how to be lumpy. And by having more markets and more customers, it can become less lumpy. But the ASP of a production order set in wafer-level burn-in can be $10 million to $20 million in an order, let's say, okay? Package-level can be that big or bigger, too, okay? So when they come in, right now we see significant demand on both. Now, the engagement and the work to get to a wafer-level burn-in win is definitely harder than package level. And the obvious reason is, in many cases at package level, we're already testing the part for quality on our tool.
So now they just have to say, oh, I need to buy a whole bunch of them, add automation and go to production. Does that make sense? On wafer level, what we found is that there's a bit of a learning process on both sides to understand how they can use our tool to test their part. And in some cases, they're like, okay, I know if I just did this, it would make it a lot easier -- but it's too late, I already taped out this part. That would be an example from the benchmark I'm in right now. They're having to use a little fancier WaferPak to do it.
And if they just did some specific DFT, they could use a very simple WaferPak -- the same WaferPak we're using for, like, silicon photonics or Silicon Carbide and some of these others. Their vocabulary with us is, oh, I'll be able to do that for the next gen, but can you just work around it with the current one? Well, it's kind of harder. The other one -- as I mentioned, I don't want to get too carried away; I get pretty techy on these things -- but we had a miscommunication on the clock, which is something really simple, candidly. But if you do it wrong, it doesn't work.
And so we've had to jury-rig some stuff to actually get it to work, and we're going to spin it to make it work. Nobody is freaking out about it, because this isn't rocket science, but it's something we would never mess up again with that customer, because we now both have the same vocabulary. The second one is always easier. And so there's a little bit more of a startup thing with wafer-level burn-in. But if you're technically astute and engaged and you look at it, you're not going, oh, this isn't going to work. You just go, okay, gosh, that's too bad. Okay, now let's keep going. And so there's a learning process.
We're getting faster at it. And I think, over time, wafer-level burn-in -- like with the Silicon Carbide or the silicon photonics customer we won this last quarter, it was just yes. I mean, there was no need for a benchmark. It went from can you do it to how fast can you deliver, okay? I think that is a natural progression. You'll see it in our package level, and you'll see it in our wafer level over time, where customers will engage, they'll know we can do it, and they just say, let's go.
Operator: [Operator Instructions] The next question comes from Larry Chlebina with Chlebina Capital.
Larry Chlebina: Gayn, your contract manufacturer that you're starting up, when does that start? And when will it be fully capable of doing your 20 Sonomas a month?
Gayn Erickson: They've already -- they're in the process of building the first batch, I would say, is the best way of looking at it. It's a little more complicated than the way I've described it: there are actually 2 contract manufacturers working together, and one feeds into the other. The one that feeds into the other did their prototypes and sent them to us, and we went through a kind of acceptance process to validate them and work out any kinks; then those go to the other contract manufacturer for final system integration and shipping.
The other one -- we visited them last September, I think -- we did kind of an audit of facility power infrastructure and cleanliness, and they did a remodel similar to ours, if people have seen it: all white, fancy clean floors, more clean room space so that we can actually build these things in a clean room area. They had facilities -- they were doing some stuff for solar, as it turns out -- and so we were able to leverage that. And that is in place now. And we think our first products would be ready to ship to customers this quarter, through May.
And what we want to make sure is they're ready to go by late summer when we see the Sonoma ramp hitting.
Larry Chlebina: That was really my question. Are you keeping any capacity? Or are you planning on producing those systems in Fremont as well, or is...
Gayn Erickson: Yes, for sure. But this is in addition to that. We've talked about about a 20-system-per-month capacity here from an infrastructure and footprint perspective; we would actually still need to hire some more people, maybe take on a shift. But we'll use that facility for, like, large-volume orders of the same SKU, to make it simple. And then we'll continue to make Sonoma systems here, and all of the XPs will be built out of here -- all of the FOX products.
Larry Chlebina: Great. And then did I hear you say that your first expected XP sales to an HBM customer will be this calendar year, or next fiscal year '27?
Gayn Erickson: Yes, I didn't quite say anything -- I was a little more elusive than that on purpose. What I will tell you is that we have identified some interesting opportunities with HBM, probably the new HBM4E, which has some interesting challenges that people would really like to do this wafer-level burn-in on. And between our FOX system as it stands and the road map that we've been working on with a team of people here -- as people know -- for a memory extension to the FOX system, to add what we would call channel modules into the FOX that make it memory-focused, we think that there's some real overlap there.
Just as you know, Larry -- you follow this a lot -- that's an uptick, okay? I had thought HBM was behind flash, and it is in parallel with flash now.
Larry Chlebina: I would say that would be an uptick. Yes, I agree -- a little bit of an uptick.
Gayn Erickson: An order would be a good uptick. I'll agree with you. But right now, I'm excited about the discussions.
Larry Chlebina: Yes. So the flash engagement -- do you think that will bear fruit on the enterprise side here shortly, before the HBF effort that you're going to have gets underway?
Gayn Erickson: That's a good question. I think the timing really is up to the customer; what we would build would be a superset that could do both. So yes, if HBF were delayed a little bit, maybe we would intercept their standard products. They've asked us to build it, and the definition discussion has been to do both. In some ways, HBF is easier, okay? Because if you start saying it's all flash, a lot of times what happens is people say, well, I want to be able to test everything I've ever had before. And then, as the interfaces evolve, they tend to converge in voltages and speed or whatever.
And if you say, well, I want legacy, it's like, well, okay, I've got to support some old voltage or something on a device you don't make anymore. So part of the challenge for us is to try and converge on what you really need going forward -- where are you going to spend the money? They would probably never buy a system from us for legacy products, in general. So I think that's one of the challenges we get to work through.
Larry Chlebina: That's all I had. Boy, you got a lot of irons in the fire.
Gayn Erickson: It is so much fun, you guys, I'm telling you. Vernon and Alberto and the R&D teams -- and poor Nick; our WaferPak team is very busy right now, and we're doing some things to offload that by adding additional resources. We're hiring: anybody looking for a great job with a company that's growing, let us know. We've got a lot of reqs out there, and we're looking for great people. So it's...
Larry Chlebina: It sounds like it is a lot of fun and congratulations. I know you've been working at it for a good while to get to this point.
Operator: [Operator Instructions]
Gayn Erickson: All right, operator, if there's no other questions, we'll end on a really happy note. As always, if you guys have any questions, please feel free to reach out to us. If you happen to be in the Bay Area and want to stop by, we're always happy to give a short tour to key investors and things like that. And we look forward to a great quarter and talking to you next quarter. I guess with our new fiscal year, our quarterly earnings will be at the same time next time, and then there'll be, I guess, a 1-month push or something like that, but it should work out.
This will be a good thing for our customers, which, honestly, is the key to all of this. All right. Thank you very much, folks. Bye-bye.
Operator: Thank you. This concludes today's conference, and you may disconnect your lines at this time. Thank you for your participation.
This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy.