Tuesday, May 20, 2025 at 8 a.m. ET
Chief Executive Officer — Arkady Volozh
Chief Product Officer — Andrey Korolenko
Chief Business Development Officer — Daniel Bounds
Chief Operations Officer — Roman Chernin
Chief Financial Officer — Tom Blackwell
Revenue Growth: Revenue increased nearly 400% year-over-year in Q1 2025. Annualized run rate revenue increased nearly 700% year-over-year.
Cash Balance: Ended Q1 2025 with $1.44 billion in cash.
Capacity Expansion: Added three new locations; Israel was highlighted as a key new market, with Iceland and Finland also coming online or expanding.
Product Launches: Released approximately 50 new products in the AI cloud and AI Studio platforms, including the Slurm-based cluster upgrade, enhanced object storage, and general availability of MLflow and JupyterLab notebook services.
Operational Efficiency: Achieved a 5% improvement in available compute nodes for commercial use through platform reliability upgrades in Q1 2025.
Partnerships: Announced new and deepened partnerships with NVIDIA, including launch partner status for NVIDIA Blackwell Ultra AI Factory, becoming one of five NVIDIA Cloud Partners, and supporting the DGX Cloud Lepton marketplace at launch.
Customer Base: Grew to hundreds of managed and self-service customers across sectors such as technology, media, entertainment, and life sciences.
Quarterly CapEx: Spent $544 million in Q1 2025 toward an updated full-year CapEx plan of approximately $2 billion. This compares to the prior $1.5 billion CapEx guidance for 2025. The increase in CapEx for 2025 is attributed to shifted spending and opportunistic expansion, including the Israel data center.
Revenue Guidance: Reiterated full-year group revenue guidance of $500 million to $700 million for 2025. Annualized run rate revenue (“ARR”) guidance is $750 million to $1 billion by year-end 2025.
Profitability Outlook: Expects negative adjusted EBITDA for the full year but plans to achieve positive adjusted EBITDA during the second half of 2025. The core infrastructure business could potentially reach this target in Q3 2025.
Medium-Term Outlook: Targets mid-single-digit billions of dollars in revenue and EBIT margins of 20%-30% in the midterm (within a few years), with possible upside if enterprise adoption accelerates.
Depreciation Policy: Applies a four-year depreciation schedule, more conservative than the 5- or 6-year schedules typical in the industry.
Non-Core Asset Monetization: Holds a 28% minority stake in ClickHouse (reportedly valued at ~$6 billion), a significant majority economic interest in Toloka, and a stake in Avride, all identified as future capital sources for core AI infrastructure investment.
Toloka Deconsolidation: Will deconsolidate Toloka as voting control fell below 50%, with updated financials and guidance to be provided in the Q2 report.
Market Segments: Core customer base remains AI-native startups; enterprise and frontier AI lab markets are targeted for growth, with ambitions to enter global and national AI project sectors.
Data Center Roadmap: Deploying over 100 megawatts of capacity in 2025, with a pipeline that could potentially exceed 1 gigawatt in the midterm (defined as a few years).
GPU Rollout: Executing deployments of NVIDIA H200, Blackwell, and Grace Blackwell (GB200) GPU generations, with Blackwell Ultra slated for Q3.
Contract Structure: Contract durations range from several months to over a year, with the introduction of Blackwell and newer GPU generations facilitating longer-term commitments, as discussed on the Q1 2025 call.
Tariff Risk: Management stated the current tariff environment is not expected to have significant impact on costs or expansion plans but acknowledged ongoing monitoring due to potential volatility.
Nebius Group N.V. (NBIS) grew revenue nearly 400% year-over-year in Q1 2025 and expanded its AI infrastructure capacity with multiple new site launches and an aggressive product rollout in the first quarter. The company reinforced its financial position and operational flexibility through increased CapEx investment, with approximately $2 billion in CapEx planned for 2025, a sizable cash balance, and strategic monetization plans for non-core assets. Competitive positioning was strengthened through landmark partnerships with NVIDIA and differentiated software offerings now integral to its full-stack AI cloud solution.
Chief Financial Officer Tom Blackwell emphasized, "While we expect adjusted EBITDA to be negative for the full year, we plan to turn positive at some point in the second half of 2025."
Recent product advances included integration with external AI platforms such as Metaflow, dstack, and SkyPilot, enabling customers to transition workloads with minimal friction.
Roman Chernin stated, "April's annualized run rate revenue was $310 million, and we are continuing to experience strong demand into May."
Nebius, one of five NVIDIA reference platform Cloud Partners, also received industry validation through inclusion in the SemiAnalysis GPU Cloud ClusterMAX rating system.
The deconsolidation of Toloka, triggered by the loss of majority voting control, means Nebius financials will exclude Toloka's direct contribution after May; updated financials and guidance excluding Toloka will be provided in the Q2 2025 report.
Strategies for future capital deployment rely on retaining “significant majority economic interest” in non-core holdings following equity investment events, as specified during the call.
Slurm-based cluster: A type of resource management and scheduling system for high-performance computing workloads, typically used for managing large GPU clusters for AI training.
GPU Cloud ClusterMAX: An industry rating system evaluating the performance, scale, and availability of GPU-enabled cloud clusters for AI workloads.
AI Factory: Refers to large-scale, purpose-built data center infrastructure optimized for training and serving AI models, often using the latest GPU technology.
Blackwell Ultra: NVIDIA's next generation of high-performance GPUs, intended for large-scale AI model training and inference in cloud environments.
DGX Cloud Lepton: An NVIDIA AI cloud infrastructure offering that enables rapid deployment of high-end GPU clusters for demanding AI applications.
ARR (Annualized Run Rate Revenue): The revenue run rate calculated by annualizing the most recent month's revenue, used as a forward-looking indicator for subscription-based or usage-driven businesses (see the worked example after these definitions).
Neocloud: Industry term denoting new-generation, AI-centric cloud service providers with infrastructure, software, and services optimized specifically for large-scale AI operations.
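To illustrate the ARR definition above (the monthly figure here is an inference for illustration, not a number given on the call): annualizing means multiplying the latest month's revenue by 12, so the $310 million April ARR cited by management implies roughly $25.8 million of April revenue, since $25.8 million × 12 ≈ $310 million.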
Arkady Volozh: Yes. Thanks, Neil. Thank you, everyone, for joining our Q1 2025 results call. I will start by saying that demand for AI compute was very strong in the first quarter, and our results show it. Our revenue grew nearly 400% year-over-year, and our annualized run rate revenue grew nearly 700%. We saw great momentum in our core infrastructure business. We ended the quarter with a solid cash balance of $1.4 billion, and we continue to invest in our infrastructure. To that point, we are rapidly building out our capacity to serve customers around the world.
This is a global race, as you understand, and we are well placed with our footprint in the U.S., Europe and now the Middle East. As you can see here on the slide, we added three new locations recently, and there is more to come. We're exploring new locations for capacity build-out, and we hope to share more news with you very soon. We also announced some new partnerships this quarter, building on our strong relationships with NVIDIA as well as Meta and Llama. And finally, we had a very productive quarter with respect to building out our technology stack, and we are getting industry recognition for our AI cloud offering.
As an example, SemiAnalysis ranked us in its GPU Cloud ClusterMAX rating system. And now I'll hand over to Andrey Korolenko to discuss some of the key products we launched in Q1.
Andrey Korolenko: Thank you, Arkady, and hello, everyone. We believe the depth and the quality of our offering significantly differentiate us from the other Neoclouds. We made great progress in Q1 in further developing our AI cloud offering and had a number of notable product launches. First of all, we launched Slurm-based cluster upgrades such as automatic recovery for failed nodes and proactive system health checks that detect issues before jobs actually fail. These changes reduce downtime for customers and improve capacity availability on our infrastructure, which led to around a 5% improvement in available nodes for commercial use, which is quite significant.
Several platform services were released and moved from beta to general availability, MLflow and JupyterLab notebooks being examples, but there was much more. We also invested a lot of time and effort in the reliability and performance of the platform. Notably, we launched enhanced object storage, which ensures that large data sets can be accessed and saved quickly during model training runs, reducing clients' time to results. Building on that foundation of our homegrown storage capabilities, we have also partnered with three leading storage providers: DDN, VAST and WEKA. That enables us to deliver the best possible experience for all customer scenarios going forward with the Blackwell generation clusters.
Last but not least, we expanded integrations with external AI platforms such as Metaflow, dstack and SkyPilot, which allows customers to bring their existing workloads into our ecosystem with minimal friction. As for partners, Daniel will talk about our partners.
Daniel Bounds: Thanks, Andrey. In addition to strengthening our product in Q1, we also made significant progress toward expanding our partner ecosystem. Beyond further building out our data storage solution portfolio with industry leaders, as Andrey mentioned, we extended our core AI cloud capabilities to the ISV landscape with tight technical integration, and we made announcements enabling customers to consume Nebius infrastructure across a wide segment of the industry. Equally important are the relationships we have with the full range of AI marketplaces and established channel partners that help us meet customer demand for our AI infrastructure across the globe. I'd also like to talk about NVIDIA. As you know, NVIDIA is an investor in our company.
We have a long history of working with the NVIDIA team, and we want to continue to build on that relationship. In Q1, we made several announcements with them. First, in the Q1 time frame, Nebius and NVIDIA announced that Nebius would be one of the first AI clouds to offer the NVIDIA Blackwell Ultra AI Factory platform. We also became a launch partner for NVIDIA Dynamo, one of the most efficient solutions for scaling compute during inference. And Nebius was also named one of five reference platform NVIDIA Cloud Partners in this time frame, helping us as we specialize in and deliver AI-accelerated services built on NCP reference architectures.
And finally, some breaking news: Nebius will support the NVIDIA DGX Cloud Lepton marketplace at launch. We couldn't be more excited about our partnerships, not just with NVIDIA but across the landscape. So with that, I would like to hand it over to Roman.
Roman Chernin: Yes. Thank you, Daniel. Let's speak a little bit about customers. Our strategy is to serve a wide variety of customers with our robust platform. We have hundreds of customers, both managed and self-service, who use the Nebius cloud platform for training and inference workloads across various industries such as tech, media and entertainment, life sciences and more. With our expanding capacity footprint and global sales support, we are now able to serve customers 24x7 with a truly tailored approach from our high-level experts on both sides of the Atlantic, which, together with our advanced software platform, goes beyond commoditized GPU-as-a-service offerings.
This highlights our flexibility and ability to rapidly adapt to the evolving needs of our diverse customer base while delivering high-quality solutions powered by our tech stack. This is what our customers value the most. They recognize that we are building an AI-specialized cloud with hyperscaler-level capabilities. All of those factors contributed to our strong Q1 results. And going forward, the demand environment for AI compute remains robust, and our sales momentum has continued into Q2. April's annualized run rate revenue was $310 million, and we are continuing to experience strong demand into May. And now I'll pass it to Tom Blackwell to walk through our guidance.
Tom Blackwell: Yes. Thanks very much, Roman. As Roman said, we've had a great start to the year, a very strong first quarter, and we're carrying strong momentum into the second quarter. So we feel very confident in our ability to achieve the ARR guidance for the whole year that we gave, which was $750 million to $1 billion. We're well on track to achieve this. We're also reiterating our overall revenue guidance for the group, which is in the range of $500 million to $700 million. Turning to profitability, we're maintaining our adjusted EBITDA guidance for the full year.
Just to elaborate on that a bit: while we expect adjusted EBITDA to be negative for the full year, we plan to turn positive at some point in the second half of 2025. On CapEx, we're currently planning CapEx of approximately $2 billion for 2025. This is up a bit from the previous guidance of $1.5 billion due to a couple of factors. First, we had some CapEx spend that had been planned for late Q4, which actually fell in early Q1, and some of that leads to the increase toward $2 billion.
Also, as we've always said, we want to be opportunistic when it comes to ramping up our infrastructure capacity as we see demand, so we want to be able to chase that demand and secure it well. And so we've considered some additional investments beyond the initial data center expansion plan. For example, you may have seen some coverage recently around the data center in Israel, which we think is a great opportunity. It's a great market, and we will give some more color around that later on the call. So looking to the midterm, this is a great business.
It's in a great industry, and we think the future opportunity is immense. When we look at the midterm, we believe this business will achieve mid-single-digit billions of dollars in revenue, and we're actively building out our capacity pipeline to support that scale of revenue growth. The reality is that there are also scenarios where we could grow more aggressively. And so Andrey and his team are very focused on building out the whole potential infrastructure pipeline that would enable us to deliver potentially more than 1 gigawatt of capacity in the midterm.
If we do that, that would allow us to achieve significantly more revenue than the kind of midterm guidance that we're talking about here. So we'll be opportunistic, and we'll go after opportunities as we see them. Some of the factors that could drive additional incremental growth on top of the midterm guidance are more adoption from enterprise-level customers and also potentially larger, longer-term contracts. And again, Arkady will give a bit more color on that at a later stage.
In terms of profitability, this is a business that we can grow profitably, and we anticipate medium-term EBIT margins in the 20% to 30% range. This will be supported by our AI cloud business reaching scale. We also have an important differentiator, which is the full stack and particularly the software at the top end of the stack.
The software is a very important part of our business model. It's what makes us attractive and sticky to clients, and ultimately we think it's what's going to allow us to achieve higher margins, create higher-margin business models, and service a wider range of customers in different ways, which effectively increases our revenue per GPU. So it's not just a GPU-as-a-service model but a broader range of revenue sources. It's also important to note that we take a very conservative view on depreciation.
With all of these numbers, we apply a four-year depreciation schedule, while others typically use more of a 5- or 6-year depreciation schedule within our industry. And while we see 20% to 30% margins in the midterm, longer term we could go beyond that. There are a number of scenarios, as we continue to scale up and expand the business, where we could go well north of 30% in the longer term. So just to wrap up, we're building AI infrastructure successfully and at scale.
As you've heard Arkady talk about on previous calls, fundamentally we think our differentiation and what sets us apart comes down to two things. Above all, it is the quality of our technology. There's also our access to capital, which allows us to take advantage of that technology and to ramp up and scale quickly. So, briefly, on the technology: we have an amazing team of engineers building amazing hardware, software and services. These engineers are really the best of the best in the industry. It would take years to build a team of that quality, and we're fortunate to already have them. They're building great tech.
We're building out our AI-native cloud, and we're expanding the range of AI-native customers that we're able to service. Really, the AI cloud that we build goes well beyond what you might call a classic bare-metal offering. We're building out strong partnerships within the ecosystem, as Daniel talked about, and all of this is allowing us to reach and service a broader range of customers. In terms of capital, we think that we're in a very favorable and actually quite unique position among Neoclouds to finance this future growth in an efficient way.
We have significant capital funding potential for the core business, which comes from our various ownership and equity stakes in noncore businesses. The monetization of these equity stakes can translate very efficiently into bottom-line results for the core business. Just to give some examples of what we're referring to here: you may have seen ClickHouse in the news lately. We have a roughly 28% minority stake in the business, and this can potentially be a very important source of future capital.
According to some of the recent press reports, there's a fundraising round underway at the moment that would potentially value the business at around $6 billion, and we believe that business will continue to perform extremely well and grow significantly from current levels. We also have Toloka, and we're extremely pleased to announce that it has attracted strategic investors, with Jeff Bezos and Mikhail Parakhin coming into the structure. We think their investment and involvement in the business is really going to help Toloka scale up among the top tier of AI data companies globally, with great backing from these investors.
What's important for us is that, while we think this is great for Toloka, it's great for us and for our shareholders as well, because we maintain a significant majority economic interest in Toloka, so we'll benefit from the upside. We also have Avride. It's one of the best autonomous vehicle teams in the world, and they're doing great this year. In the last quarter, they announced partnerships with players like Uber, Hyundai, Grubhub and Rakuten. These partnerships really underscore the strength of the tech and the team and place them among a select group of global leaders in that field.
A brief note on Avride: as we've mentioned previously, we're in fairly active talks with potential third-party and strategic investors that could come into the business, which we believe would really help them scale up even faster and build out their business. But again, we would always look to retain a significant economic interest in the upside. So it's really our ability to use these assets and these stakes that gives us a very attractive source of financing.
So when we think about the future billions of dollars of investment in the core business, we will be able to very effectively monetize these businesses and grow extremely efficiently in a way that really minimizes any dilution to existing shareholders while allowing us to stay very disciplined in terms of debt. Just to summarize: once we achieve adjusted EBITDA profitability, our strong balance sheet and continued low interest burden, we believe, will allow revenue growth to translate very efficiently into bottom-line results. So I'll stop there, and Neil, I'll hand back over to you for Q&A.
Neil Doshi: [Operator Instructions] Great. Let's start with our first question. You just guided to midterm revenue and margins. What do you mean by midterm? And what are the building blocks to get there? Roman?
Roman Chernin: Yes, thank you, Neil. Our base case plan calls for several billion dollars of revenue in the midterm, over the next few years. While our base case assumes that we grow our capacity to support this type of revenue growth from the 2025 level of 100 megawatts, our ambition is to grow much larger and much faster. For that, we are building a data center pipeline to provide scalability to more than 1 gigawatt of power. Also, as Tom said earlier, how quickly we get there will be a function of how fast we can scale and capture demand through more enterprise-level customers and longer-term contracts.
And a few words about the margins: our target of 20% to 30% EBIT margins is a function of two factors. First, a greater mix of workloads where we can run our GPU fleet with a high level of utilization for a longer period of time. Second is software. We put a lot of effort into developing our software, which allows for a contribution from high-margin software and services revenue over the long term. It's also worth mentioning that, in addition, we take a more conservative view on depreciation, where we use a four-year depreciation schedule while others use 5 or 6 years.
So as more workloads shift to inference, this will lead to higher margins for us as well.
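To illustrate why a shorter depreciation schedule is the more conservative choice (hypothetical figures, not from the call): under straight-line depreciation, a $100 million GPU cluster written off over four years books $25 million of expense per year, versus $20 million per year over five years, so the shorter schedule recognizes costs sooner and weighs more heavily on near-term reported margins.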
Neil Doshi: Great. Thanks, Roman. Roman, maybe you could take the next question, too, which is around Q1 ARR: it was ahead of what you discussed on the last earnings call. What really drove that strength? And how are you feeling about the full year?
Roman Chernin: Yes. As I said, the overall demand environment in Q1 was strong. Customers want access to GPUs, and we see that demand strengthening each month. Customers, I believe, recognize the value of our infrastructure and software. We were able to provide reliable and scalable service. Our software enables customers to start accessing clusters with thousands of GPUs within a matter of days, not weeks, and we heard recognition of that from some of our core customers. We also saw the benefits of our sales team ramping up, and especially the investments in our presales, solution architects and customer success teams. Now we can provide 24/7 white-glove support.
I believe that significantly contributed to improving our sales process and, obviously, to overall customer success. Our brand awareness is also growing. We put a lot of effort there, and also thanks to industry recognition. For example, the SemiAnalysis ClusterMAX Gold status that Arkady mentioned contributed, and we see that our pipeline has become deeper and stronger. Also, our approach of bringing the newest chips online as early as possible, not in response to specific contracts but in a more cloud-like manner, and our flexibility to provide real cloud terms, a combination of pay-as-you-go and reservations of different lengths, are paying off as well.
A good example was the DeepSeek moment in February, when we could very quickly respond to the big demand for NVIDIA H200 chips, which we had deployed in greater volumes than perhaps some other players at that moment. All of that resulted in strong growth, and we reached a record-high number of managed customers during Q1. A few words about the full year: we continue to see a solid start to Q2. Demand remains robust. April annualized run rate revenue of $310 million confirms that, and we are seeing the strong momentum continue into May.
In the second half of the year, we expect to bring Blackwells to customers, which should provide further support to our revenue profile and gives us confidence that we can deliver on our guidance of $750 million to $1 billion in annualized run rate revenue by the end of Q4 2025.
Neil Doshi: So you discussed getting to positive adjusted EBITDA margins by the end of the year. Can you provide an update on when you think that will happen? Maybe we'll go to Tom for this.
Tom Blackwell: Yes, sure. So I guess, I touched on this briefly in the presentation, but just to pick up. So first of all, achieving positive adjusted EBITDA is an important milestone for us, and it really highlights that we're very focused on getting to profitability. And as we set out in some of the midterm targets, we believe this is a business that can post really strong profitability going forward. So specifically, again, with respect to adjusted EBITDA, we intend to reach positive territory at some point during the second half of the year.
One thing I would note is that if we break it down and look at the core infrastructure business, we'll move faster there and get to positive adjusted EBITDA probably sometime in the third quarter. The next goal will obviously be to focus on reaching positive adjusted EBIT, and we're working full steam toward that goal.
Neil Doshi: Great. And Tom, maybe sticking with you, there's a question here about CapEx. We've raised the CapEx guidance. Can we provide any update on the reasons for this?
Tom Blackwell: Yes. So look, our primary business model is predicated on building capacity for demand, and we've been very fortunate to be able to finance a lot of our CapEx with our cash on hand up until now. Looking at this year, first of all, in terms of the specific guidance: as I mentioned earlier, we had some CapEx spend that had been queued up for the end of the fourth quarter last year, which got pushed into the first quarter. That's just down to typical quarter-to-quarter fluctuations based on various factors related to data center build-outs.
But again, we want to be opportunistic. We view the targets that we've set out as base cases, but there are a lot of scenarios where we can do more and go more aggressively. And where we see an opportunity to do so in a way that's value accretive to our shareholders, we want to be able to act. So, for example, when we see an opportunity to ramp up capacity faster around existing demand that we can see, we want to be able to do so.
The Israel data center, for example, is one that we hadn't initially had in our road map, but it was an opportunity that came along, and we thought it was a good one for us to go for. So we're very pleased, and at a later stage I'll ask Roman to give a bit more color around that. But it's a great market, and we're very excited to be getting into it. It's a new geography on top of some of the previous geographies that we've been focused on.
In terms of those incremental data center build-outs like Israel, from a revenue standpoint, we'll be investing in putting the capacity in place later this year, and we'll see more of the revenue contribution in 2026, keeping us very much on that path toward the mid-single-digit billions of revenue that we spoke about earlier in the presentation.
Neil Doshi: Great. Maybe keeping with the theme of CapEx, Tom, I know you touched a little bit in the slides on how we're going to finance our future growth. So how do we expect to finance the CapEx expansion, given that the cash balance now is below what we're planning to spend?
Tom Blackwell: Yes, sure. So just to recap: if we look at Q1, we've already spent $544 million in the first quarter toward that overall $2 billion of CapEx I talked about, and at the end of the quarter we had $1.44 billion of cash remaining on the balance sheet. So we feel good about our ability to finance that CapEx. And again, I would just come back to this point.
Going beyond that and looking further afield, the equity stakes that we have in these noncore businesses, we believe, will provide very significant funding sources for the future, against which we can continue to ramp up and scale beyond this year in ways that really minimize dilution to shareholders and allow us to stay very disciplined on debt, which I think is a really important point. Obviously, as a public company, we have access to more traditional funding sources, and we will look at those from time to time when we believe they make sense and are value accretive.
And another point I would make is that right now we are very fortunate to have no debt, and we anticipate continuing to have relatively low levels of debt. That means we're going to be able to reinvest a significant amount of our revenue back into driving value creation in our core AI infrastructure business.
Neil Doshi: Great. Looks like we're getting a higher-level question here around future growth. So Arkady, maybe you can tell us where you're seeing the future growth in this business?
Arkady Volozh: Well, the majority of our current customer base is all those new AI companies that emerged in the past couple of years, and they keep coming to the market every month. They are very advanced in the technology; we call them AI native. They are smart and very fast growing. Actually, we are like them, and they like us. Those companies are usually venture-backed and, understandably, the majority of them are in the U.S. That's why we are so focused on building our data center capacity in the U.S. right now.
All the growth you currently see in Nebius, this quarter's results, the year's results, most of our revenue, comes mostly from this market now. The second very promising sector, which, by the way, is not in our revenue yet, is all those frontier AI labs, the big customers. We haven't tapped this market yet, but we're doing a lot to be ready to serve them and help them grow faster. In order to serve those customers, we will need much more and much bigger data centers, and we are getting ready for this. This is why we have a pipeline to get to more than 1 gigawatt of data center capacity.
So we are not there yet, but we will be there soon. That's the second sector. The third sector, maybe the most promising in terms of growth, is the enterprises. AI technology today has reached just a small fraction of corporate clients. Everybody talks about it, but at the same time, this is where the world expects the majority of the added value that AI will be creating. And by the way, our full-stack solution and the higher-level services we provide are very much relevant exactly here. This market is much more global by nature because real industry, real enterprises, are everywhere in the world, in many countries.
And this is where our European and global infrastructure presence will be in high demand, I think. And by the way, I would say that outside of just a few hyperscalers, we are one of very few AI cloud providers that can actually serve corporate clients in multiple geographies. So this is the most promising sector for us, I believe, in the near future. And there is also a fourth sector, which we're watching carefully: the potential market in national AI projects. We hear more and more about them, and here again we see a huge opportunity for us, and we plan to build our AI factories in different countries and geographies, in the U.S., in Europe, in the Middle East and elsewhere.
All in all, those four sectors and the whole market are just the beginning for AI technology and the AI business. AI infrastructure will be in high demand in many industries and in many geographies, and Nebius will be there to serve this demand.
Neil Doshi: All right. Let's see. Next question: how does the Toloka deconsolidation impact your business? I can probably take this. Toloka is our AI data solutions provider. They've done a really good job in terms of building their business. They have high-quality customers like Amazon, Anthropic, Microsoft, Poolside, Recraft and Shopify, and we believe Toloka really has great growth prospects. And this is really validated by the investments from Bezos Expeditions and Mikhail Parakhin, the CTO of Shopify. Given these growth prospects, we're happy to retain a significant majority economic stake in Toloka. And as our voting interest has now dropped below 50%, we will be deconsolidating Toloka.
Since the transaction closed in May, we will be updating our financials and guidance ex Toloka in our Q2 earnings report. All right. Looks like we have a few questions around infrastructure. Maybe we can start with Andrey on this one. Andrey, can you provide us an update on your capacity expansion plans for this year?
Andrey Korolenko: Sure, Neil. We are aggressively building and acquiring data center capacity and expanding our footprint. What we announced in February was the New Jersey data center, a build-to-suit project built by a partner according to our specifications and design. This is quite important because it helps us deliver the power usage and cooling efficiency that we demand of ourselves. We expect the first capacity in New Jersey to be operational in late summer, and then it will continue to roll out on a periodic basis in line with demand. We also announced Kansas City.
The first part of it is already fully operational, and that was the last deployment of the Hopper GPU generation for us. At the moment, Blackwells are being deployed in the second part of Kansas City, and they will be available on the platform a bit later in the second quarter. We also announced and actually launched Iceland, which is fully operational at the moment, and our build-out in Finland is going quite well and is exactly on track; we expect the first phase of that expansion to be operational late in Q3, and the second phase will follow closer to year-end.
Overall, we expect to have over 100 megawatts of capacity deployed this year.
Neil Doshi: Great. Thank you, Andrey. So can you share more about the new site in Israel, and can you discuss your expansion strategy beyond the EU and U.S.? Tom, I know you kind of alluded to this; maybe Roman, you can talk a little bit more about this and elaborate on Israel.
Roman Chernin: Yes. Thank you, Neil. First of all, Israel means that we have opened one more market. As we said, we were and continue to be very much focused on scaling our capacity in Europe and the U.S., but we don't want to be limited only to those markets. So first of all, it's a new market for Nebius. Israel has a great AI market with a lot of AI-native start-ups, enterprises and R&D centers of global corporations. This is a great gateway for us to a lot of customers. But what's also important is that this is our first, but probably not our last, step in supporting national AI factories.
We hope to support and build more national AI factories around the world, and we'll look to see how we can plug into those initiatives across Europe, the Middle East and the rest of the world. So we are open and opportunistic and looking at these markets, as Arkady mentioned.
Neil Doshi: All right. Can you share an update on your GPU rollout plan for this year, Andrey?
Andrey Korolenko: Aside from the Israel capacity, we're very much on track with the rollout that we planned earlier. In Q1 specifically, we deployed the Hopper generation, the H200. At the moment, we are rolling out Blackwell, as I already mentioned, and it will be available on the platform shortly. We have also started to deploy the Grace Blackwell family, the GB200. We expect the first Blackwell Ultra deployments to start in Q3, and for the majority of this year, we will actually be deploying Blackwells.
Neil Doshi: Great. Thank you, Andrey. Andrey, maybe sticking with you again, we have a question around regulatory issues and tariffs. Any thoughts on the impact of tariffs on our data center expansion plans? And also, how are you thinking about the cost to our business?
Andrey Korolenko: Good question. Yes, there is definitely some uncertainty around global tariffs, but based on where we stand now, we don't believe the current status would result in major changes to our expansion plans. We also believe that we can navigate the current tariff environment without significant impact to our costs. I would note, however, that it's a very dynamic situation and things can change quite quickly, as we already saw during Q1, and we are actively monitoring it.
Neil Doshi: Thanks, Andrey. So it looks like we have some questions about customers. Maybe I'll give this to Daniel. Daniel, tell us more about Nebius' customers. Why are they choosing Nebius over other providers?
Daniel Bounds: Great. Thanks, Neil, and thanks for the question. First of all, our customers choose us because we offer a high-performance, resilient and scalable alternative to other cloud providers, but here's what really makes the difference: our differentiation lies in our deep expertise in hyperscale infrastructure and our role as a hands-on practitioner alongside our customers. We're not just another platform vendor. What this does is ultimately enable us to drive a greater return for every AI dollar our customers spend. As some examples of that, in Q1 we saw great momentum and new wins in vertical industries like health care and life sciences, media and entertainment, and financial services. One customer of ours, Captions, is a leading AI video platform.
They partnered with us to scale GPU training for their next-generation audio-to-video model, Mirage. By leveraging our infrastructure, they accelerated their time to market, empowered their creators to deliver emotionally compelling, story-driven content, and ultimately pushed the boundaries of AI-powered storytelling. So that's a great example in the media and entertainment industry. Another example would be Quantori, a top biopharma partner of ours. They use Nebius to build a framework for 3D molecular generation. By increasing the number of molecules they could model, they achieved chemically valid structures, enabled faster, scalable R&D, and accelerated their innovation in drug and materials discovery.
So we're really unlocking the power of AI for those customers, and that's just the beginning. Looking ahead, we're doubling down on the verticalization of AI solutions across the enterprise. As customers ranging from retail to robotics embed AI deeper into their core operations, we want to be right there with them to drive measurable results.
Neil Doshi: Great. Thanks, Daniel. We're getting a few questions around contracts. So maybe, Roman, can you tell us a little bit more or give us an update on what type of contracts we're seeing in the market, maybe in terms of structure and duration?
Roman Chernin: Yes. Thank you, Neil. The first thing I want to highlight is that one of the benefits of coming to Nebius is our flexibility, which allows us to support and grow with AI-native tech startups and meet their needs. Contract terms tend to run from several months to a year and beyond. In addition, as we are just starting to bring on the fleet of Blackwells, that is opening up more discussions about longer-term contracts. There is high interest in the new generation, the GB200s and GB300s, and we expect this to drive more demand and give us flexibility on the types of contracts we will be able to secure.
Neil Doshi: All right. Quickly on NVIDIA. Can you talk a little bit more about the NVIDIA relationship? How is that progressing? Daniel, I know you shared some thoughts in your slide, but anything more you want to elaborate there?
Daniel Bounds: Yes. I think between Andrey and me, we've covered a lot, but I'll reiterate a bit in case anybody missed a few details. Obviously, we have not just a tight collaboration but a long-standing collaboration with NVIDIA; they were an investor in our capital raise last December, and we have a very robust go-to-market that we've built with them. In Q1 in particular, as I mentioned before, across the Blackwell family, and particularly as we announced the Blackwell Ultra AI Factory platform, we are going to be one of the first vendors to stand up GB300 NVL72-powered instances.
We think this is going to be a real game changer in the market, and we're right there with NVIDIA as those roll out. We also talked about the ecosystem and NVIDIA Dynamo, the open-source inference framework. So as we continue to roll out the scale and variety of AI factories that are needed in the market, we're right there with them in real time.
And the other thing I mentioned earlier, which is still important: the ability to stand up a cloud based on NVIDIA architecture that actually performs to spec and delivers at least a dollar of value for every dollar invested, if not more, is what the NVIDIA Cloud Partner program is all about and what the reference architectures, for which we're one of five partners, really deliver for customers. It's a clear validation of our technical leadership, and that rolls over into the marketplace they're standing up with DGX Cloud Lepton and lots of other opportunities, whether with the startup community or expanding out and helping enterprises monetize AI.
We've been very much in lockstep with NVIDIA for a very long time and look to have a very bright future.
Neil Doshi: Great. We're definitely getting some questions around our software stack, and we also get these questions quite a bit when speaking with shareholders and investors. It seems like we launched a lot of products in Q1 on the software side. How does our software stack compare with our competitors'? And what were the biggest launches? Andrey, maybe you want to take this.
Andrey Korolenko: Yes, Neil. We started Nebius with a clear goal to build a full-stack AI cloud. That means that from day one, our focus has been to create a software stack that is specifically built for AI workloads. Our stack is basically three layers. The first is the layer that manages our hardware; since we design our hardware, we can also offer the tools to monitor its performance and optimize its usage. The second layer is a full cloud platform, pretty similar to the big hyperscalers: it's a virtualized environment, so customers get more flexibility and better stability overall.
The third is an application layer, where we deliver pre-configured third-party AI tools that simplify the entire AI development process. We shipped quite a lot of products in Q1, around 50 across the AI cloud and AI Studio. Notably, as I said in my opening remarks, we launched Slurm-based cluster upgrades such as automatic recovery, proactive system checks, and issue detection before jobs actually fail. These changes significantly reduce downtime for customers and improve time to recover on our side. We also made a lot of improvements to our object storage.
We boosted read and write speeds for compute nodes, which ensures that data sets can be accessed and saved quickly enough for training runs, and that improves time to results during training. The partnerships with the leading storage companies also help us provide more flexibility to our customers. And as I said earlier, on integrations, we believe it's very important to integrate with existing AI platforms such as Metaflow, dstack and SkyPilot, which allows customers to bring their jobs and their existing tools to us with minimal friction.
Neil Doshi: Great. Thank you, Andrey. So in terms of financial performance, tying software back to our financials, how does that drive revenue and margins? Tom, maybe you can take a stab at this question.
Tom Blackwell: Yes, sure. I think it's important to understand that our software stack is a critical part of the offering. Many of our customers rely on the stack to help them manage and execute their workloads, and we're relatively unique in having that full-stack offering among the Neoclouds in our space. It also just makes us very sticky with customers. For example, it allows us to provision large clusters of GPUs quickly, so customers can start their jobs without waiting.
We've created various tools to help them manage their data and models and track their progress. When we think about revenue contribution, the standalone contribution you could break out as of today is relatively small, but it's very much a value-added part of the offering. It drives customers to come to us and drives overall revenue, so it's part of the broader offering. And we're going to be very focused on building it out, building out use cases, and continuing to make the products stickier with customers going forward.
Over time, it can probably become the most significant standalone driver of higher-margin revenue, but it's really about helping us access a wider range of customers, offer higher-margin services and products, and keep them on the platform. So it's an important part of our overall revenue growth going into the midterm.
Neil Doshi: Great. And coming back to the funding question, maybe a little more pointed here. The question is: you will need funding for this year, but also for the coming years. How are you thinking about financing options? Tom, I know you talked a little bit about that on the slides and in the CapEx question, but is there anything else you want to reiterate on this point?
Tom Blackwell: I'll just reiterate quite briefly on this. Again, as we think about funding the growth, we want to do it in a way that minimizes shareholder dilution and allows us to be prudent in terms of debt. We're in a great position to do that with the cash we have on the balance sheet and with the potential to monetize these various stakes in the noncore businesses. Of course, in due course we'll be considering other, more classical capital markets options, and we'll update you when we have more.
But again, we feel very good about our ability to continue to fund this growth based on the sources of capital available to us.
Neil Doshi: Great. There's a question on the other business, specifically Avride. You said you may explore strategic options. Can you maybe share a little bit more about why you're excited about Avride and maybe what those options could potentially be, Arkady?
Arkady Volozh: Well, yes, we've covered it several times. There are just a few independent autonomous vehicle platforms that can compete in the U.S. market today. Definitely, the market is excited about what Waymo has achieved and sees it, and one of the very few players that can actually build a platform comparable to that is Avride. And as you can see, other big market players actually recognize this; look at the recent announcements of Avride's partnerships with Uber, with Hyundai, with other big players. And as we said, they need to grow, and they need to grow much faster. It's yet another capital-intensive business in our portfolio, and we are in active discussions.
We can confirm that we are in active discussions with potential strategic partners who can really help drive the growth of this ambitious project.
Neil Doshi: Maybe one last question, a clarifying question. Can you explain exactly what you mean by midterm? Tom, maybe you want to take this one?
Tom Blackwell: Yes, sure. Just to recap on that: as we look into the midterm, we really believe this business can scale up quickly and achieve mid-single-digit billions of dollars of revenue. What we mean by midterm is effectively a few years. But at the same time, as we've tried to outline, we're working very hard to go as aggressively as we can and get there as soon as possible. And I think we've framed how we're thinking about the future growth.
A lot of our existing revenue forecast stems from this AI-native customer base, but I think Arkady set out the other incremental sources of growth around the enterprise customers, the big labs, and so on. So in our mind, midterm is a few years, and we'll go as quickly as we can.
Neil Doshi: Great. Thank you, everyone, for participating on our first quarter 2025 earnings call, and we will see you again on our Q2 call. Thanks.
Arkady Volozh: Thank you.
This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
The Motley Fool has positions in and recommends Nebius Group. The Motley Fool has a disclosure policy.