TradingKey - Amidst the AI infrastructure land grab, where hyperscalers take center stage and capital flows freely, Nebius Group (NBIS) is quietly executing a high-speed pivot that could make it the most under-the-radar AI infrastructure disruptor of 2025. After a 2024 restructuring followed by a formal launch in July, Nebius grew its core AI infrastructure ARR to a record $249 million through March 2025, up 684% year over year; April ARR reached $310 million, and full-year guidance stands at $750 million to $1 billion.
Source: Nebius Q1 2025 Earnings Results
This is not just rapid expansion; it is category-defining velocity in one of the most capital-intensive spaces on the planet. What distinguishes Nebius is not merely top-line speed but its architectural approach: a full-stack, vertically integrated AI infrastructure platform with in-house software orchestration, direct hardware provisioning (including Blackwell GPUs), and extensible platform layers spanning MLOps, inference services, and developer tooling.
In an environment increasingly shaped by regional compliance mandates, performance-per-dollar optimization, and rapid model iteration, Nebius is the first hyperscaler alternative to Big Tech capable of competing on raw speed, but with a different customization and cost model. And yet the stock is priced cheaply relative to what it is building: an AI-first infrastructure platform with near-zero debt, more than $1.4 billion in cash, and strategic monetization levers in equity stakes such as ClickHouse and Toloka.
As the company approaches EBITDA breakeven in H2 2025, investors need to ask: is this a specialty cloud provider, or the early innings of a multi-billion-dollar infrastructure engine built for next-generation AI-native software?
Source: Nebius Group Investor Presentation
Behind Nebius’ competitive edge is a software-driven AI cloud stack that goes far beyond GPU-as-a-service. Nebius is expanding across five geographies, with purpose-built facilities and colocation expansion that is rapidly adding compute density.
Source: Grand View Research
By Q1 2025, Nebius had moved from a single facility in Finland to active sites in Iceland, Kansas City, and France, with New Jersey slated for activation by late summer. By year-end, the company expects to be running 100 MW of contracted power, already on par with smaller hyperscaler zones.
In contrast to most neocloud competitors built on repackaged bare metal or third-party MLOps layers, Nebius vertically integrates infrastructure management, orchestration, and ML workflows in its Nebius AI Cloud platform. The stack encompasses Slurm-based cluster management, Kubernetes automation, MLOps with MLflow, and topology-aware training features. Object storage optimizations delivering up to 10 GB/s read speeds minimize time-to-result on massive training datasets, while open integrations (Metaflow, SkyPilot, Hugging Face) make onboarding nearly frictionless.
Source: Nebius Group Investor Presentation
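To make the MLOps layer concrete, here is a minimal, generic MLflow tracking sketch of the kind of workflow such a stack hosts; the tracking URI, experiment name, and all logged values are placeholders for illustration, not Nebius-specific details.

```python
# Minimal MLflow experiment-tracking sketch (generic; the tracking URI is a
# placeholder, not a Nebius endpoint). Requires: pip install mlflow
import mlflow

mlflow.set_tracking_uri("http://tracking.example.internal:5000")  # placeholder
mlflow.set_experiment("llm-finetune-demo")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters for a hypothetical training job
    mlflow.log_param("base_model", "example-7b")
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_param("num_gpus", 8)

    # In a real job these metrics would come from the training loop;
    # here they are dummy values to show the logging pattern.
    for step, loss in enumerate([2.1, 1.7, 1.4, 1.2]):
        mlflow.log_metric("train_loss", loss, step=step)
```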
The company's developer-centric strategy is supported by SDKs, expanded self-service GPU allocations, and inference-as-a-service (AI Studio), which now counts over 60,000 registered developers. AI Studio is in the early stages of monetization, but a deepening catalog, including DeepSeek R1, Google Gemma, and custom tuning tools, positions it as a high-margin driver over the long term. Notably, this layered strategy follows AWS's playbook from the 2010s, but in a post-GPU-shortage environment and with platform-native design.
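As an illustration of how inference-as-a-service platforms of this kind are typically consumed, the sketch below calls an OpenAI-compatible chat endpoint; the base URL, environment variable, and model identifier are assumptions for illustration, not confirmed AI Studio details.

```python
# Hypothetical call to an OpenAI-compatible inference endpoint of the kind
# AI Studio exposes; base_url, env var, and model id are placeholders.
# Requires: pip install openai
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://inference.example.com/v1",   # placeholder endpoint
    api_key=os.environ["INFERENCE_API_KEY"],       # hypothetical env var
)

response = client.chat.completions.create(
    model="deepseek-r1",                           # illustrative model id
    messages=[
        {"role": "user", "content": "Summarize the trade-offs of 4-year GPU depreciation."}
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```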
Although Nebius is not yet a household name alongside AWS, Azure, and GCP, its credibility took a leap with recognition in SemiAnalysis's Gold Tier of GPU cloud providers, ranking alongside Azure and ahead of GCP, CoreWeave, and Oracle. Why? A performance-per-dollar advantage, deep GPU orchestration know-how, and operational discipline supporting provisioning times under five minutes at scale. This puts Nebius in a select strategic club: a "new hyperscaler" free of legacy bloat and compliance anchors.
Additional credibility came when Nvidia designated Nebius a Reference Platform Cloud Partner, one of only five in the world. By the end of the year, Nebius will be among the first vendors offering Blackwell Ultra AI Factory nodes (GB300 NVL72). This is not a future roadmap: early B200 units are already in production, and Blackwell instances are slated for Q3 public availability in New Jersey.
Concurrently, Nebius participated in Nvidia’s Dynamo effort, an open-source platform tuned specifically for large-scale inference, signaling deep technical affinity and favored access to Nvidia’s product cycle. Customer logos and use cases validate the traction: from CentML's inference-optimized software stack to Prima Mente's life sciences models and Captions' video workflows, Nebius is being adopted by AI-native businesses looking for reliability, cost efficiency, and flexibility in customization.
Supporting both compute-intensive training and latency-sensitive inference makes it a rare dual-mode provider. While most AI clouds chase frontier labs, Nebius is systematically building bridges to the enterprise. Its plans include ISO/IEC 27000-series, HIPAA, and SOC 2 certifications, plus a recently expanded salesforce with growing reach into media, healthcare, and biotech.
This positions it well to capture the $50 billion-plus of latent demand for regulated, high-compliance AI workloads, a segment where AWS leads today but where Nebius is structurally advantaged through software-first design and modular deployments.
On the face of it, Nebius' Q1 2025 adjusted EBITDA of -$62.6 million might look like startup burn. But under the hood is a capital-efficiency story that deserves institutional attention. Nebius grew revenue 385% YoY to $55.3 million, with ARR up 175% QoQ to $249 million. Significantly, cost of revenue as a share of sales fell from 78% to 53% YoY, an early sign of gross-margin scaling.
CapEx totaled $544 million in Q1 alone, primarily for Blackwell, H200, and H100 installations, but with $1.44 billion in cash and no debt, Nebius does not need dilutive equity raises or heavy leverage. More interesting is the embedded optionality in non-core asset monetization. Its 28% stake in ClickHouse, at an estimated $6 billion valuation, is potentially worth hundreds of millions in non-dilutive funding. Toloka, backed by Bezos Expeditions and Shopify's Chief Technology Officer, will be deconsolidated starting in Q2, further sharpening focus on the core while preserving upside.
On a margin basis, adjusted EBITDA margin improved from -622% to -113% YoY, a swing of 509 percentage points in a single year. All this despite GPU depreciation surging 453% YoY, a consequence of Nebius's conservative four-year straight-line schedule (vs. the 5–7 years typical at hyperscalers). That schedule depresses near-term margins but sets the company up to harvest operating leverage in H2 2025 as existing capacity is absorbed. Management guides to $750 million–$1 billion ARR by December and H2 EBITDA breakeven, on a trajectory toward 20–30% adjusted EBIT margins by 2026.
Source: Nebius Q1 2025 Earnings Results
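The margin arithmetic above is easy to verify with a back-of-the-envelope calculation; the depreciation comparison below uses an illustrative $1,000 of GPU capex, a hypothetical figure, purely to show the mechanical effect of a four-year versus a longer six-year schedule.

```python
# Back-of-the-envelope check of the margin math cited above.
# The capex figure in the depreciation comparison is illustrative, not reported.

# Gross-margin trend implied by cost of revenue falling from 78% to 53% of sales
cost_of_revenue_pct = {"Q1 2024": 0.78, "Q1 2025": 0.53}
gross_margin = {q: round(1 - c, 2) for q, c in cost_of_revenue_pct.items()}
print(gross_margin)            # {'Q1 2024': 0.22, 'Q1 2025': 0.47}

# Adjusted EBITDA margin swing: percentage points, not basis points
improvement_pp = -113 - (-622)
print(improvement_pp)          # 509

# Straight-line GPU depreciation: 4-year schedule vs. a longer 6-year schedule
illustrative_capex = 1_000     # hypothetical spend
for years in (4, 6):
    print(years, round(illustrative_capex / years, 1))  # 250.0 vs. ~166.7 per year
```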
Nebius is targeting mid-single-digit billions in long-term revenue while keeping a sub-5% net debt ratio and preserving strategic balance-sheet optionality. That is not just fast growth but structurally sound growth, a rare and potent mix for an early-stage infrastructure company.
On 2025 revenue guidance of $500–700 million, Nebius trades well below peer comparables. Even at the low-end $500 million top line and a 10x EV/revenue multiple (a discount to CoreWeave’s ~15x), the implied value is $5–7 billion, which the current public market valuation does not fully reflect.
Source: Nebius Q1 2025 Earnings Results
But that still does not account for embedded optionality. Adding the equity stakes, ClickHouse (a ~$1.7 billion value for its 28% stake), Toloka (deconsolidated, with significant economics retained), and Avride (roughly $2 billion on peer comps), brings the sum-of-the-parts NAV to $8–10 billion or more. If the core AI segment reaches $1 billion ARR by December as expected and converts at 20–25% EBITDA margins in 2026, Nebius would generate $200–250 million in EBITDA next year, supporting a potential $10–12 billion valuation at 40–50x EV/EBITDA multiples.
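For readers who want the mechanics, here is a quick sketch of that valuation math using only the figures already cited in this article; all of them are estimates rather than reported values.

```python
# Sum-of-the-parts and forward-multiple sketch using the estimates cited above.

# Core business on 2025 revenue guidance at a 10x EV/revenue multiple
revenue_guidance = (500e6, 700e6)
core_ev = tuple(r * 10 for r in revenue_guidance)            # $5.0B to $7.0B

# Equity-stake add-ons (Toloka economics retained but not valued here)
clickhouse_stake = 0.28 * 6e9                                # ~$1.68B
avride_estimate = 2e9                                        # peer-comp estimate
sotp_low = core_ev[0] + clickhouse_stake + avride_estimate   # ~$8.7B
sotp_high = core_ev[1] + clickhouse_stake + avride_estimate  # ~$10.7B

# Forward EBITDA scenario: $1B ARR converting at 20-25% margins in 2026
ebitda_2026 = (1e9 * 0.20, 1e9 * 0.25)                       # $200M to $250M
implied_ev = (ebitda_2026[0] * 40, ebitda_2026[1] * 50)      # $8B to $12.5B,
                                                             # bracketing the $10-12B estimate
print(core_ev, (sotp_low, sotp_high), implied_ev)
```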
The capital cycle supports multiple expansion. With roughly $2 billion of CapEx planned through 2025 and 100 MW in place by year-end, the infrastructure platform is positioned for 5x–7x utilization growth on a non-linear CapEx ramp, replicating hyperscaler economics in the post-scaling regime. And unlike the hyperscalers, Nebius is not weighed down by public-sector contracts, geopolitics, or inflated margin expectations.
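To illustrate that non-linearity, the stylized toy model below, in which every number is hypothetical, shows how margins swing once capacity costs are fixed and utilization climbs.

```python
# Stylized operating-leverage toy model: all numbers are hypothetical and
# chosen only to illustrate how fixed capacity costs drive non-linear margins.

fixed_capacity_cost = 100.0      # depreciation + power + colocation, per period
revenue_at_full_util = 250.0     # revenue if the installed fleet were fully sold
variable_cost_ratio = 0.15       # variable costs as a share of revenue

for utilization in (0.2, 0.5, 0.8):
    revenue = revenue_at_full_util * utilization
    ebitda = revenue * (1 - variable_cost_ratio) - fixed_capacity_cost
    print(f"utilization {utilization:.0%}: EBITDA margin {ebitda / revenue:+.0%}")
# 20% -> about -115%, 50% -> about +5%, 80% -> about +35%
```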
The asymmetric bet: if Nebius succeeds in enterprise adoption, from AI in healthcare to other regulated industries, and stays software-centric in its focus, it could become a full-stack AI platform capable of growing revenue 10x over the next five years.
Set against that bullish path, Nebius carries real risks. Execution risk is high, most notably in enterprise penetration, since expanding beyond AI-native startups demands certification, compliance, and field-sales discipline.
Further, the deconsolidation of Toloka, while strategic, reduces near-term revenue clarity and may cloud visibility into growth. CapEx is aggressive, even if internally funded: with $2 billion of spending in 2025 alone, provisioning missteps or underutilization could push the EBITDA inflection further out.
Competitive pressure from hyperscalers, most notably AWS's regional go-to-market strategy and GCP's GenAI capabilities, could compress pricing as adoption scales. Finally, although the company has built a strong engineering reputation, sustaining performance parity once Blackwell is commoditized will require ongoing software-layer differentiation. Otherwise, Nebius risks being relegated to a commodity GPU lessor rather than a platform company.
Nebius Group is not simply a cloud provider riding rapid AI growth. It is arguably the first dev-native, full-stack AI infrastructure platform being built in the public markets. With a potential path to $1 billion ARR, deep Nvidia alignment, and platform-level vertical integration, Nebius is building not just GPU infrastructure but monetization flywheels, balance-sheet resilience, and enterprise-class scale.
This is a once-in-a-lifetime opportunity for institutional investors: a subscale AI infrastructure company with hyperscaler DNA, playing ahead of where the market is, and priced below intrinsic value. While the market continues to label Nebius a niche neocloud, structurally it is starting to look like a next-generation operating system for AI.