Tuesday, Feb. 24, 2026 at 8 a.m. ET
DigitalOcean Holdings (NYSE:DOCN) reported a return to accelerated growth, highlighted by 18% year-over-year revenue growth and over $1 billion in annualized run-rate revenue, both driven by rapid AI and DNE customer expansion. AI customer ARR reached $120 million, up 150% year over year and now 12% of total ARR, with 70% of that ARR coming from higher-value inference and general-purpose cloud products rather than bare metal GPU rentals. Management set a full-year 2026 growth target of 19%-23%, projecting a Q4 exit rate above 25% and laying out a path to 30% year-over-year growth in 2027 through the onboarding of 31 megawatts of new data center capacity. The top revenue cohorts, especially million-dollar customers, demonstrated zero churn and triple-digit ARR growth, while the company introduced a new AI customer revenue metric and continued significant product innovation focused on inference capabilities. The current product mix and margin structure support strong free cash flow, though short-term margin compression is expected as new capacity comes online, with leverage temporarily rising before declining as utilization ramps.
Melanie Strate: Thank you, and good morning. Thank you all for joining us today to review DigitalOcean Holdings, Inc.'s fourth quarter and full year 2025 financial results and an investor update. Joining me on the call today are Padmanabhan T. Srinivasan, our Chief Executive Officer, and W. Matthew Steinfort, our Chief Financial Officer. Before we begin, let me remind you that certain statements made on the call today may be considered forward-looking statements, which reflect management's best judgment based on currently available information. Our actual results may differ materially from those projected in these forward-looking statements, including our financial outlook.
I direct your attention to the risk factors contained in our filings with the SEC as well as those referenced in today's press release that is posted on our website. DigitalOcean Holdings, Inc. expressly disclaims any obligation or undertaking to release publicly any updates or revisions to any forward-looking statements made today. Additionally, non-GAAP financial measures will be discussed on this conference call, and reconciliations to the most directly comparable GAAP financial measures can be found in today's earnings press release, as well as in our investor presentation that outlines the discussion on today's call. A webcast of today's call is also available in the IR section of our website.
And with that, I will turn the call over to Padmanabhan T. Srinivasan.
Padmanabhan T. Srinivasan: Thank you, Melanie. Good morning, everyone, and thank you for joining us. We had a fantastic quarter and a very strong finish to the year. And I am excited to share the details with all of you. We ended the year with 18% revenue growth in Q4, reaching $901 million for the full year. We delivered $51 million in incremental organic ARR, the highest in the company's history. Our million-dollar customers reached $133 million in ARR, growing at 123% year over year. We maintain financial discipline and strong profitability with 42% adjusted EBITDA margins and 19% adjusted free cash flow margins for the year. There is a lot to be excited about.
And given this momentum that we are seeing and the progress we are making against our long-term strategy, we wanted to provide a more comprehensive update today rather than wait for a separate investor day. Our prepared remarks will be slightly longer than usual. We will advance slides from our earnings presentation on the webcast as we go, and we will leave plenty of time for questions.
AI is reshaping entire industries. And we are built for this shift. Software is being disrupted, not by incremental AI features, but by a structural shift to agentic systems operating at scale. Cloud and AI-native disruptors are moving beyond AI experimentation at a breakneck speed. They are deploying agents that reason, act, retain memory, and run continuously. In this structural shift, we see a secular hyperscale-sized opportunity by serving AI and cloud-native companies driving this disruption.
When markets are disrupted like this, there is typically a short window to take advantage of the opportunity, and let me tell you how we are seizing it. First, our top customers are now our growth engine. We have turned what was once viewed as a weakness into a competitive strength. Our top digital native customers, or DNEs, which include cloud and AI-native companies, are now our fastest-growing cohort, and in fact, growing significantly faster than the market on DigitalOcean Holdings, Inc. In a nutshell, scaling our top customers was once a constraint. Today, it is our growth engine.
Second, we are on the right side of software disruption driven by AI. Modern cloud and AI-native companies are going after large markets with this disruptive AI-centric software innovation. They are increasingly choosing DigitalOcean Holdings, Inc. as their natural platform to build and scale their agentic AI software. And when these companies disrupt and scale at unprecedented rates on our platform, we win.
Third, we put the cloud in Neo Cloud. These AI natives need more than just GPU rentals or inference APIs. They need access to optimized AI models, both closed and open source, production-grade inferencing, and a full-stack cloud for their software, all working together at global scale. We deliver all of it in one integrated agentic inference cloud.
And finally, we are building a durable and profitable growth engine. We are investing responsibly while driving balanced growth. Without chasing the GPU training arms race, we expect to deliver 21% revenue growth in 2026, reaching 25% plus growth by Q4 2026, and 30% growth in 2027. We are on a path to being a Rule of 50 company next year, on the back of our existing committed data center capacity alone. Put simply, we are accelerating growth the DigitalOcean Holdings, Inc. way.
In December, we crossed a major milestone, surpassing a $1 billion revenue run rate. This is a remarkable achievement for a company that was founded through Techstars in 2012. This success is a testament to our passionate team and the vision of our original founders. I also extend my deepest gratitude to all our incredible customers who have supported us throughout this journey. But what matters more than this milestone is where we are going. We exited 2025 at 18% year-over-year growth, and are on a path to deliver 21% growth in 2026 with an exit growth rate of 25% plus in 2026. We are picking up momentum, and we have outgrown the old narrative.
Let me elaborate. Our top customers are now our growth engine. For our first decade, we built an iconic developer cloud. That foundation still matters, and we have over 4 million active developers on our platform that absolutely love us. Over the last several quarters, we have deliberately shifted focus towards serving our top DNEs and eliminating any reason for them to leave DigitalOcean Holdings, Inc. at their scale. And that focus is working. In Q4, we delivered a record organic incremental ARR of $51 million and $150 million on a trailing twelve-month basis, both surpassing even our peak COVID-era quarters. This record trailing twelve-month incremental ARR was balanced across AI and cloud customers.
ARR from DNEs reached $640 million in Q4, which is now 62% of total ARR, growing 30% year over year. And our DNE NDR reached 102%, continuing to outperform developer NDR. And like I have been reporting for a while now, our largest customers in the DNE cohort are accelerating the fastest. Our $100,000 customers are growing at 58%, our $500,000 customers are growing at 97%, and our million-dollar customers who reached $133 million in ARR are growing at 123% year over year, all well ahead of market growth rates. And NDR also increases meaningfully as these customers scale. Q4 NDR was 102% for our $100,000 customers, 106% for our $500,000 customers, and 115% for our million-dollar customers.
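[Editor's note: net dollar retention, or NDR, as used above, compares a cohort's ARR today against the same cohort's ARR a year ago, netting expansion against contraction and churn. A minimal sketch of the calculation; the cohort dollar figures below are hypothetical, chosen only to reproduce the 115% NDR reported for million-dollar customers:]

```python
def net_dollar_retention(arr_start, expansion, contraction, churned):
    """NDR = (starting ARR + expansion - contraction - churned ARR) / starting ARR."""
    return (arr_start + expansion - contraction - churned) / arr_start

# Hypothetical million-dollar cohort: $100M starting ARR, $18M expansion,
# $3M contraction, zero churn (the call reports 0% churn for this cohort).
ndr = net_dollar_retention(100.0, 18.0, 3.0, 0.0)
print(f"{ndr:.0%}")  # 115%
```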
Churn for our $1 million customers was zero in Q4, and has averaged 0% over the last twelve months, which clearly shows that our top customers are now scaling with us and becoming our growth engine. This should also effectively debunk any misconception that our most successful customers will outgrow our platform. Recapping this section, we are accelerating past the $1 billion revenue run-rate milestone and our top customers are driving this acceleration. We are no longer defined just by entry-level developers experimenting on our platform. We are defined by high-growth cloud and AI-native companies running production workloads, scaling revenue, and building their businesses on DigitalOcean Holdings, Inc. Said simply, scaling our top customers was once a constraint.
Today, it is our growth engine.
On to the next point. We are on the right side of software disruption. There is a structural shift happening in software, and DigitalOcean Holdings, Inc. is emerging as a preferred platform for cloud and AI-native companies that are driving this disruption. The last generation of Software-as-a-Service, or SaaS, monetized per user, per seat. Value scaled with headcount. This next generation of AI-centric software monetizes per token or inference request. Value scales with intelligence delivered. As AI model capabilities accelerate, entire categories of horizontal and vertical software are being reinvented. Incumbents are reacting to transformational change by layering AI into their workflows, seeking to enhance their existing software. But AI-native companies are starting from first principles.
For them, AI is not a feature. It is the very engine that defines their product. Every time they deliver value, inference runs, tokens are consumed, and intelligence is produced.
DigitalOcean Holdings, Inc. is uniquely positioned to serve these disruptors, and that is evident in the traction we are getting from leading AI-native companies. We have signed and expanded production workloads with scaled cloud and AI-native companies like Character.AI, OrcaTo, and Hippocratic AI, companies with product-market fit, real revenue, and rapidly scaling demand. Our work with Character.AI demonstrates this clearly. We delivered a 100% throughput increase and roughly 50% lower cost per token for Character.AI on our production inference cloud powered by AMD Instinct GPUs at production scale. This is not a lab benchmark. This is on live traffic across tens of millions of customers.
This demonstrates our ability to support production-scale inferencing for leading AI companies with our differentiated performance, cost efficiency, and integrated AI and cloud platform built for inference-first production workloads.
Another AI native with a proven product-market fit is Hippocratic AI, who builds healthcare-focused conversational AI designed to support clinical workflows and patient engagement. Hippocratic AI selected DigitalOcean Holdings, Inc.'s agentic inference cloud to power HIPAA-compliant clinical AI workloads. This validates not just our performance, but our enterprise-grade security and compliance. For Hippocratic AI, we optimized their multimodal deployment on NVIDIA hardware, reinforcing the importance of vertical innovation from GPUs, networking, kernel optimization, cloud integration, and inference software.
These AI natives also scale very differently. While traditional cloud customers may take years to reach $1 million in ARR, AI natives can cross that threshold in months or even weeks. When inference is your product, demand compounds quickly. DigitalOcean Holdings, Inc. is purpose-built for these disruptors. As software becomes more intelligent and AI-centric, we are building the vertically integrated inferencing cloud designed to power the next generation of AI natives, putting us squarely on the right side of this AI-driven disruption. And our agentic inference cloud is catalyzing these disruptors.
Next, let me explain how we are enabling this. We do this by putting the cloud in Neo Cloud. Over the last couple of years, a new category of Neo Clouds has emerged that is largely optimized for one thing: large-scale AI model training. Dense GPU farms, high-performance networking, frontier AI model training workloads. This is an important layer of the AI stack. But serving inference is different. As AI diffuses into every software company, workloads shift from training a handful of frontier models to running millions of real-world applications. And real-world AI-centric software needs more than GPU farms. It needs compute, storage, databases, networking, observability, security, all working seamlessly together with predictable and transparent unit economics.
Over the past four quarters, we have evolved our agentic inference cloud to meet that reality. We have combined specialized inference infrastructure with our full-stack cloud platform, purpose-built for production AI, while staying true to what defines DigitalOcean Holdings, Inc.: simplicity, open standards, enterprise-grade performance and SLAs, and predictable and transparent unit economics.
A good recent example of this in action is OpenClaw, which recently took the world by storm by demonstrating the power of agentic software, giving us a glimpse into what the AI-centric software future will look like. OpenClaw is an open source AI agent framework that allows developers to run real-world, task-driven agents. When customers deploy OpenClaw on our solution, they need more than just GPUs. Because AI agents are stateful. They reason. They take action. They retain memory. They interact with third-party APIs. All this requires more than just a GPU farm. It takes a full cloud and AI stack working together side by side.
Customers increasingly understand this. As inference is the heartbeat of modern AI natives, it is their primary operating cost, their performance lever, and their competitive moat. Their production traction scales directly with model quality, inference performance, and unit economics. As they grow, they do not build their products around a single closed-source model, but rather orchestrate multiple models in real time, often leveraging open source models and mixture-of-experts approaches to optimize both accuracy and unit economics. Our platform delivers flexibility at every layer, from serverless inference APIs to dedicated clusters and GPU Droplets, allowing customers to precisely match performance and cost to their workload requirements.
We pair that with performant open source models, delivering high accuracy, strong throughput, low latency, and compelling unit economics. And this is not a stand-alone inference platform. It is deeply integrated with our full-stack cloud that we have hardened over the last dozen years so that customers can build, deploy, and scale their entire AI application in one integrated environment with enterprise SLAs. Our agent development platform takes them from experimentation to production with real-world AI agents. Underpinning all of this is a deep lineup of GPUs from NVIDIA and AMD, supported by a rapidly expanding global data center footprint built and operated with years of operational expertise supporting mission-critical workloads.
This integrated platform and flexibility of choice is precisely what makes DigitalOcean Holdings, Inc. a natural platform for agentic software.
Let me explain this again using OpenClaw as an example. Customers can build and deploy OpenClaw agents on our solution in two distinct ways, depending on their need for control, scale, and operational complexity. The first path optimizes on simplicity and speed. Customers can launch a preconfigured one-click GPU Droplet and have an OpenClaw agent running in minutes. This model gives full control over the environment, ideal for experimentation, customization, performance tuning, and teams that want direct access to the infrastructure layer. The second path optimizes for global scale. Customers can deploy OpenClaw on DigitalOcean Holdings, Inc.'s managed serverless platform, where DigitalOcean Holdings, Inc. handles provisioning, scaling, security, container orchestration, and operational management.
This approach is ideal for teams that are scaling a global application. Both approaches run on the same integrated cloud with access to managed databases for agentic memory, object storage for artifacts, virtual private cloud networking, observability, and GPU-backed inference. That is what vertical integration looks like in the inference economy. Not just providing bare metal GPUs, or even just generating inference tokens, but providing a secure, scalable, and manageable foundation for intelligent, stateful systems.
Within days of launching OpenClaw, nearly 30,000 native DigitalOcean Holdings, Inc. one-click OpenClaw Droplets were created. And that was just the starting point. Thousands of other OpenClaw deployments were activated by customers, signaling the emergence of a new ecosystem almost overnight. The success of OpenClaw is an early view of how the AI market will continue to evolve and can serve as a blueprint for AI-native businesses on how a new generation of software will be built around autonomous agents that orchestrate complex multistep workflows across systems, continuously reason with data and context, and execute tasks end to end with minimal human involvement.
As these AI-native companies move from proof of concept to production agents, the richness of the underlying platform, the security posture, manageability, scalability, and predictable unit economics become mission critical. And that is exactly where DigitalOcean Holdings, Inc. is fast emerging as the natural platform for building and scaling AI agentic software.
The competitive landscape is crowded with companies speaking to their ability to address the inference market. But our differentiation from these competitors is very clear. Neo Clouds rent out GPUs. Inference wrapper providers stop at inference APIs and model libraries. We continue to effectively compete with hyperscalers who bring scale but also come with complexity and cost structures that are aimed at traditional large enterprise companies. While each of these competitors addresses a component of the inference value chain, real-world agentic software requires a tightly integrated environment where inference, orchestration, persistence, networking, and security are designed to work together with simplicity, global scale, enterprise SLAs, and predictable unit economics. That is where DigitalOcean Holdings, Inc. wins.
This differentiation is clear to our customers, but it is also very clear in our financial profile. As a full-stack cloud provider that has operated mission-critical workloads for cloud and AI natives for over a decade, we look very different from a financial perspective than other players in the AI training market or components of the inference market. Where Neo Clouds have very high revenue concentration with just a few very large customers making up the vast majority of their revenue, DigitalOcean Holdings, Inc.'s top 25 customers represent only 10% of our revenue.
While GPU rental providers earn bare metal revenue and margins on their infrastructure, DigitalOcean Holdings, Inc. drives higher revenue and margin from our full-stack inference and cloud solution. And when a growing number of Neo Clouds are investing massive amounts of capital and burning near-term profits and cash for future returns, our solution is already profitable and generating cash.
Our traction with cloud and AI natives is no accident. It is the result of relentless, focused investment and disciplined execution. We recently strengthened our executive team by adding Vinay Kumar as our Chief Product and Technology Officer. As a founding member of Oracle Cloud Infrastructure, or OCI, Vinay brings deep hyperscale expertise and leads our product, platform, infrastructure, and security teams. Having built a hyperscaler from the ground up at OCI, he looks forward to scaling up another one at DigitalOcean Holdings, Inc., one that is purpose-built to meet the complex needs of cloud and AI-native workloads globally.
In the meantime, our R&D team has been very busy continuing to ship products and features that are helping our customers scale on our platform. On Core Cloud, we launched remote MCP support, embedding AI directly into the control plane, enabling secure zero-setup infrastructure management. On our AI platform, we introduced the agent development kit and enhanced agent evaluation tools to help customers move from experimentation to production with measurable performance and reliability. With GPU observability, managed NFS, and multi-node GPU support, we significantly expanded our ability to run large-scale, mission-critical inference in production. This is what vertical integration looks like: infrastructure, inference, observability, agent tooling, all built to seamlessly work and scale together. And we are just getting started. We will share the next wave of innovation on our agentic inference cloud at our next Deploy conference in San Francisco on April 28, as we continue building the platform purpose-built for the inference economy.
Our differentiation is durable and will continue to grow as the market shifts from training to inference. To give investors clearer visibility into this momentum, we are introducing a new metric, AI customer revenue. AI customer revenue includes all revenue from customers leveraging our AI products, including both inference and core cloud services, because AI natives do not just buy GPUs. They build, operate, and scale applications which need a full-stack inference cloud. In fact, 70% of our AI customer ARR in Q4 2025 was already coming from inference services or general-purpose cloud products rather than from bare metal GPU rentals.
And these customers are growing rapidly, with Q4 AI customer ARR reaching $120 million, growing 150% year over year, now making up 12% of total ARR.
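[Editor's note: these figures cross-check against the run-rate milestone mentioned earlier. If $120 million is 12% of total ARR, implied total ARR is roughly $1 billion; the 150% growth rate also implies the year-ago AI customer ARR. A quick sketch using only the numbers from the call:]

```python
ai_arr = 120.0    # AI customer ARR, $M (Q4 2025)
ai_share = 0.12   # AI customer share of total ARR

# Implied total ARR: 120 / 0.12 -> ~$1B, consistent with the $1B run-rate milestone
implied_total_arr = ai_arr / ai_share
print(round(implied_total_arr))  # 1000

# Year-ago AI ARR implied by 150% year-over-year growth: 120 / 2.5 -> $48M
prior_year_ai_arr = ai_arr / 2.5
print(prior_year_ai_arr)  # 48.0
```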
In summary, we do not just rent GPUs. We run production AI. We are not a GPU landlord. We are an AI cloud platform. We deliver hyperscaler-grade infrastructure and reliability, purpose-built inference services co-located and integrated with a full-stack general-purpose cloud designed for the next generation of AI natives. Or put simply, DigitalOcean Holdings, Inc. puts the cloud in Neo Cloud.
Now on to my final takeaway. We are building a durable and profitable growth engine. At our Investor Day last April, we laid out a plan to return the business to 18% to 20% growth by 2027. On our last earnings call, we pulled that growth projection forward by a full year, guiding that we would reach that 18% to 20% growth range in 2026. And just nine months after setting that original plan, we have already reached the bottom end of the target range at 18% growth in 2025, achieving it two full years ahead of our original target. And the momentum we are seeing gives us even greater confidence.
We now expect to deliver 21% revenue growth for the full year 2026, with an exit growth rate of 25% plus by Q4 and reaching 30% growth in 2027.
As we ramp into our committed 31 megawatts of incremental capacity this year, there will be measured near-term pressure on gross margin and adjusted EBITDA. But we remain confident in our 18% to 20% unlevered adjusted free cash flow margin guide for the year. The near-term pressure is just a physics problem, given the start-up cost timing and revenue ramp characteristics of quickly adding new capacity. It is the natural result of pursuing high-return growth opportunities. But we remain disciplined operators. Demand continues to far outstrip supply, and we will take advantage of opportunities to further accelerate growth when they present themselves.
We will do so responsibly, and we will continue to pursue investments with attractive returns, match investments with revenue timing, maintain a strong balance sheet, and allocate capital with rigor, even as we accelerate. Growth and discipline are not trade-offs for us. They are both operating principles. With that, I will turn it over to W. Matthew Steinfort to walk through the quarter and the year in more detail, and to provide additional color on our updated outlook. Matt, over to you.
W. Matthew Steinfort: Thanks, Padmanabhan. Morning, everyone, and thanks for joining us today. As Padmanabhan just shared, we are a very different company today than we were just a few years ago. It is an exciting time at DigitalOcean Holdings, Inc. We are a rapidly growing and profitable company that is incredibly well positioned to take advantage of the hyperscale-sized inference market opportunity. This excitement is clearly evident in both our recent financial performance and in our higher near-term and long-term outlooks. Revenue growth has reaccelerated. We have reversed declines from our top customers, turning them into a key driver of our growth. We have scaled our AI customer ARR to $120 million, growing at 150% year over year.
And we have done this profitably, growing adjusted EBITDA and adjusted free cash flow on both an absolute and a margin basis. While we are pleased with our progress over the past several years, it is our recent momentum that gives us the confidence to further increase our near-term and long-term outlooks.
Fourth quarter revenue was $242 million, up 18% year over year, and we closed 2025 with full-year revenue of $901 million. We delivered sustained acceleration through 2025, driving a 500 basis point increase in Q4 growth from the same period just a year ago. We delivered the accelerated revenue growth with strong margins and growing profits, even as we increased our investments. Fourth quarter gross profit was $102 million, up 13% year over year with a gross margin of 59%. For the full year, gross profit was $540 million, up 16% year over year with a gross margin of 60%. Adjusted EBITDA in the fourth quarter was $99 million, an adjusted EBITDA margin of 41%.
Full-year adjusted EBITDA was $375 million, a 42% adjusted EBITDA margin. Trailing twelve-month adjusted free cash flow was $168 million in Q4, or 19% of revenue. We maintained our attractive free cash flow margins in 2025, in part by expanding our financial toolkit to include equipment financing. This better aligns infrastructure investment timing with the revenue that it supports. We will continue to utilize a combination of upfront asset purchases and equipment leasing as we invest to fuel our growth.
We continue to be disciplined financial stewards for our investors. We prudently use stock-based compensation to attract and retain our critical talent while repurchasing shares to mitigate dilution. SBC declined to 9% of revenue in 2025, down from 12% in the prior year. To put that number in context, we have a 33% margin if you subtract SBC from adjusted EBITDA. At 33% margin, we are just above the 80th percentile of a broad software comp set on an adjusted EBITDA less SBC basis, and we are well above the 13% median of that group. Non-GAAP weighted average shares outstanding increased slightly from 103 million to 105 million over the same period.
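[Editor's note: the SBC-adjusted margin arithmetic here is simple percentage subtraction, since both figures are expressed as a share of revenue. A sketch using the full-year figures from the call:]

```python
adj_ebitda_margin = 0.42    # 42% FY2025 adjusted EBITDA margin
sbc_pct_of_revenue = 0.09   # SBC at 9% of FY2025 revenue

# Adjusted EBITDA less SBC, as a margin of revenue: 42% - 9% = 33%
margin_less_sbc = adj_ebitda_margin - sbc_pct_of_revenue
print(f"{margin_less_sbc:.0%}")  # 33%
```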
To reduce dilution, we repurchased 2.4 million shares in 2025 for $82 million, an average price of approximately $35. Note that we ended 2025 with our full $100 million buyback authorization in place, and that authorization continues through July 31, 2027. While we continue to view share repurchases as an important long-term tool, our near-term capital allocation priorities are squarely focused on organic growth and balance sheet flexibility.
GAAP diluted net income per share in the quarter was $0.24 and $2.52 for the full year, a 183% year-over-year increase. Non-GAAP diluted net income per share in the quarter was $0.44. For the full year, non-GAAP diluted net income per share was $2.12, a 10% year-over-year increase. As a quick reminder, recall that our 2025 net income per share metrics were impacted by the actions we took in 2025 to strengthen our balance sheet. In 2025, we proactively addressed the upcoming maturity of our 2026 convertible notes. We did this through a series of successful financing transactions that have given us significant balance sheet flexibility.
These transactions included the establishment of an $800 million bank facility, issuance of $625 million of 2030 convertible notes, and the repurchase of the majority of our then outstanding 2026 convertible notes. Excluding the effects of these financing transactions, non-GAAP diluted net income per share would have been $2.29 for the year and $0.53 for the quarter.
With our 2026 notes largely addressed, we ended the year with a strong balance sheet. We have sufficient liquidity and projected cash generation to address the remaining $312 million balance of our outstanding 2026 convertible notes. Having drawn down the remaining $120 million on our Term Loan A in February, we will repurchase or redeem the remaining 2026 notes for cash before or at the maturity in December 2026. Beyond this, we have no other material maturity until 2030, and we entered 2026 with approximately 3.2 times net leverage.
Before I get into guidance, I want to highlight an action we are taking to further concentrate our investments on our key growth levers. We are sunsetting a small legacy dedicated bare metal CPU offering. We expect approximately $13 million of ARR to roll off by the end of 2026. As this revenue is non-core, we have excluded this legacy product revenue from our customer-specific year-over-year growth metrics.
Shifting back to guidance, we entered 2026 with tremendous momentum and confidence, grounded in the material demand we are seeing for our agentic inference cloud. We also continue to improve visibility on our near-term revenue growth, as we increased RPO in Q4 to $134 million, up 121% sequentially and up close to 500% year over year. With this growing demand and visibility, we are again increasing our near-term growth outlook. For the first quarter of 2026, we expect revenue in the range of $249 million to $250 million, which is approximately 18% to 19% year-over-year growth.
We expect first quarter adjusted EBITDA margins in the range of 36% to 37%, and expect non-GAAP diluted net income per share of $0.22 to $0.27 based on approximately 111 million to 112 million weighted average fully diluted shares outstanding.
For the full year 2026, we expect revenue growth between 19% to 23%. This is 21% at the midpoint, beyond the 18% to 20% growth outlook that we shared just last quarter, and it is important to highlight that this would be 21% to 24% projected growth if we exclude the impact of our discontinued legacy bare metal CPU offering. We will deliver this accelerated growth while maintaining attractive margins. We project full-year 36% to 38% adjusted EBITDA margins, and 18% to 20% unlevered adjusted free cash flow margins, which is $207 million at the midpoint. We expect non-GAAP diluted net income per share of $0.75 to $1.00 on 111 million to 112 million weighted average fully diluted shares outstanding.
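[Editor's note: the guidance midpoints can be reconstructed from the FY2025 base reported earlier in the call. A sketch; all inputs are figures from the call, rounding is mine:]

```python
fy2025_revenue = 901.0                         # FY2025 revenue, $M
growth_low, growth_high = 0.19, 0.23           # FY2026 growth range
growth_mid = (growth_low + growth_high) / 2    # 21% midpoint

# Implied FY2026 revenue at the midpoint: 901 * 1.21 -> ~$1,090M
fy2026_revenue_mid = fy2025_revenue * (1 + growth_mid)
print(round(fy2026_revenue_mid))  # 1090

# 18%-20% unlevered adjusted FCF margin, taken at the 19% midpoint
fcf_mid = fy2026_revenue_mid * 0.19
print(round(fcf_mid))  # 207, matching the $207M midpoint cited
```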
This growth outlook is based on the incremental data center and GPU capacity investments that we have already committed that will come online over the course of 2026. As we look at the quarterly progression within 2026, it is important to understand the timing of this incremental capacity and how that timing impacts our financials. We are bringing 31 megawatts of new data center capacity online in three new facilities in 2026. The smallest of our three new facilities will start ramping revenue in the second quarter. The remaining two start ramping revenue in the second half of 2026.
Aligned with this capacity ramp, we expect second quarter revenue growth to remain around 18% to 19%, with revenue growth then ramping in Q3 before exiting the year at 25% plus in Q4. While there are always supply chain and implementation timing risks to manage, we believe our implementation timeline is realistic.
Increased data center lease expense and equipment depreciation expense will both hit our financials several months before we generate our first revenue in these facilities. Given this lag between expenses and revenue, cost of goods sold from higher GPU-related depreciation and operating expenses from new data center operating leases will increase in the early part of the year as we ramp into the new capacity. These increased costs will cause the expected upfront drops in gross margin and net income that we have seen when we turned up previous data centers. The initial impact will just be larger as we are turning up more capacity at one time than we have done in the past.
Near-term adjusted EBITDA margins will also be impacted somewhat by these dynamics, although the impact is smaller because adjusted EBITDA is only affected by the higher data center operating lease expense. Net leverage is projected to be above four times in the short term as we add finance lease obligations to fund our GPU and CPU investments, which increases net debt several months ahead of the revenue and adjusted EBITDA ramp. We anticipate returning to below four times net leverage over the medium to long term as we increase utilization in these data centers and ramp revenue and adjusted EBITDA.
We will achieve these growth targets by focusing on our two primary growth levers: scaling our top DNE customers and expanding our base of AI-native customers. We will focus our investments on meeting the needs of our top DNE customers so that they can continue to scale on DigitalOcean Holdings, Inc. as they grow their own business. We will continue to invest both in our differentiated agentic inference cloud and in the data center and GPU capacity required to support AI natives. While we are excited by our growth potential in 2026, we are just getting started. As we reach full utilization on our existing committed capacity, we expect to reach 30% revenue growth in 2027.
We will drive this growth while delivering projected 20% plus unlevered adjusted free cash flow margins, which would make us a Rule of 50 plus company in 2027. We will achieve this while making smart investments, earning attractive margins, and maintaining a healthy balance sheet. We have both the tools and the discipline in place to continue to take advantage of opportunities as they arise. We will continue to share details on our leading indicators and our progress as we execute. We are increasingly confident in our ability to build a durable and profitable growth engine. With that, I would like to turn it back over to Padmanabhan T. Srinivasan to close us out before we get to Q&A.
Padmanabhan T. Srinivasan: Thank you, Matt. Before we move to Q&A, let me leave you with a few thoughts. We crossed a $1 billion revenue run rate in December. But that milestone is not the headline. The headline is where we are heading. We are no longer a niche developer cloud. We are the platform that high-growth cloud and AI natives are increasingly choosing to run production AI workloads at scale. We are projecting to exit 2026 at 25% plus revenue growth with a clear path to 30% growth in 2027 with the existing committed data center capacity alone. Our top customers are accelerating and are growing significantly faster than the market on DigitalOcean Holdings, Inc. We have outgrown the old distillation narrative.
Scaling our top customers was once a constraint. Today, it is our growth engine. Our million-dollar customers are at $133 million ARR, growing at 123% year over year. The world of software is shifting from seats to tokens, from experimentation to production, from model training to inferencing at scale. And in that shift, the winners in inference will be more than just GPU landlords. They will be vertically integrated AI cloud platforms that deliver performance, great unit economics, and simplicity that embraces open source. Exactly what we have and what we continue to build.
Our AI customer ARR reached $120 million in Q4, growing 150% year over year, with 70% of that coming from inference and core cloud products, not from bare metal. And we are doing it without chasing the GPU training arms race, without sacrificing discipline, without compromising profitability. We are building something durable. AI is reshaping entire industries. And we are built for this shift. I am incredibly excited to be part of DigitalOcean Holdings, Inc. at this critical inflection point where a new era of software is being ushered in. I take incredible pride in building a platform that AI pioneers are increasingly leveraging to disrupt software.
I thank all of you for your partnership and support, and I hope you will join us in San Francisco on April 28 to learn about our platform, our innovation, and our customers. With that, let us open it up for your questions.
Operator: Thank you. Ladies and gentlemen, we will now begin the question-and-answer session. At this time, I would like to remind everyone, in order to ask a question, please press star followed by the number one on your telephone keypad. And if you would like to withdraw your questions, simply press star one again. We would like to ask everyone to limit themselves to one question and one follow-up only to accommodate all questions. Thank you. Our first question comes from the line of Raimo Lenschow with Barclays. Please go ahead.
Raimo Lenschow: Perfect. Thank you. Congrats from me. That is amazing how the company is transforming right in front of my eyes. And, Padmanabhan, can we just talk a little bit about the customers that you are seeing? The talk in the market, a lot of that is just OpenAI, Anthropic, maybe Google. And they are basically doing everything, nobody else really comes up. When you talk with your customers, looking at the pipeline of customers out there, how do you see that inference market evolving in terms of how broad that will be? Is it just one topic doing everything, or what are you seeing out there in the field? And then I had one follow-up for Matt.
Padmanabhan T. Srinivasan: Yes. Raimo, thank you for the question. It is a very thoughtful way to get started. Of course, OpenAI, Gemini, and Anthropic get all the headlines in the mainstream news coverage. But as we talk to AI-native companies, including the examples I used in my prepared remarks, and you will hear a lot more about this at our Deploy conference with very specific benchmarks and data, what we are hearing is that while these closed-source models are really, really good, the open source alternatives are extraordinarily important for managing unit economics as these companies scale, because the cost per token for the open source models is about 90% cheaper.
And accuracy is increasingly comparable as these open source models mature. So we have many AI-native customers that are using, as I mentioned, a variety of open source models in real time when they are doing inferencing. They want us to manage a multitude of open source models and route requests intelligently among them, using the expensive closed-source models only on a case-by-case basis: certain prompts are better served by the closed source models, and everything else gets routed to open source models so that they end up with balanced unit economics.
And it is by no means a fringe phenomenon: if you look at data from OpenRouter, 30% of the traffic today is already served by open source. That is without a lot of optimization, and without companies like DigitalOcean Holdings, Inc. really stepping up and taking full ownership and guardianship of these open source models. We are doing a lot of work in this regard over the next couple of months, and you will see it at our Deploy conference. But this 30% is only going to grow. As these real-world AI-native workloads explode, we are going to see a lot of open source adoption.
Even in the OpenClaw deployments that we are seeing, there is very healthy adoption of open source models serving these OpenClaw agent farms. So it is really interesting to see how this is evolving. And I want to say there is definitely a world beyond these closed source models. The open source ecosystem is thriving, and it is only going to grow in strength from here on.
Raimo Lenschow: Yes. Okay. Perfect. Thank you for that, Padmanabhan. And, Matt, one question that comes up a lot at the moment is on the weighted Rule of 50 numbers. If you look at your weighting, there are a lot of questions about the free cash flow margins that you are thinking about for 2027. Can you maybe go a little deeper there? Because that comes up a lot here at the moment.
W. Matthew Steinfort: Yes. Thanks, Raimo. The weighted Rule of 50 is pretty simple for us. We multiply revenue growth by 1.5 and add 0.5 times the free cash flow margin, which effectively counts a point of revenue growth as three times as valuable as a point of free cash flow margin. But the important thing to note is that while we talk about the weighted Rule of 50, if you look at the growth projections we provided, we are a regular, unweighted Rule of 50 company as well, with projected 30% revenue growth in 2027 and 20% unlevered free cash flow margins.
So that is, I think, a very big testament to the growth opportunity that we have in front of us, and also to the financial discipline that we have been employing. The ability to accelerate revenue growth while still maintaining very attractive EBITDA margins and very attractive free cash flow margins is part of the model, and it is the benefit of us not chasing the GPU training arms race. We believe that we will differentiate based on software and a differentiated platform, and we see a tremendous opportunity to drive really attractive margins as we expand and invest appropriately.
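The arithmetic Matt walks through can be sketched in a few lines. This is a minimal illustration using the projected 2027 figures of 30% growth and a 20% unlevered free cash flow margin; the function names are mine, not DigitalOcean's:

```python
# Weighted Rule of 50 as described on the call: growth is weighted 1.5x
# and free cash flow margin 0.5x, so growth counts three times as much.
def weighted_rule_of_50(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    return 1.5 * revenue_growth_pct + 0.5 * fcf_margin_pct

def unweighted_rule(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    # The plain version: growth plus margin.
    return revenue_growth_pct + fcf_margin_pct

growth, margin = 30.0, 20.0  # projected 2027: 30% growth, 20% unlevered FCF margin
print(weighted_rule_of_50(growth, margin))  # 1.5*30 + 0.5*20 = 55.0
print(unweighted_rule(growth, margin))      # 30 + 20 = 50.0
```

Both scores clear 50 on the projected 2027 numbers, which is the point Matt is making: the company meets the weighted rule and the plain rule.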
Operator: Your next question comes from the line of Kingsley Crane with Canaccord Genuity.
Kingsley Crane: Hi. Thanks for taking the question, and congrats to the whole team on results. I think you have done an excellent job with the investor update. I actually want to circle back to the inference cloud dynamic with open source models. We have been looking at OpenRouter data as well. Some of these models come and go pretty quickly. How many max can you cater to? How are you thinking about quickly providing support for those classes and models? Is there any operational tax to quickly provide support? And then just how to think about them driving growth both from a revenue and profit standpoint. Could there be more of a Jevons paradox dynamic there with the lower-cost models? Thanks.
Padmanabhan T. Srinivasan: Yes. Thank you, Kingsley. That is a good question. You asked two different questions. One was about the operational overhead of providing day-zero support for these models. We have been extending day-zero support to the big majority of these open source models as they come out. There is a little bit of manual overhead in supporting them, but a large portion of our test and readiness harness is automated, and it is only going to grow in automation. You will see a lot more detail around this at our Deploy conference.
And the second part of your question was really around the Jevons paradox: as these open source models proliferate, how should we think about the growth profile of not just our platform, but also these companies? I think it is only going to aid the deployment of AI-native software in pretty much every segment of the market. And we should not think about AI-native workloads as open source or closed source. What we are seeing is a mixture of both: for the same use case, even for the same inference call, we do intelligent routing across parts of the application stack based on the prompts.
Right now, it is fairly manual, but we are working on different types of algorithms to route it in a much more intelligent and smart fashion. So you will see a universe going into the future where prompts are going to get routed to different models all working together at the same time to deliver high throughput, low latency, acceptable accuracy, with great unit economics of token throughput. So this is coming. We are already seeing it from many of our AI-native workloads. And that is how I see the market evolve as open source models continue to catch up with these closed-source systems.
The closed-source systems are really important to be on the bleeding edge of innovation, but the vast majority of this long-running, agentic software like OpenClaw can very materially run on these open source systems.
Kingsley Crane: Thanks, Padmanabhan. That is really helpful. And then for Matt, you know, obviously, $22 million of ARR per megawatt is a clear differentiator. I am curious now that Atlanta is close to full utilization. Any insights you have on just what a fully utilized megawatt can look like from a revenue efficiency standpoint for AI? Thanks.
W. Matthew Steinfort: Yes. That is a great question, Kingsley. If you look at the public data that is available for a Neo Cloud, which is more of a bare metal model, they show, I think, $9 million to $12 million in ARR per megawatt. Clearly, we believe we can deliver more than that. If you look at the guidance that we have given, what you will see is that while it is $22 million now, that is with right around 10% of our ARR in AI. So as we grow AI, it will come down, but we will add incremental ARR per megawatt greater than what you are seeing from the Neo Clouds.
But the drop from a bigger mix of AI by the end of 2027, once we are fully ramped with the incremental 31 megawatts, is only a couple million; it will be around $20 million. And if you think of us not as having separate AI investments and core cloud investments, but as an overall AI cloud platform that has GPUs, CPUs, core compute, bandwidth, and all the capabilities that you need, we still expect to deliver materially higher ARR per megawatt than what you are seeing in the Neo Cloud space.
So we feel really good about the returns that we are getting and the margin that we are able to drive. This is only going to increase. I mean, you saw the chart in the deck about how much of the AI customer revenue is coming from non–bare metal. That is 70%. That is only going to increase, and that smaller sliver of core cloud is only going to increase as customers become entrenched on our platform and they start putting in database and storage and some of the other higher-margin capabilities that are sticky. We are very excited about our ability to serve the full addressable wallet of the AI natives.
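The comparison Matt draws can be laid out in a short sketch. The numbers are the ones quoted on the call; the "full ramp" figure of roughly $20 million is his approximation, and the rest is straightforward arithmetic:

```python
# ARR per megawatt, in $M, as discussed on the call.
neo_cloud_low, neo_cloud_high = 9.0, 12.0  # range cited for bare-metal Neo Clouds
docn_today = 22.0                          # DigitalOcean today (~10% of ARR in AI)
docn_full_ramp = 20.0                      # approximate level once the +31 MW is filled

# Even after the AI mix shift pulls the figure down a couple million,
# it stays well above the top of the bare-metal range:
premium_vs_best_neo = docn_full_ramp / neo_cloud_high
print(f"{premium_vs_best_neo:.2f}x the top of the Neo Cloud range")  # 1.67x
```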
Operator: Our next question comes from the line of Josh Baer with Morgan Stanley.
Josh Baer: Congrats on the strong results and impressive targets. Just wanted to clarify: the incremental 31 megawatts all comes online by the end of 2026, driving that 25% revenue growth exiting the year, but then as utilization increases, that capacity is enough to reach the full 30% growth in 2027 revenue?
W. Matthew Steinfort: That is absolutely right, Josh. You nailed it. We said in the call that the smallest of the three facilities, which is six megawatts, is going to start ramping revenue in the second quarter, and the other two start ramping in the second half. With what we believe are appropriate assumptions around the timing and the ramp, we will hit 25% plus in Q4 as an exit growth rate. And then if all we did was fill those facilities up, we would hit 30% for the full year in 2027, and we feel very good about the returns we would generate and the growth trajectory we would be on at that point.
Josh Baer: Okay. That is helpful. And was just hoping you could sort of review some of what Vinay Kumar's top priorities are at this point. There have been so many positive changes from a product and innovation perspective over the last couple of years. What are his priorities? What changes should we expect going forward?
Padmanabhan T. Srinivasan: Yes. Thanks, Josh. So as I was mentioning in my prepared remarks, given his background at Oracle Cloud, he has really hit the ground running. His top one or two priorities are going to be continuing to build out the inference cloud. You will see a lot of very detailed updates on April 28 at our Deploy conference on how the next generation of this inference cloud capability is going to look. The team is super heads down and busy working on it now.
We also will continue to raise the bar on our core cloud capabilities, because our digital native enterprise companies are also scaling tremendously on our platform, and they require continuous innovation from our side on advanced things like different types of databases, the scalability of our database-as-a-service, and various parts of our core cloud infrastructure, like high-performance storage and network file systems. So one of the things Vinay is working on is delivering innovation in our core infrastructure that is applicable to both AI natives and cloud natives. There is a huge intersection there.
Companies like the AI natives that are rapidly scaling on our platform require very similar things, high-performance storage, for example. I do not want to pre-announce what we are working on, which we will unveil on April 28, but a lot of it is very similar to what our cloud-native companies can also benefit from. So there is quite a robust lineup of capabilities that we are working on, both for the inference cloud and for underlying infrastructure enhancements that will be applicable to digital native enterprise companies. That is what he is focused on delivering.
As I mentioned, given his background, he has hit the ground running in terms of ramping up the innovation on the core inference cloud.
Operator: Our next question comes from the line of Wamsi Mohan with Bank of America. Please go ahead.
Wamsi Mohan: Yes. Thank you so much, and great to see this growth acceleration here. Firstly, maybe Padmanabhan, just visibility around the 30% growth. How should we think about that in terms of, I mean, historically, obviously, DigitalOcean Holdings, Inc. is a very different company today, but historically, you really did not have long-term contracts, long-term visibility. You are talking about very meaningful acceleration as you go to 30% plus. Maybe if you could dissect some of the underlying drivers of what you are looking at, which give you the confidence, and maybe just split that between Infrastructure-as-a-Service and Platform-as-a-Service. That would be maybe a different way to slice and give people a view over there. And I have a quick follow-up.
Padmanabhan T. Srinivasan: Thank you, Wamsi. I think Matt broke down some of the physics of the acceleration. We have new capacity that is ramping up throughout this year and going into next year as well. So that gives us a lot of visibility. Maybe I should take a step back and talk about the fact that the demand that we are seeing now is very, very robust, and it far exceeds the supply that we currently have from an infrastructure point of view. So we are being super responsible in ramping up our capacity. We are being super aggressive in the timelines.
We are working very closely with the data center providers and the OEMs to get this capacity online as fast as possible. So given the schedule that we are currently working on, we feel very confident that as we bring this capacity online, we have enough demand in the pipeline to fill it up with very responsible economics. That is what gives us the confidence to provide the outlook of 25% plus exiting this year and 30% for next year. And our RPO has been going up steadily, which is one leading indicator.
But also, I should add the fact that inferencing is very different. These are real-world workloads. As opposed to training where a company can just raise venture capital money and just commit to a two-year, three-year contract to burn dollars to build a frontier model, inferencing workloads are typically paid by end customers. So for us, that is super exciting because we are typically working with post–product-market fit companies that have real revenue, working with real consumers or business-to-business like Hippocratic AI. They are deploying in some of the world's largest healthcare providers. So we know that as their demand picks up, they are going to need more and more inference capabilities.
So our confidence really stems from the visibility we are getting into our customers and the real-world inference demand. So I feel if you look at it from a customer perspective or you look at it from capacity point of view, those are the data points that we use to triangulate our guidance for exiting this year and next year.
Wamsi Mohan: Okay. Thanks, Padmanabhan. And then maybe one quick one for Matt. Can you just talk a little bit about the margin progression? I guess you mentioned some near-term margin compression given your capacity ramp. Should we expect that will persist through all of 2026 given the timing of the ramp, and then as you ramp into 2027, we should be back to 2025 levels? Thanks so much.
W. Matthew Steinfort: Thanks, Wamsi. Yes. There is certainly going to be some near-term pressure, as we said, on gross margin, for example. But the metrics that we think are the best indicators of profitability for us continue to be adjusted EBITDA margin and free cash flow margin, both on an unlevered basis and a levered basis.
And if you look at the margin guidance that we provided for the full year 2026 and the ranges for 2027, you see exactly what you just described, which is we will have a little bit more pressure this year as we ramp, but then as we grow into that and the utilization increases, that catches back up, and then you should see an upward trajectory on the margins.
The mix of AI services versus the core cloud, that is a longer-duration impact because as we add more AI capabilities and more AI revenue, the AI margins are lower than the core cloud margins, so you will have a little bit of a mix impact in addition to the timing impact. But all of that is netted out in the very, very strong adjusted EBITDA margins that we are projecting and the very strong adjusted free cash flow margins and unlevered adjusted free cash flow margin.
Operator: Our next question comes from the line of Gabriela Borges with Goldman Sachs. Please go ahead.
Gabriela Borges: Hey. Good morning. Congratulations to the DigitalOcean Holdings, Inc. team. Padmanabhan, I have a little bit of a long-term question for you. If I think about DigitalOcean Holdings, Inc.'s core value proposition, democratizing access to cloud, that has been true for many years now. My question for you is, what do you think is structurally different with the AI compute cycle that will allow DigitalOcean Holdings, Inc. to essentially capture and hold on to a higher share of wallet in AI inference compute relative to the cloud cycle? And the reason I am asking is because there are 32 companies that show up in this SemiAnalysis cost-to-max benchmarking report. We know that the market is early.
We know that the AI inference cycle is early. How do you think about DigitalOcean Holdings, Inc.'s ability to durably capture share of wallet relative to the other 31 competitors over the long term? Thank you.
Padmanabhan T. Srinivasan: Thank you, Gabriela. I am sure that if SemiAnalysis had been around in 2011 or 2012 when cloud was taking off, there would have been 32 IaaS providers as well. And we went from that to a $1 billion run rate in twelve or thirteen years. If I take a step back and think about how durable our mission is in the world of AI, let me hit on a few points. I fundamentally believe that inference workloads are real-world applications, and as an application scales, you need a variety of different things all working together.
AI natives do not want to just use one provider for token generation, go to another provider for database, go to a third provider for their application experience, and go to a fourth provider for some of the other core storage and other artifacts. They want an integrated cloud that is co-located, and all of these primitives to work hand in hand together so that they can focus on building their business and not mess around with infrastructure.
The other part that I feel very confident about is something we are going to talk about a lot at our Deploy conference on April 28, which is the emergence of a mixture of AI models required to run efficient unit economics in inferencing mode. Open source models are about 90% more cost effective than closed-source models, and open source already has 30% market share with just a handful of open source models on the market. So I feel this is only going to go from strength to strength.
And that has been a big differentiator for DigitalOcean Holdings, Inc. throughout the years as well. We talk about 32 companies showing up in some of these market landscapes. But when OpenClaw went viral a few weeks ago, we were one of the natural places where developers started deploying it. As I said, we have more than 30,000 of these agents running, and we barely did anything from a marketing point of view. In fact, we did no marketing.
All we did was scramble our jets to make sure that developers have a first-class experience deploying these agents on our platform, and we were such a natural choice for running this long-running agentic software because it needs a lot more than just access to GPUs or inference tokens. So I feel very good that 70% of our AI revenue is already from non–bare metal. That should give us a lot of confidence that our platform services, our higher-margin services, are resonating with our customers. They are increasingly coming to us as they recognize that bare metal is not going to be sufficient for them.
Gabriela Borges: Yes. Really good color. Thank you. I will stay on this 70% non–bare metal data point, and I will ask the question to Matt. Payback period on GPUs. The last time we talked about this, I think you told us it was around three years, but that was before you all had focused on maximizing or improving the ARR per megawatt of capacity. So my question for you is how are payback periods on GPUs changing? Thank you.
W. Matthew Steinfort: Well, that is a great question, Gabriela. One of the things I want to make sure everybody understands is why we lease gear. Why are we doing equipment leasing? It is to address exactly this challenge. If you spend hundreds of millions of dollars on GPUs upfront and then wait three or four years to pay them back, that is one model. That is not the model that we are pursuing.
Our model is leasing the gear, which means we are earning more ARR per megawatt for the associated GPU investment than what a Neo Cloud would earn, but we are also earning cash within months of actually deploying. As soon as we deploy the gear and revenue starts ramping, we are paying for it on a monthly basis over four or five years while earning more than two times that amount in revenue. So from a payback perspective, we still have the same payback hurdles that we have had before: you would like to see three-year paybacks on most of your investments.
You might be willing to extend that to win some early customers. But if you actually think about the mechanics, that is a little bit of an intellectual exercise: we are effectively paying the gear back within a month or two, because we are earning more cash than we are spending on it. That is the reason you align your investment with revenue.
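The cash mechanics Matt describes can be sketched with hypothetical numbers. The gear cost below is an illustrative assumption, not a disclosed figure; only the "more than two times" revenue relationship and the four-to-five-year lease term come from the call:

```python
# Hypothetical monthly cash picture for leased GPU gear. Because the gear is
# financed over the lease term rather than paid upfront, the deployment turns
# cash-positive from roughly the first month revenue ramps.
gear_cost = 100.0        # $M, hypothetical GPU purchase financed via lease
term_months = 48         # paid monthly over four years (call cites four to five)
revenue_multiple = 2.0   # "more than two times" the gear cost in revenue

monthly_lease_payment = gear_cost / term_months
monthly_revenue = revenue_multiple * gear_cost / term_months
monthly_net_cash = monthly_revenue - monthly_lease_payment

print(f"monthly lease payment: ${monthly_lease_payment:.2f}M")
print(f"monthly net cash:      ${monthly_net_cash:.2f}M")  # positive from month one
```

Contrast this with the upfront-purchase model, where the full $100M would leave the door in month zero and take roughly the three-year hurdle period to recover.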
Operator: Our next question comes from the line of Radi Sultan with UBS. Please go ahead.
Radi Sultan: Yes. Awesome. Thanks for taking the questions. One for Padmanabhan, kind of on a similar line of questioning, just on the longer term: how are you thinking about planning capacity over the next several years?
Padmanabhan T. Srinivasan: Yes. Thank you. We look at many, many factors, but the dominant one is customer demand: what our customers are dealing with and how they are projecting their needs. That is a big, big input factor for us. The second is the footprint itself: for inferencing, you obviously need a really good geographic spread and co-location. For all of our new data centers, we have both core cloud and AI capacity running on the same server stack, so having all of these things co-located is an important aspect for us.
The third thing we always look at is how we are going to keep up with the generational leapfrogs of OEMs, including AMD and NVIDIA and perhaps others in the future. So these are all important factors that we take into account as we consider how our footprint is going to look over the next several years. And we are always making this evaluation. We are looking at various options as we build out our long-term plan. And as I said, the primary driver is always looking at our customer needs, customer demands, what kind of workloads are they ramping up. The demands for their application is a big driver for us.
So those are some of the input factors that we use to plan our capacity.
Radi Sultan: Got it. Just a quick follow-up for Matt. Does the 2027 EBITDA margin and free cash flow guidance contemplate any additional capacity investments next year? Or is that just reflective of the 31 megawatts you are bringing online this year?
W. Matthew Steinfort: It is just reflective of the 31 megawatts that we are bringing on this year.
Operator: Our next question comes from the line of James Edward Fish with Piper Sandler.
James Edward Fish: Hey, guys. Maybe just following up on that. If AI is growing as fast as it is, and you are needing to bring on capacity now to meet all this demand, are you not going to need more capacity then? And, Matt, additionally, it looks like you are excluding finance leases in the free cash flow metric. Why treat it like this? If the gear were not financed, you would still have CapEx. It does seem to imply, and I am getting a lot of this question pre-market here, about 10% reported free cash flow in 2027. So can you walk us through that?
And I know this is a loaded question, but a lot of those that are providing leased servers are implementing memory cost increases. So I guess, how are you thinking about what commitments you actually have from them and the potential pass-through of memory costs?
W. Matthew Steinfort: Yes. I will take that in reverse. We have seen increased component costs, the same as others in the industry, and that is all reflected in our guidance. Again, it has not changed our return expectations or the economics; it just means there is more cost associated with some of the servers we are bringing on. But I am glad you brought this up, because you have to think about our free cash flow in tiers. First, you have unlevered free cash flow, which, again, you should be using from a valuation standpoint, and we are talking about that being in the 18% to 20% range.
When you add the interest expense, you get levered free cash flow, which is what we have historically called our adjusted free cash flow margin, and you are only giving up a couple of percentage points there. That interest right now is half from the Term Loan A and half from equipment leasing. And then, as you point out, you have the principal payments, which are more of a financing transaction. That is why they are not captured in either adjusted free cash flow or unlevered free cash flow.
But take those financing transactions: if you are going to lump everything in, then you say, okay, what about the mandatory prepayment of $25 million a year on your term loan? Okay, we will throw that in there. If you take all of the cash payments, including the principal payments and the prepayment of the Term Loan A, that is all financing activity. So, again, you are mixing metaphors here. But even if you throw that all in, we are still generating cash. So you are saying, hey, it is 10%. And I am saying, yes, it is 10%, and we are generating cash while we are accelerating the growth of this business into the thirties.
And on an unlevered free cash flow basis, it is 18% to 20%. So it is a testament to our ability to dramatically accelerate growth. We have taken growth from 11%, 12%, 13% to guiding to 30%, and we are generating incredibly strong unlevered free cash flow. We are generating very strong levered free cash flow. If you throw the kitchen sink in there and all the payments that we have to make, we are still generating cash. I mean, that is an incredibly strong position to be in, and we have a very flexible balance sheet. So we feel very good about the cash generation that we are setting out while we are delivering this growth.
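Matt's free-cash-flow tiers can be sketched numerically. In the Python sketch below, only the 18%-20% unlevered margin, the roughly two-point gap down to levered free cash flow, and the $25 million Term Loan A prepayment come from the call; the revenue base, interest expense, and lease principal figures are invented purely to make the arithmetic concrete.

```python
# Hypothetical illustration of the free-cash-flow "tiers" described on the call.
# Structure (unlevered -> levered -> after all financing payments) is from the
# call; the specific dollar amounts below are assumptions, not company figures.

revenue = 1_000.0                    # $M, illustrative base
unlevered_fcf = 0.19 * revenue       # midpoint of the 18%-20% range cited

interest_expense = 20.0              # $M, assumed; roughly half TLA, half equipment leasing
levered_fcf = unlevered_fcf - interest_expense   # the "adjusted free cash flow" tier

lease_principal = 45.0               # $M, assumed finance-lease principal payments
tla_prepayment = 25.0                # $M, the mandatory annual Term Loan A prepayment cited
cash_after_financing = levered_fcf - lease_principal - tla_prepayment

for name, value in [
    ("unlevered FCF margin", unlevered_fcf / revenue),
    ("levered (adjusted) FCF margin", levered_fcf / revenue),
    ("margin after all financing payments", cash_after_financing / revenue),
]:
    print(f"{name}: {value:.1%}")
```

With these assumed inputs the tiers land at 19%, 17%, and 10%, mirroring the "couple of percentage points" gap and the roughly 10% all-in figure discussed in the exchange.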
James Edward Fish: Yes. I mean, the growth acceleration is good. And, Padmanabhan, for you, on slide 20, and this got asked a couple questions ago to a degree, you point out the difference between you guys and the neo clouds and inference wrappers, and, maybe being humble about it, you note that you are about 75% of the way there in the first three categories. So is this something that we should be expecting to hear about at the April event, or what do you guys need to do to get to that full 100% differentiation?
Padmanabhan T. Srinivasan: Yes. James, I do not know if I will ever call myself 100% in those things because that market is changing so fast. If we ask five of our customers today what they want versus what they thought they wanted three months ago, it is meaningfully different. Because as they are growing their customer base and deploying their solutions, new things come up all the time. The capability of AI models evolves all the time. This is going to be a moving target for the next couple of years. But the first part of your question, absolutely, that is where our R&D team is super heads down, inventing new parts of the stack.
So you will hear a lot more about this on April 28. But I would say this is where I feel very confident that we already have a lead, and that lead is only going to grow over the next few quarters.
Operator: Thanks, guys. Next question comes from the line of Thomas Blakey with Cantor Fitzgerald.
Thomas Blakey: Guys, congratulations on a great quarter and a great outlook here. Maybe some follow-ups to my peers. Padmanabhan, you mentioned, I think it was to a previous question about demand outstripping supply and giving you great visibility that you have alluded to in this call. Not expecting you to give calendar 2028 commentary; if you wanted to because you looked out two years on the April 25 call, that would be great. But in addition to that, I am interested in what you are seeing in a pricing dynamic. If demand is outstripping supply, you are lining up these new AI natives. Just maybe some commentary on pricing would be helpful from this cohort.
Padmanabhan T. Srinivasan: Yes. Thanks, Tom. So I think we have already talked about what we are going to talk about for 2027. But in terms of the demand, demand is clearly there, and we are moving as fast as we can to first deliver on these three data centers that Matt talked about. From a pricing point of view, we have competition from all kinds of different players. And the pricing is holding, and in some cases, it has gone up. And we are very attuned to what is going on in the market. And there is a lot of scarcity of supply across the board.
So we are also in a position where we work very closely with our customers to ensure that we are calibrating the price that we have, both on-demand as well as contractual prices, to keep pace with what the market dynamics are at this point. But I would say nothing has materially changed. And the pricing is also a function of the generation of the GPUs that we are talking about. At the lowest level, if a customer wants access to GPUs, it is priced GPU dollars per hour.
And at that layer, it really depends on the generation of the GPU, whether it is the Blackwell or Hopper series from NVIDIA, or the MI350 and MI355 from AMD, or the MI300 or MI325. So it really depends on the generation. There are also other dependencies, like the cluster sizes, the cluster configuration, what kind of networking they want, and so forth. And as you move up the stack, if you look at slide 19, the one thing that I did not mention on slide 19 is that customers can enter our stack at pretty much any layer.
The higher up you go in the stack, you are not pricing by dollars per GPU-hour but by dollars per token. And there we have a lot more degrees of freedom in how we price versus competition, because you are charging dollars per token, but you also have the choice of running it on different types of hardware, and you can change the AI model that is servicing the token request. So we have more degrees of freedom. Some customers need that flexibility, and they are willing to live at the higher layers of the stack rather than dictating which generation of hardware they want to run on.
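The two pricing layers Padmanabhan describes can be compared with a toy calculation. Every number below is hypothetical; the point is only the structure: raw capacity is billed in dollars per GPU-hour, while the inference layer is billed per token, with the provider free to choose the serving hardware and model behind the price.

```python
# Hypothetical comparison of the two pricing layers described on the call:
# raw capacity billed per GPU-hour vs. inference billed per token.
# All numbers are invented for illustration, not DigitalOcean pricing.

gpu_hour_price = 3.50             # $/GPU-hour, assumed on-demand rate
tokens_per_gpu_hour = 2_000_000   # assumed throughput for some model/cluster config

# Effective cost per million tokens if the customer rents GPUs directly
# and manages the serving stack themselves:
cost_per_m_tokens_diy = gpu_hour_price / (tokens_per_gpu_hour / 1_000_000)

# A per-token product price; the provider keeps the freedom to serve the
# request on whatever hardware generation or model makes the economics work:
price_per_m_tokens = 2.50         # $/1M tokens, assumed

print(f"DIY GPU-hour cost: ${cost_per_m_tokens_diy:.2f} per 1M tokens")
print(f"Per-token product: ${price_per_m_tokens:.2f} per 1M tokens")
```

Under these assumptions the per-token product carries a premium over raw GPU rental, which is the trade-off in the answer: customers pay for flexibility and give up control over which hardware generation serves them.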
Thomas Blakey: That is super helpful, Padmanabhan. And just maybe an extension of that flexibility, it was impressive to hear about the zero churn in the large $1 million plus cohort, 115% NRR. I would love to know what the overlap there is with regards to the AI-native exposure. If you could maybe talk about just those customers and how much of that is from AI. And for Matt, relatedly, in your improving NDR, are we finally including AI and ML revenue there, and if not, when can we expect that? Thank you, guys.
W. Matthew Steinfort: Yes. Thanks, Tom. So on a customer count basis, about half of the million-dollar customers are AI customers and half are core cloud or general-purpose cloud only. On a revenue or ARR basis it is a little bit more AI, but not a lot; it is not too far off of 50/50. And as you saw in the materials, 48% of the trailing twelve-month incremental ARR is coming from AI customers. So that is how the split is. In terms of the NDR, no, it is not in there yet.
And the reason that we disclosed the AI customer revenue—and we will continue to disclose that as a metric and the growth rate—and also looking at the RPO, which is, again, a decent chunk of that, not all, but a decent chunk of that is also AI, is to try to give you better leading indicators of the performance of the AI customer base. The NDR, if you look at some of the charts that we showed with some of the bigger inferencing providers, they just got started on the platform in kind of the June, July time frame.
And there is a big difference in the size and caliber of the customers that we have been winning in the last six to eight months on the AI side. Those, we think, will have more of your traditional kind of NDR-like characteristics where they grow and expand on the platform using inferencing, which is more of a production workload, versus a lot of our earlier customers were smaller customers doing experimentation, doing projects, and they just do not look like NDR. Revenue was growing like crazy because we would be adding a ton of those customers, but if you looked at any of the individual customers, it was hard to see a pattern.
And what NDR is as a SaaS metric is it looks for patterns where you bring on a customer and you can expect them to do X, Y, Z over the next twelve months. And we just did not see that. And there is a lot of noise in our AI customer revenue, kind of lumpiness early, that we see changing. So we will continue to evaluate that every quarter, and at the appropriate time, we will contemplate rolling that in. But it is probably still twelve months away.
Operator: The next question comes from the line of Patrick Walravens from Citi. Please go ahead.
Patrick Walravens: Oh, great. Thank you. Congratulations on the quarter. And I have to say congratulations on the slide deck. It is fantastic, and I am sure all of your investors are going to appreciate it. So, Padmanabhan, I was looking back at my note from two years ago when you joined. And at the time, one of the things you said was that a durable competitive differentiator for us long term is going to be in the software layer. And you said you were focused on bringing simple, easy-to-use AI/ML capabilities on both hardware and software to developers.
So what I am wondering is, as you look back, and you were growing 11% and decelerating when you joined, which of the growth drivers that have caused you to accelerate, now that we are talking about 30%, did you anticipate, and which were unexpected? Fortuitous is probably the wrong word, since luck favors the prepared, but which were the surprises?
Padmanabhan T. Srinivasan: Yes. Thank you, Patrick. I would say what was surprising to me, and maybe I will take some creative liberty in answering your question. What took a few quarters for us to get right was, as I mentioned several times during this call, we had a constraint in keeping up with customers that were scaling rapidly and scaling big on our platform when I joined. So it took us a few quarters to really understand, get to the bottom of their needs, and there was a lot of work that had to be done for us to get to the 0% churn that I was so proud to share with all of you this morning.
So that took a lot of engineering effort, and I am super proud of my team. It is a lot of very complex technology work, all the way from advanced networking to fortifying our storage to inventing new things in our database offering, and so forth. So that took a tremendous amount of heavy lifting, and that job is not done yet. We started with $100,000 customers, then we focused on $500,000 customers. Now we are focused on million-dollar customers, and who knows, in the next couple of years we may be talking about $5 million and $10 million customers. That bar raising is an ongoing endeavor for us.
And on the more fun side of things is literally participating from the starting point with the AI-native ecosystem. So we are learning as they are learning, and we are inventing alongside them. And that is a great luxury to have because we feel like we can ride their growth curve, and as their needs increase and they are learning the right way to do this from a workload perspective, we are just trying to keep up pace, and they are super appreciative of us inventing on their behalf to make their life easier so that they can focus on their domain and invent new things for their customers.
So we will share a lot more of this on April 28, but that is how I would answer your question, Patrick.
Operator: Next question comes from the line of Mike Cikos with Needham and Company. Please go ahead.
Mike Cikos: Hey. Thanks for taking the questions here, guys, and congrats on the strong growth guardrails you are providing us. Matt, if I could just come back, and I know that the free cash flow topic has come up a couple of times here, but you can see as well as anybody just how sensitive investors in this market are to the AI CapEx investments that are required and the different financing vehicles that are out there. Just to be clear, when we look at the calendar 2026 versus the calendar 2027 guide, the delta between the unlevered adjusted free cash flow guide and the adjusted free cash flow guide, three points today, is expected to widen to about 10 points in calendar 2027.
If we take that one step further, and I know that your guidance for those guardrails for 2027 currently do not contemplate additional capacity coming online, but it seems fair that we should be assuming more capacity. And if that is the case, would that delta between the unlevered and the levered free cash flow margin widen further from there? Is that fair?
W. Matthew Steinfort: The way I think you have got to think about it is, again, if you are looking at the levered free cash flow, it has got other stuff in it besides equipment leases. It has got TLA interest. It has got other things. If you look at, as James was saying, if you look at the other cash, there is mandatory prepayments of the Term Loan A. So you have got to be real careful about what you are using for what. If you said, hey, what is the steady-state cash flow generation capability of this business? Again, because we lease equipment, we do not have an upfront capital requirement that makes it super lumpy.
We can make that smoother and we can grow. However, when you are growing a business, even with that model, and you are adding data center capacity, you have a couple of months where you are taking data center lease expense and you have not yet generated any revenue. Unlike when you buy gear, where you can put it in your warehouse and not expense it until you actually deploy it, when you lease gear, you start that lease expense as soon as it ships. So you have front-loaded costs that do not catch up to the revenue right away.
But because you did not have a big, giant slug of capital, as soon as the revenue starts generating, you are immediately generating cash, and you are improving your margins with utilization. That is why we have been very crisp about what is included in the numbers: to give you a sense of what the margins look like on a steady-state basis. We cannot just continually assume we are going to add incremental capacity, because I cannot tell you how much incremental capacity we are going to add; we have not contracted it, and we have not committed anything to incremental capacity.
So what we are showing is, when we add 31 megawatts, as an example, and you roll that forward a year, you have incredibly strong cash flow characteristics from that. There is going to be a short-term impact on gross margins and net income because of the timing dynamic I described, but that works itself through relatively quickly. And so you would expect that as we saw other opportunities to accelerate our business with similar economics, we would make similarly good decisions, and that engine will keep going. So I view it in a very different way than what you are describing.
I view it as, hey, if we are going to commit to more capacity, it is because we have more growth opportunities. And the returns are incredibly compelling. We are doing it in a way that matches the revenue and the cost, and we are not getting out over our skis and making massive commitments chasing the data center and GPU arms race. We are doing it methodically, where we have an advantage and where we earn a good return, and we are able to do it while, again, taking 11%–13% revenue growth to 30%, while still maintaining really good margins.
So we are really excited about the potential we have and the economics that we are delivering.
Mike Cikos: Thanks for that, Matt. Maybe for a quick follow-up here. Understood on the accelerating growth you guys are looking at throughout calendar 2026 just based on the megawatts coming online. One thing I wanted to ask, and I am sure that you guys have your own models as you are looking at the AI customers ramping, but to drive that 25%ish growth exiting calendar 2026, can you provide any additional color for what you are assuming in terms of ARR directly from those AI customers? If I am thinking about the $120 million that we see today exiting calendar 2025.
W. Matthew Steinfort: The only thing I would say is what we said is that the AI customer ARR in Q4 was $120 million, growing 150%. We have more demand than we have supply. We are bringing on supply. You should expect that it does not slow down.
Operator: Our next question comes from the line of Mark Zhang with Citi.
Mark Zhang: Hey. Great. Thanks for taking my question. Just given the strong demand environment, should we not see more capacity commits coming, I guess, like, announced today? And if that is not the case, then is there enough incremental capacity or megawatt capacity in your current footprint to support continued growth? Just any insights there will be appreciated. Thanks.
W. Matthew Steinfort: Sure. So, Mark, as we said, there is enough growth potential in the committed capacity to get us to 30% growth in 2027. Clearly, we are very cognizant of the data center market and very active in terms of the evaluation of that. We have not made any commitments at this juncture to share with the market. And if we get to a point where we make a commitment, we will certainly share that. But at this point, again, we thought it was incredibly important for people to understand how to digest capacity as we bring it on, and that is why we have guided to what we have, based solely on the 31 megawatts we have already committed.
And it gives you a good sense of how it ramps and what the economics are. And should we bring on incremental capacity, you will have a good model to add on to the growth ramps that we have already articulated.
Mark Zhang: Okay. Great. And then maybe related to that, can we get a sense of utilization of your current estate? Maybe give a sense of the current capacity—or we know the current capacity—maybe any sense of the contracted capacity that you have on the books? Thanks.
W. Matthew Steinfort: Yes. So from a contracted capacity standpoint, again, if you are talking about data centers, we have got 31 megawatts that we are adding to our roughly, call it, 43 or 44, which will put us at just about 75 megawatts when we are done. So we are sitting at, call it, 43, and we are adding six megawatts that will come online and start generating revenue in the second quarter. And the balance of the incremental 31, which is about 25 megawatts, will come on and start ramping revenue in the second half.
Whether we are at full utilization is a function of whether we decide to fill them all with GPUs right away or do it over time, because we like to stripe out the generations of GPUs; we do not like to go all in on one generation. But we will be at a very healthy utilization at some point in 2027, which is what enables us to get to that 30% growth.
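The capacity figures Matt walks through reduce to simple arithmetic. A quick sketch, using the round numbers from the call:

```python
# Back-of-envelope on the data-center capacity figures from the call.
# The footprint and commitment numbers are the ones cited; the tranche
# split follows the Q2 / second-half description in the answer.

current_mw = 44          # "roughly, call it, 43 or 44" megawatts today
new_commit_mw = 31       # committed new capacity
q2_tranche_mw = 6        # comes online and starts generating revenue in Q2

h2_tranche_mw = new_commit_mw - q2_tranche_mw   # balance ramping in the second half
total_mw = current_mw + new_commit_mw           # pro forma footprint

print(f"Second-half tranche: {h2_tranche_mw} MW")   # the "about 25" cited
print(f"Pro forma footprint: {total_mw} MW")        # "just about 75 megawatts"
```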
Operator: Thank you. At this time, we have no further questions. That concludes our Q&A session and today's conference call. We would like to thank you for your participation. You may now disconnect your lines. Have a pleasant day.
This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
The Motley Fool has positions in and recommends DigitalOcean. The Motley Fool has a disclosure policy.