Thursday, May 7, 2026 at 4:30 p.m. ET
The restructuring and divestiture of the biosecurity business led to streamlined reporting, with all metrics and commentary now reflecting only continuing operations. Ginkgo Bioworks (NYSE:DNA) management is repositioning the company around full-scale autonomous lab capability, deploying Nebula with over 100 interconnected robotic racks and demonstrating sizable protocol and scientist adoption. The company is leveraging partnerships with OpenAI, AWS, Benchling, and Tamarind Bio to create new sales channels for laboratory services, while federal policy support and a new $47 million national lab contract signal institutional uptake. Cost control remains a core priority, with substantial reductions in R&D, G&A, and cash burn reported, and cash reserves of $373 million cited as sufficient for planned AI and automation investments.
Jason Kelly: Thanks, Daniel. We always start with this. Ginkgo's mission is to make biology easier to engineer. And I mentioned this at the last earnings call, but in 2026, our focus will be on investing to win the category of autonomous labs. And I'm really excited, even since we just spoke a few months ago, this category has really been growing in attention, new companies in Silicon Valley pursuing this, a lot of interest from the AI Frontier labs about the application of AI models in science via autonomous labs. Government talking more about this. So I do think we're on to the right track with this focus for the company.
The 2 big ways I'm going to be pursuing that goal in 2026: the first is to take our services -- solutions, data points and cloud lab -- and run them on top of our autonomous lab here in Boston that we call Nebula. That's a chance to prove out the capabilities of our system with real-world activities. And then the second big area of activity will be getting early adopters of autonomous labs out in the world to buy our systems, like we've done already with Pacific Northwest National Labs that I talked about last time. So excited to pursue both of those, and you're going to hear more about it from me in my section.
We also -- in the last quarter, we were able to close on a deal I talked about extensively last time, which is the spin-off of our biosecurity unit into a new company called Perimeter. I want to say congratulations to the team at biosecurity at Ginkgo on pulling that off -- $60 million and a lot of great new investors coming in to focus really firmly on the area of defense tech and building sort of a biosecurity prime. Ginkgo is a shareholder in that company. We're super excited to see it succeed.
And I think this is a really nice opportunity, as I talked about last time, both for Ginkgo to keep our focus on the autonomous labs and for the team at Perimeter to grow under their own brand with a new set of defense tech-focused investors. Our focus over the last couple of years was very much on getting these numbers where they are today, bringing down our cash burn in the company. We guided towards this, and Steve will touch on that in his section. But again, happy to have a very strong cash position, $373 million with no bank debt as of Q1 2026. And so you'll hear a little bit more from Steve on this.
But this sets us up very nicely. We're well capitalized to pursue this area of autonomous labs. We have these base service businesses to build on top of and the lead in developing the technology and you put all that together. And I think we're by far the best bet in this sector. All right. I'm going to pass it on to Steve to dig into the financials.
Steven Coen: Thanks, Jason. Before I walk through our financials, I want to take a moment to frame an important change in how we are presenting our results beginning in Q1 2026. As we announced in February, we entered into a definitive agreement to sell our biosecurity business, which was previously reported as a separate segment. Further, as Jason noted, we closed that transaction on April 3. The transferred biosecurity assets met the criteria under U.S. accounting rules to be classified as held for sale, with the financial results reported as discontinued operations as of March 31, 2026. This is the first quarter in which biosecurity is reflected as discontinued operations within our financial statements.
And to comply with the accounting rules, we have retrospectively recast, and will continue to recast, all prior periods presented to conform to this presentation. That means the revenue, operating expenses and cash flows previously attributed to the biosecurity business are removed from each line item of our continuing operations and cash flows wherever prior period information is presented, including for Q1 of last year. The former biosecurity results are now reported as a single net line, loss from discontinued operations, below loss from continuing operations. To be clear, all of the financial commentary I will provide today relates exclusively to continuing operations. We will not be discussing the biosecurity business further in our prepared remarks.
On April 7, 2026, for your information, we filed a current report on Form 8-K that includes pro forma financial information for fiscal years 2023, 2024 and 2025 on a continuing operations basis. Following the biosecurity divestiture, we now operate as a single segment. So with that, I'll now discuss our Q1 results. Revenue was $19 million in the first quarter of 2026, down 49% compared to the first quarter of 2025. As previously disclosed, revenue in the first quarter of 2025 included $7.5 million in noncash revenue relating to the mutual termination of the BiomEdit agreement. Excluding this, revenue in the first quarter of 2026 was down 37% from the prior year period.
It is important to note that our net loss includes a number of noncash and other nonrecurring items as detailed more fully in our financial statements. Because of these noncash and other nonrecurring items, we believe adjusted EBITDA is a more indicative measure of our profitability. A full reconciliation between adjusted EBITDA and GAAP net loss from continuing operations can be found in the appendix. In the first quarter of 2026, R&D expense decreased 38% from $49 million in the first quarter of 2025 to $30 million in the first quarter of 2026. G&A expense decreased 35% from $20 million in the first quarter of 2025 to $13 million in the first quarter of 2026.
These decreases were all driven by our restructuring efforts. Net loss from continuing operations was $76 million in the first quarter of 2026 compared to a loss of $83 million in the prior year period. The reduction in loss year-over-year was due to our restructuring efforts. Moving further down the page, you'll note that adjusted EBITDA in the first quarter of 2026 was negative $42 million, an improvement from negative $44 million in the first quarter of 2025.
Since we are now only operating in a single segment, we only present a single measure of adjusted EBITDA, and it is important to note that adjusted EBITDA includes the carrying cost of excess lease space, which you can see was $16 million in the first quarter of 2026. Previously, this cost would not have been included in the former presentation of segment adjusted EBITDA. This cost represents the base rent and other charges relating to lease space which we are not occupying net of sublease income. This is a cash operating cost that is not related to driving revenue right now and can be potentially mitigated through subleasing.
And finally, cash burn in the first quarter of 2026 was $48 million, down from $58 million in the first quarter of 2025, a 17% decrease. As previously reported, in October 2025, we amended and reset the annual commitments with Google Cloud for $14 million. Resetting the commitment reduced our future minimum commitments by more than $100 million compared with the original terms and extended the commitment term from 3 to 6 years. We paid this $14 million in Q1 of 2026, which is reflected in our cash burn for the quarter.
Excluding the payment to Google Cloud, cash burn reflects a significant decrease in the first quarter of 2026 compared to the first quarter of 2025, which was a direct result of the restructuring. Now turning to guidance. As we discussed in February, 2026 is about continuing to be cost efficient, while investing in our AI robotics and software to bring autonomous labs to our bioscience customers, including the build-out of our Frontier Autonomous Lab in Boston.
We have turned the page on our pure focus on restructuring actions to focus this year not only on cost efficiency, but on investing in what we see as our opportunities, while continuing to provide our customers the advanced services that they have come to expect. For these reasons, we believe cash burn best reflects our continuing services and tools and further investments in autonomous labs. In terms of outlook for the full year, we are reaffirming our overall cash burn guidance for 2026, totaling $125 million to $150 million. This range reflects a firm balance among cost efficiency, our continuing services and tools, and the further investments we are making.
In conclusion, we are pleased with our continued improvements in cash burn efficiency and our business pursuits for 2026. And with that, I'll hand it back over to you, Jason.
Jason Kelly: Thanks, Steve. So I'm going to dive in on the strategic section. I'm excited to go into this today. Our mission is to make biology easier to engineer. And the way we're really aiming to solve that problem, we believe the bottleneck fundamentally is the laboratory work associated with bioengineering. And so I'm going to dig deep today and talk about why autonomous labs will be replacing the lab bench. I want to highlight some of what we're doing with Nebula, our system because we have some news this month in terms of expanding that system. And then finally, the services that we put on top of Nebula, our cloud lab, data points and solutions.
These are sort of, we call it, our Starlink, right? If you think about SpaceX, 70% of the launches last year were actually Starlink, their own internal product. In the coming year, the ability for us to scale up our autonomous lab and showcase that you can make money on services without having people in the middle of the lab doing those laboratory services, I think, is a real highlight and will help drive sales of our systems into the world. So I'm going to talk about all 3, and let's dive in. Okay. So I gave this analogy.
I'm going to do it again because, for new folks listening on the call, I think it's worth understanding what we mean when we say autonomous lab as distinct from traditional lab automation. So I'm going to give an analogy from the transportation industry. On the y-axis here, we have the amount of automation for a certain type of transport. And on the x-axis, the request flexibility. In other words, is the user asking the transportation system to do something different or not? And so for a low request flexibility and a high level of automation, that's your subway, right? It's the red line here in Boston; you sit down in the subway, and it takes you away.
You don't have to do anything; it is a high level of automation, totally automated transport, but it is very inflexible. You have to want to go to one of the stops on the red line. Low amount of automation, high amount of flexibility: that's a car, right? You get your hands on the wheel, foot on the pedals, and it'll take you to your house or to the grocery store, anywhere you want to go. And that's roughly what the transportation system has looked like for the last 100 years. Let's go to the next slide.
If you've been to California in the last 4 or 5 years -- or now L.A. or Austin, or soon Boston -- and you sat in the back seat of a Waymo, it is amazing. It is like sitting on a subway, you don't have to do anything, but it will take you right to your house. It has the flexibility of a car. And so it's those 2 things together, that flexibility plus a high level of automation, that mean we actually give it a new name: we don't call it an automated car, we call it an autonomous car.
Because up until now, you've needed a human being with a brain in the loop in order to manage that amount of flexibility in the system. All right. So if you look at the next slide, you'll see the miles traveled by cars and trucks versus subways and trains in the United States; it's more than 99% in cars and trucks. And that's not because we don't know about subways. It's that we need that flexibility for our day-to-day lives; it's required for the transportation that humans need. All right. So now let's go into the lab bench and into the lab. So low amount of flexibility, high amount of automation: we actually have automation in the lab.
It's called a work cell. That's a 3D schematic of a work cell we have here at Ginkgo. Companies like HighRes, Thermo and Biosero make these. And they're like a subway, right? They're great. They're fully automated. They'll do an experiment without a scientist in the loop, but it better be the experiment that you ordered yesterday. It's not going to do a new experiment for you today. Low amount of automation, high amount of flexibility. We have those 2 in the lab. It's called the lab bench. And you, as a scientist are basically the human glue, connecting all of these different devices in the lab together to do whatever protocol you want to do.
And so again, here's the kicker. If you look at research budgets between laboratory work cells, which are used in things in pharma companies like high-throughput screening or combinatorial chemistry or things like that versus at the lab bench it's about 95% plus at the bench. And again, it's not like we don't know about work cells. It's that scientists need flexibility to explore all the different hypotheses they have for discovering a new drug or developing a new crop trait or whatever type of biotechnology they're doing. So this is what we're trying to build at Ginkgo. We're trying to get up to that top right corner and make a Waymo.
We're trying to make an autonomous lab that has the flexibility of the bench so scientists can order whatever experiment they want, but the automation of a work cell. In other words, they don't have to be there to do each and every step and move the samples among the different equipment and program the equipment. They can just hit go and have that protocol run end-to-end for them via the automation, but whatever they want to order that day. That's the target. And look, if you go to the next slide, the value prop here, I think, is very clear for getting rid of the lab bench, massive overhead cost savings.
You've heard a lot about overhead costs at academic research labs and things like that. That's really paying for, ultimately, millions of square feet of laboratory space -- it's 50 million square feet of laboratory space just in the Boston area. So you can dramatically reduce that. You can increase the research productivity of your human scientists as we need more data for AI, and we'll talk about that in a minute. And then we can enable AI scientists to run these lab-in-the-loop experiments. We just announced a project with OpenAI a few months ago, where GPT5 ran our lab. That's the kind of lab-in-the-loop experiment that we're seeing increasingly in the biopharma industry as well.
But to put a point on it, like a typical large pharma, biopharma biotech spends $1 billion to $3 billion a year on research, not clinical trials, but spending on 1 million-plus square feet of lab benches. And if you look at their spending within that on automation, it's well below $100 million, usually much less. And frankly, I think those numbers should flip. I think really, the majority of the capital should be going towards automated laboratory work rather than manual benches. And the reason for that is, I think, a relatively straightforward calculation. On the next slide, you can see comparison between a traditional manual lab and an autonomous lab. It's about a threefold space improvement.
When you take all these -- this equipment that's often very spread out in a normal human-operated lab because people need to get around it and safety reasons and all these things. But in an autonomous lab, you can jam all that equipment right next to each other about as tight as you can make it for the arms to work and things like that. So it's about a threefold space reduction. And then -- the manual labs really are just run 40 hours a week, right? I mean, it's when people come in the lab and humans are there, and they got to be there, and that's when you can get people to work.
There's almost never multiple shifts in these types of sort of high-end research labs. It's really a 40-hour work week. And our system here in Nebula is running 168 hours a week. So 24/7, and that's a fourfold improvement in sort of hours available for utilization of your laboratory. So I think a real clear, threefold, fourfold on 2 different axes. It is a clear value driver if you go to the next slide. I think there's little question the ROI is there. I think the big question with autonomous labs is a technical one.
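The two multipliers in that pitch compound. Here is a back-of-the-envelope sketch in Python using only the numbers from the call (a roughly 3x space density gain and 168 vs. 40 operating hours per week); the combined figure is an illustration of how the axes multiply, not company guidance:

```python
# Back-of-the-envelope math for the two multipliers described above,
# using only the numbers from the call. The combined figure is an
# illustration of how the axes compound, not company guidance.

SPACE_DENSITY_GAIN = 3        # equipment packed ~3x tighter than a manual lab
MANUAL_HOURS = 40             # single-shift manual lab, hours per week
AUTONOMOUS_HOURS = 24 * 7     # 168 hours per week, running 24/7

uptime_gain = AUTONOMOUS_HOURS / MANUAL_HOURS        # 4.2x utilization
combined_gain = SPACE_DENSITY_GAIN * uptime_gain     # ~12.6x per square foot

print(f"uptime multiplier: {uptime_gain:.1f}x")              # 4.2x
print(f"throughput per square foot: ~{combined_gain:.1f}x")  # ~12.6x
```

So the "threefold, fourfold on 2 different axes" works out to roughly an order of magnitude more throughput per square foot; the remaining question, as the call notes, is the technical one.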
It's how do you get that high level of automation and the high level of flexibility, that top right corner without a human being in the loop, right? It's the same question of the Waymo, how do you get it to navigate all these different environments and different roads without a human in the loop, if you can do it, it's an obvious win. If we can do this in the lab, it's an obvious win. All right. So let me talk through a little bit the design constraints that we focused on at Ginkgo. On the next slide, you can see a work cell, one of those subways, the way it's designed is it's designed against a particular protocol.
So if you're a biopharma company and you want to build a high-throughput screening work cell and you call a traditional automation vendor, the first question they're going to ask you is: tell me about your protocol and tell me what throughput you need to run at. How many samples do you want to get through every week? Because they're building you a subway line. It's going to be built to do that protocol for you. But if you're a new facility head who is opening a lab in Central Square here in Cambridge, and you are building it for a scientific lead.
You don't ask that scientific lead, "hey, what's the protocol you're going to run in this lab that 30 of your scientists are going to use to do work?" You say, "what kind of science you're doing?" And very specifically, "what equipment do you want me to install in that lab so that your scientists can be productive over the next 3 to 5 years as they use the lab?" And so it is oriented around the equipment rather than the protocol. And that's a sort of a subtle point, but it has a huge amount of consequences when it comes to how you design the hardware and software that responds to this challenge.
And so if you go to the next slide, you can see our hardware solution here, what we call our RACs, our reconfigurable automation carts. It's basically a robot wrapped around each laboratory device, so we have control over that environment. There's a HEPA filter on the top, which is important for a lot of biological work where you have opportunities for contamination and things like that. We have a 6-axis robotic arm and a piece of magnetic motion track that allows you, if you go to the next slide, to LEGO-block these together into ultimately very large setups. And we're at 50-plus right now in the lab, and it's growing quickly.
I'll show you some photos in a second. We have 103 racks coming online in just about a week, all in one big setup here in Boston. And so, if you go to the next slide -- this is actually a video of the OpenAI protocol. So we had this project with OpenAI, where GPT5 controlled the lab, and we made a video of one of the samples just moving through the various racks for that protocol. And so this just gives you a sense of how it works, right?
So you have these tracks, and we're able to move things around; our one sort of constraint on the system is that we pass things in what's called SBS format. So that little rectangle you saw there is about a 3 x 5-inch rectangle, and it can contain a 96-well plate, or in this case a 384-well plate, or a 1,536-well plate. It can also carry consumables like tips and other things. And the arm picks up that piece of plasticware or samples or tips or whatever it might be, and then puts it onto a particular device. So in this case, it just went through an acoustic liquid handler.
Now it's going onto a Bravo liquid handler. You saw, again, that first thing that got put down there was actually the plastic tips that are now getting picked up by the liquid handler. And the other 2 plates are the sample and destination plates. So in this case, for the OpenAI project, we're picking up some synthetic DNA, and we're putting it into reaction mixes that were designed by the GPT5 model at the time, right? And so again, a key feature here is that any device that accepts those SBS plates, we can integrate into the racks. It takes us usually like 1 month to 1.5 months if it's a new device.
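For readers unfamiliar with the SBS standard referenced here: it fixes a single plate footprint (roughly 3.4 x 5 inches) that carries 96-, 384-, or 1,536-well grids (8x12, 16x24, and 32x48). The helper below is purely illustrative, not Ginkgo software; it just maps a well label like "B3" to grid coordinates:

```python
# Illustrative helper for SBS-format plates: the same footprint carries
# 96-, 384-, or 1,536-well grids. Maps a well label to 0-based (row, col).
# Hypothetical code for explanation only, not part of any Ginkgo system.
import re

PLATE_GRIDS = {96: (8, 12), 384: (16, 24), 1536: (32, 48)}

def well_to_rc(label, wells=96):
    """Convert a well label ('A1', 'P24', 'AF48') to 0-based (row, col)."""
    rows, cols = PLATE_GRIDS[wells]
    m = re.fullmatch(r"([A-Z]{1,2})(\d+)", label.upper())
    if not m:
        raise ValueError(f"bad well label: {label}")
    letters, digits = m.groups()
    # Rows run A..Z, then AA..AF for the 32-row 1,536-well plate, so only
    # 'A'-prefixed double letters occur in practice.
    row = (ord(letters[0]) - ord("A") if len(letters) == 1
           else 26 + ord(letters[1]) - ord("A"))
    col = int(digits) - 1
    if not (0 <= row < rows and 0 <= col < cols):
        raise ValueError(f"{label} is outside a {wells}-well plate")
    return row, col

print(well_to_rc("H12", 96))     # (7, 11) -- last well of a 96-well plate
print(well_to_rc("AF48", 1536))  # (31, 47)
```

Because every plate, tip box, and reservoir shares this one footprint, a single gripper design and a single transfer convention cover the whole lab, which is what makes the "any device to any device" routing tractable.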
We've now got 80-plus devices on there. We're adding new devices all the time. If a customer asks us for one, or if we want to add one to Nebula, we just bring it online. So now that plate is going to get shaken up and put onto, ultimately, a thermocycler, the final analytical device, to run the qPCR reactions and give a readout on the performance of each one of those samples back to, in this case, an AI model. That data, for most of the runs on Nebula, is going back to a human scientist, as opposed to an AI scientist.
But we expect there will be a mix of both as science goes forward; we'll have both scientists and their agents ordering experiments on autonomous labs. Go to the next slide. The great thing about this system is it can expand. We started Nebula with, I think, about 8 racks, doing NGS, like next-generation sequencing, prep for our samples here at Ginkgo, and expanded ultimately to over 100 racks on the system now. So let's dig in a little bit. I want to go to the next section. So that was sort of the theory: how we design the hardware in order to solve some of the challenges of the autonomous lab and what autonomy means.
And now I want to dig in a little bit on Nebula specifically, because I think what's really unique about Ginkgo is we're not just a hardware company. We actually run BSL-2 labs here in Boston and do scientific partnerships with some of the largest biotech, ag biotech, industrial biotech companies in the world. And so we can actually show what it looks like to do real science on a system like this.
And so if you go to the next slide, one of the things I'm quite proud of that we've been able to show in the last quarter is over 100 protocols, with more than 30 of them being unique, submitted by scientists -- and I'll mention, these are not being submitted by automation engineers or experts in robotics -- onto our system, Nebula, here in Boston, which has 50-plus lab devices all integrated together, where you can send samples point-to-point from any device to any other device as requested by those scientists.
There is nothing else like Nebula in the world today, doing sort of open-ended science like this at this scale, with this number of unique protocols and end users. And it's proof that autonomous labs are feasible. I mean, there's work to do, and we talked about that. Things break as we are scaling this up, that's for sure, but it is evidence, in my view, that this is going to land -- like we are going to beat the manually operated lab.
And so if you go to the next slide, I want to walk through a few of the key things that you got to show if you're going to take out one of those laboratory floors at Takeda or Merck or Novartis or whatever or Bayer Crop Science or any of these companies that do a lot of laboratory work. So first, you want to connect 100-plus devices in a single automation setup, all right? So it can't just be 5 or 10. A scientist expects to have access to many different devices in order to do whatever protocol they might read about in a scientific paper this week. And then I think about 100 is the right number.
So we've been able to -- this week, we'll find out; we're turning it on in about 5 days -- 103 racks, all in one big setup. And the reason we can do that is because of that productized rack hardware, that cart I showed you. They came in off a truck, we just rolled another 50 in, and those have all gone in over the actual install in the last 3 or 4 weeks. So it's pretty fast to put that many new devices on an automated setup. We'll see if it works. Second, we have now run 30-plus unique protocols, 100-plus total protocols.
But you've got to get into that, I think, 50 to 100 to maybe 200 unique protocols, all running on the autonomous lab at the same time. And we do that with our Catalyst software, the scheduler that we built; it's a very complicated scheduling problem. It's really easy to mess this up. Biology is very sensitive to timing. Things break all the time as we keep driving the scale up here. So we're getting to do that quick cycle of debugging and improving the system, but that scheduler is really the key piece of software driving this.
And then finally, scientists -- scientists, not automation engineers. I think on a peak day on Nebula, we had 439 or so scientists submitting. So that's really exciting, to have that many different scientists submitting protocols on one automation system. Again, I don't know of any automation system in the world that's been able to do that before. And we're able to do that in part by leveraging AI coding tools with custom harnesses wrapped around them that basically understand how to translate the scientist's intent in human language into code to operate the autonomous lab. And that is a big unlock. We're very thankful for what's going on with all the coding agents.
That's a real help for improving the ease of use, because at the end of the day, to make robots do something, you have to program them. And to walk up to a lab bench and do your work by hand, you don't. So we have to solve this problem. We can't make it so that scientists have to become coders to do their job, and we've really just been given a gift by these AI coding tools -- again, like Codex and Claude Code and things like that -- that can sit inside other tools that are specific to the automation to get this done.
So those are the 3 big ones, and I'm pretty happy with the progress on all of them. So if you go to the next slide, as I mentioned several times now, we're going from 50 to 103 racks by the end of this month. It's going to be awesome. It's a really cool system to see; people should come out and visit it. If you go to the next slide, that scheduler is not trivial. So this is an example of our scheduler running, I think, 17 or 20 different protocols at the same time. Each color is a different protocol. Each row is a different device on the system. The x-axis is time.
And so you can imagine if you want to add a new protocol to that, and you're like: okay, I need to use the device on row 3, the device on row 7 and the device on row 9; I need the device on row 3 for the first 3 minutes, then I'm willing to tolerate up to a 1-hour gap, then the second device for 15 minutes, then up to a 30-minute gap, then the last device. It will check: can it fit you in?
And if it can fit you in -- or if it can fit you in by moving a couple of other things in a way that doesn't disrupt them or break their protocols -- it will fit you in. That's awesome, right? That's very much not how traditional lab automation, the subways, work. They're running a batch; that subway line is showing up at a certain time. You can't just jump in and insert yourself in the middle, but you can with our scheduling software here. On the next slide -- the little green one, it's hard to see, but the third column over from the left there has the names of all the different scientists submitting.
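The scheduling behavior described above -- threading a new protocol's device reservations through gaps left by already-running protocols, while respecting each step's maximum tolerable wait -- can be sketched as a toy feasibility check. This is a hypothetical, minute-granularity simplification, not Ginkgo's Catalyst scheduler; all names and data structures here are invented for illustration:

```python
# A toy sketch of the scheduling problem described above: can a new
# protocol's device reservations be threaded through the gaps left by
# protocols already on the system? Simplified model (minute granularity,
# no rescheduling of existing runs) -- NOT Ginkgo's Catalyst scheduler.

def overlaps(busy, start, dur):
    """True if the window [start, start+dur) collides with a busy interval."""
    return any(s < start + dur and start < e for s, e in busy)

def schedule(steps, busy_by_device, horizon=480):
    """Greedy earliest-fit placement.

    steps: list of (device, duration, max_gap_after_previous_step).
    busy_by_device: device -> list of (start, end) reservations.
    Returns a list of step start times, or None if nothing fits."""
    for t0 in range(horizon):            # try each candidate start minute
        placed, prev_end = [], None
        for device, dur, max_gap in steps:
            lo = t0 if prev_end is None else prev_end
            hi = t0 if prev_end is None else prev_end + max_gap
            start = next((t for t in range(lo, hi + 1)
                          if not overlaps(busy_by_device.get(device, []), t, dur)),
                         None)
            if start is None:
                break                    # timing constraint violated; try later t0
            placed.append(start)
            prev_end = start + dur
        else:
            return placed                # every step found a legal slot
    return None

# The qPCR machine is busy for its first 30 minutes, so the 10-minute
# qPCR step waits until t=30 -- within its 60-minute gap tolerance.
busy = {"liquid_handler": [(0, 5)], "qpcr": [(0, 30)]}
steps = [("liquid_handler", 3, 0), ("qpcr", 10, 60)]
print(schedule(steps, busy))  # [5, 30]
```

A production scheduler would also shift existing reservations, model arm transfer times, and recover from device failures, but the core question -- can this step sequence fit without violating its timing constraints -- is the one sketched here.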
So I really love this. I love that we're seeing different people submitting different orders for protocols every day. It's really exciting. And again, I think it's unique. We're also seeing a lot of energy on the U.S. government side. If you go to the next slide, there's a lot of new policy action here. There's the Genesis Mission, which we're fortunate to be a part of, from the White House, to bring AI into the national labs. But there's a big motion right now where we're seeing an increasing amount of drug discovery work moving to China from Kendall Square, which I was talking about earlier, here in the Boston area.
And that's simply because Chinese scientists are paid 1/3 as much and they're doing work equal to what's happening here in the U.S. -- like, they're just as good. They're just as smart. And so I think if we want to remain competitive, we've got to think about doing our research in a fundamentally different way in the United States. I don't think we can just rest on our laurels of having the only smart scientists in the world in this area, at least versus China. I think that era is over -- firmly over at this point. And so we've got to think about a new way to do it.
I'm pretty heartened to see activity out of the National Science Foundation, which is funding $100 million for a network of cloud laboratories and autonomous labs. There's a new bill introduced by Senator Young to sort of do more of these cloud labs and autonomous labs. So hopefully, we see more here, but I'm encouraged by what we see already. If you go to the next slide -- we're obviously very fortunate.
I had a chance in December to sort of ribbon-cut the first of our RAC robots going into Pacific Northwest National Labs, and we signed a new contract for $47 million -- a much larger autonomous lab setup, nearly 100 RACs, going into a new building at PNNL in a couple of years. So this is really exciting, and I think it sort of highlights the direction I believe our national labs, our scientific research in the country, will go. If you go to the next slide, we were lucky to give ARPA-H a tour of Nebula. We have a great project with them. And the work is accelerated by having these autonomous labs available to our scientists at Ginkgo.
I think this is something that makes a lot of sense for a lot of labs at the National Institutes of Health, for example, or NSF-funded labs or academic research universities. They would all be accelerated if our scientific talent could get many more of their hypotheses tested than they do today due to the limitations of the manual lab. Next slide. Listen, Nebula is showcasing what is possible, and that means that early adopters are getting excited about it. So we are also building autonomous labs for that left end of the chart here, the very earliest adopters, the people that are excited to try this out as an alternative to their lab benches.
And so we'll keep leaning in there, building those systems as that demand comes in. And we are seeing, if you go to the next slide, a lot of interest. We've had 600-plus visitors in the first quarter. I'll show it at the end -- we have a little signup; you can sign up. We do tours weekly, so if any of you listening in want to sign up, we're very happy to give you a tour. So okay, that's Nebula, and that's the dive on that, right?
Now I want to talk a little bit about our service businesses, Cloud Labs, data points and solutions, which I think of a little bit, like I said, like our Starlink, right? So last year, 70% of the launches at SpaceX were Starlink, if you go to the next slide. That's a huge advantage for SpaceX. That means they get to be creating an asset, a moneymaking asset in the form of Starlink while also getting to test over and over again, their launch platform. And their launch platform, ultimately, I think in their view, is the big product, right, that they can have that sort of transportation layer to space.
But today, they are 70% of the demand for that platform, right? I see a similar situation with the autonomous lab. We are able to have a big system here in Boston and basically prove out moving our work from data points, Ginkgo Cloud Lab, solutions, even our reagents business onto that platform. And if you go to the next slide -- really excited, we got our cloud lab off the ground just in the last quarter, and it's really been exciting. This is from The Times of London: do you want to run an experiment for $39? There's a lab to do it for you. Go check out cloud.ginkgo.bio.
You can go to the estimate tab at the top and type in whatever protocol you're interested in. It will look up whether we have the equipment needed to do your protocol, and if so, it will make an estimate of what the price would be to run that protocol in the cloud lab. And people are, I think, pretty surprised at how inexpensive it can be. That is a reflection of where all the costs lie in doing lab work, which is in manual lab work: done 40 hours a week, at low equipment density and low equipment utilization, in laboratories that cost a fortune to run.
That then flows through, and it means all of the CRO services you order and so on are very expensive. We think we can solve that problem through automation, and the cloud.ginkgo.bio cloud lab service is really a great way to do that. If you go to the next slide, this is what OpenAI took advantage of when we did this project where GPT-5 ran the lab. We had an awesome result back in February: we showed that after 6 rounds of design, we had improved the cost of cell-free protein synthesis by 40% over the scientific state-of-the-art. That opened a lot of eyes.
I think people weren't really -- we didn't know ahead of time whether the models would even be able to design experiments and interpret data at this level of sophistication. So really excited about that, and really excited about future work we're going to be doing to keep proving this out with AI. It's a neat line of work. I would say it's distinct from the autonomous lab -- I'd call this really an AI scientist, using the autonomous lab, using a cloud lab, to get its work done. But it is a really important thing to watch if you're following how AI is changing science. On the next slide.
Also exciting, just in the last quarter: 3 new channels coming to our cloud lab and data points services. Amazon Biodiscovery got launched by AWS, which is basically a platform to allow you to design antibodies. All 3 of these are sort of in the antibody space -- Benchling similarly, and then Tamarind Bio. Tamarind and Amazon are sort of ways for pharma companies to access these frontier bio models. So if you think of things like AlphaFold, which got the Nobel Prize for Demis at Google -- that was one of the earliest protein design models. There are many more now. They're computationally intensive.
They're interesting, and they help drug discovery scientists come up with a design for an antibody or a protein for their drug. But then you've got to test it, right? We don't know if these things work in biology unless you go into the lab. And so the idea is, could you have these layers where you access the latest models and all the compute to power them?
And then when you're ready to do your experiment, you hit a button, it kicks the designs to a cloud lab to do it for you, and the data flows back, nicely packaged, right to the model -- and you can run that loop as many times as you want. So that's sort of what's going on with Amazon and Tamarind. And then Benchling is really the leader in electronic lab notebooks, and it's a similar idea: if you're in the ELN as a scientist and you've designed an experiment, could you ultimately hit go and kick it off to a cloud lab? We partner there with our data points service, again around antibodies.
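The loop described here -- model designs, cloud lab runs, data flows back, repeat -- can be sketched in a few lines of Python. This is an illustrative sketch only; every class and function name below is a hypothetical stand-in, not Ginkgo's or any partner's actual API:

```python
# Illustrative sketch of the design-test loop described above.
# All names are hypothetical stand-ins, not a real API.

class StubDesignModel:
    """Stand-in for a protein-design model that proposes candidates."""
    def propose(self, results_so_far):
        # A real model would condition on all data gathered so far.
        return {"sequence": f"design-{len(results_so_far)}"}

class StubCloudLab:
    """Stand-in for a cloud lab: accepts designs, returns measurements."""
    def submit_and_wait(self, designs):
        return [{"design": d, "binding_score": 0.5} for d in designs]

def run_loop(model, lab, rounds=6, batch=8):
    """Design -> run -> analyze, repeated for a fixed number of rounds."""
    results = []
    for _ in range(rounds):
        # The model designs the next batch, seeing all prior results.
        designs = [model.propose(results) for _ in range(batch)]
        # "Hit a button": the designs go off to the cloud lab...
        data = lab.submit_and_wait(designs)
        # ...and the packaged data flows back for the next round.
        results.extend(data)
    return results

results = run_loop(StubDesignModel(), StubCloudLab())
print(len(results))  # 6 rounds x 8 designs = 48 data points
```

In Ginkgo's framing, the same loop shape applies whether the designer is a specialized protein model or a reasoning model designing whole experiments.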
So super exciting to see these. I think these are early indications of what could become a norm for how scientists do their work in the future and order their laboratory experiments. I'll just say a couple more quick things about data points. Really excited about the progress here: working with 10 of the top biopharma companies in the world just in the first year of running it. It's a good mix of pharma and government and even tech companies and techbio companies. We've done a nice job, on the next slide, of really being a community leader here. We're running competitions.
There's a virtual cell pharmacology initiative where we'll actually test compounds for free. People should definitely check that out if you're in the small molecule drug discovery space. So really neat opportunities there, and we have summits and things like that. It's been good. I think AI as applied to the design of drugs is a big area, and with data points, we're operating almost like a Scale AI -- creating those big data packages to train the models. All right. Next slide. We have had a long-standing business in solutions, more than 250 of these research partnerships over the last 10 years.
It's gotten us to work with the R&D groups of some of the largest companies in pharmaceuticals, industrial biotech and agricultural biotech. And uniquely at Ginkgo, it is a huge range of different kinds of research -- from microbes associated with the roots of corn, trying to engineer them to produce fertilizer, to mRNA therapeutics and antibody development in pharmaceuticals, to enzymes for industrial biotech. A really wide range of different types of genetic engineering and biotech lab work has happened at Ginkgo in a not-totally-automated way -- in other words, not with no people in the lab, but semi-automated.
So a human interacting with a liquid handling robot, a human interacting with various benchtop devices that can take a lot of samples at once. We were not all the way to an autonomous lab, but we've been doing a lot of variable work for years in semi-automated setups. And so if you go to the next slide, I'm most excited to move this kind of work onto Nebula. It is the hardest work to move, right? This is the stuff that really is that car I mentioned earlier: the lab bench. It's totally variable. It's really different. It's not just doing the same experiment over and over again like you would in a traditional CRO.
But if you remember my slide, it's where 95% of the spending is going at all of our customers. They spend a bit with us, but they mostly spend on huge internal research labs to do this kind of work. And so we want to replace the manual lab bench, and migrating the work from our solutions business onto Nebula is a really critical demonstration. So I'm excited about the progress there, and we're trying to share that publicly. If you go to the next slide, one of the best things we do is bring people through -- show them the lab, let them talk to our scientists, see how scientists are submitting new protocols every day.
And this has been really exciting -- we've had, I don't know, 3 heads of pharma or ag R&D come through to visit just this year to see the system. If you just want to visit, there's the link; you really should come by. But I think Nebula, and our services on top of it, is a truly unique asset to demonstrate what we think fundamentally is a better way to do biotech R&D. We would love to get it in at every company out there and replace their benches. So if you go to the next slide, that is the world that I want to see.
And so please, if you're interested, you can e-mail me at jason@ginkgobioworks.com. Happy to follow up and happy to take your questions now. Thank you.
Daniel Marshall: [Operator Instructions] We have one to start off submitted from Brendan at TD. We got it over e-mail. He has 2 questions. So the first one is, how should we think about the potential impact to revenues this year from the AWS and Benchling announcements? How have the launches gone thus far? And what is baked into your assumptions for the rest of 2026 for these new platforms?
Jason Kelly: Yes, I can take that one. So yes, we talked about AWS and Benchling; the other one in that same category is the Tamarind Bio partnership. I'm super excited about this -- I mean, this is the first time I've seen this sort of cloud layer talking directly to labs as a sales channel. So I'm excited to see where it goes. It is definitely new, right? So we're not seeing a flood of inbound there yet, but we are seeing some people reaching out to us because of the channel, so that's exciting. I'm most excited that it's starting around antibodies, right?
Because there's naturally a number of these AI models associated with antibodies, and there are a few different providers that will do these antibody services for you. But what I'm most excited about is that with our cloud lab, we're not limited to testing antibody binding, right? If you look at the, I don't know, 8 or 9, 10 protocols we've posted -- we're posting a new one every week -- it's a pretty wide variety of stuff. We're doing mass spec, metabolomics, all kinds of things. And so you can come and ask for a protocol in Cloud Lab, and we'll add it.
I'd love for that to turn into a channel straight from an electronic lab notebook or whatever, where a scientist says: this is the protocol I want, price it. You get a price back from cloud.ginkgo.bio and then you go run your experiment. That feels a lot closer to AWS, and to what we saw be successful with cloud compute, than where these are today, which is a much narrower lane around antibodies. I think that's an exciting place to start, but I am super excited to fan that out.
I think it could then become really quite an interesting channel -- something that scientists just don't have access to today. At the end of the day, you can't get custom stuff done. So that's what I am most excited about there.
Daniel Marshall: Cool. All right. Next question from Brendan: what are you hearing on data points, and which of Ginkgo's collective AI-driven offerings are especially attractive for customers, as biotech and pharma companies continue to roll out their own AI capabilities? In other words, what kind of demand dynamics are you seeing here? And are there any potential revenue funnel unlocks we should watch for over the coming quarters from this part of the business?
Jason Kelly: Yes. So I've been super -- I mean, we launched data points almost 1.5 years ago now, and to have 10 of the top pharma companies as customers is really exciting. I think the revenue unlock is just repeat business from those customers. We are starting to see that: what started as pilot projects, data generation projects, is now repeating, because you are seeing people trying to build in-house models. Now remember, these are not reasoning models -- these are not in-house versions of Claude or Codex or Opus or GPT-5 or whatever. They are models trained on biological data, so they're much more specialized.
And so I do think it makes sense that in this field you're going to see a lot of people having their own data sets and their own models that are sort of tuned-up versions, maybe, of various protein models. That's not going to be uncommon at all -- much more common than I think you'll see in the reasoning model and coding space, because these things are very different and people have different data sets. And so I'm sort of hopeful that as people are building these models, we'll keep seeing that repeat demand as they're like, okay, I found one.
I like what I'm seeing in terms of return on data and performance of my internal model -- give me more data. And so that's the revenue unlock. And the more we see of that, I think we become sort of a default provider. That's certainly what happened with Scale and other places in the early days of image models and then language models: when people saw, oh, I'm seeing performance increase with more data, they turned around and bought more data. That's what we're going to be watching as these protein models, and other types of models, come out in the future. I think that's the lane for data points.
Daniel Marshall: Cool. Sort of on the theme of AI, we have someone on X who asked us a question -- I think this is based on our project with OpenAI. How much efficiency improvement came from using GPT-5.5? Any idea how much space is left for improvement? Will this be a transitional factor?
Jason Kelly: Yes. So just to remind everyone, we announced this project back in February with OpenAI -- our first project with them -- where we had GPT... it's actually not 5.5, it's 5. We started much earlier, and that was the model that was out, and we kept the same model through the whole thing for scientific-paper purposes. And we were able to show, over a series of 6 rounds of running the model -- with 100 384-well plates designed by GPT-5 per round -- a 40% improvement over the state-of-the-art in the scientific goal we were trying to achieve.
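For a sense of scale, the figures quoted here multiply out as follows. This is simple arithmetic on the numbers in the call; the well count assumes every well on every plate was used, which the call doesn't state:

```python
rounds = 6             # rounds of model-designed experiments
plates_per_round = 100 # 384-well plates designed by GPT-5 per round
wells_per_plate = 384  # standard 384-well plate

total_plates = rounds * plates_per_round
total_wells = total_plates * wells_per_plate
print(total_plates, total_wells)  # 600 230400
```

That is, on the order of 600 plates and a few hundred thousand individual wells across the campaign -- the kind of volume that motivates running the loop on an autonomous lab rather than at a bench.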
I think there are really interesting questions. One, how much further could you push that -- what do diminishing returns actually look like in some of these scientific areas? Can the model have breakthrough ideas that create really new ways of doing this? TBD. And then, as the models have gotten better -- would 5.5 be better than what we got with 5, right? I think that's all going to be exciting stuff to test. So we're excited to do more with OpenAI, and we're planning to. I think this is open terrain in terms of how good the reasoning models can be at experimental design and experimental analysis.
Those are the 2 things it's really doing. It's like: here's an experiment I want to run -- cloud lab, autonomous lab, give me back the data, the results of the experiments I just designed -- and then I'm going to analyze them and design more experiments. Like I say, I think it's really exciting to watch what it's going to be capable of there. It's a new way to do science. It really is.
And I won't belabor this too much, but I think access to a model like that, plus an autonomous lab, can let an individual scientist operate closer to how a principal investigator of an academic lab, or the head of a drug discovery group with a lab of 8 people or 30 people, operates -- assigning hypotheses to different people and pursuing them over time.
An individual could do that for probably close to what it currently costs to have them at a lab bench -- the utility costs and everything else, the low utilization of equipment. They could push out 5 agents on top of an autonomous lab to go pursue a bunch of experiments. That is really exciting if it works. I think it fundamentally changes the rate at which we can do science. That's why you see the Genesis mission in the U.S. investing in this sort of stuff -- their goal is to 2x the output of U.S. science, and this is a way to do it.
And our science-based industries, of which pharma is the biggest, will be completely changed by this if you can 2 or 3x the rate. No question about it.
Daniel Marshall: All right. Our next question is really a bundle of questions from DK, who's writing from South Korea. And these questions are all about how the move on to Nebula, our autonomous lab has sort of changed the science that we're doing. So the questions are, how does the use of Ginkgo's automated lab affect overall costs? Are there meaningful differences in speed, for example, turnaround time for experiments? And have you observed improvements in success rates, reproducibility or scalability since moving to the autonomous lab?
Jason Kelly: Yes. So on cost, I tried to touch on this a little bit in the talk. But the clear ROI -- not just for us, but for any one of our customers looking at an autonomous lab -- is about a threefold reduction in space utilization compared to a manual lab and a fourfold increase in lab time, in other words, the amount of time the lab is being used to do lab work: from 40 hours a week to a 168-hour, 24/7 week. Those improvements are where the cost reduction is going to come from.
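Back-of-the-envelope, those two figures compound. This is illustrative arithmetic on the numbers cited here, not company guidance:

```python
manual_hours_per_week = 40
autonomous_hours_per_week = 24 * 7  # 168: the lab runs around the clock

time_gain = autonomous_hours_per_week / manual_hours_per_week
space_gain = 3.0  # ~threefold reduction in space utilization

# Rough combined lab-work capacity per unit of floor space:
combined_gain = time_gain * space_gain
print(round(time_gain, 1), round(combined_gain, 1))  # 4.2 12.6
```

So the "fourfold" lab-time figure is really 168/40, about 4.2x, and stacking it with the threefold space reduction implies roughly an order of magnitude more lab work per square foot -- if both gains are fully realized, which is the optimistic case.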
And that is a huge amount, because people time and space are really the 2 big things we spend money on in research. On the speed front, it's interesting: an individual protocol doesn't necessarily get shorter than it would at the bench. You can imagine ways to do that in the future by rebuilding protocols differently, but the first thing scientists are going to do is just take work they're doing at the bench and move it onto the autonomous lab. And in that world, it does not need to get faster in terms of end-to-end time for the protocol.
However, in practice it can get faster, because you can start a protocol at 4:00 p.m. -- where you never would have planned to spend the next 7 hours in the lab -- kick it off, and have the thing run overnight. So you took an experiment you would have started tomorrow at 10:00 a.m., started it at 4:00 p.m. instead, and have the results by tomorrow at 8:00 a.m. or 10:00 a.m. That can shave a whole day off. So I think you will actually see a massive speed-up, because scientists will start taking advantage of the 4x more lab time they have available every week.
So if they plan it right, in theory, you can see a fourfold improvement in a lot of these timelines, depending on how serialized your experiments need to be. I think that's really exciting on our side -- I really like that. And then on what I would call the quality of the experiments: I think reproducibility is inherently advantaged by automation, and that's mainly to do with the audit trail. If an instrument errors, if a liquid handler makes a mistake, these are all tracked.
So mistakes that you don't catch at the manual lab bench, you catch on the autonomous lab. If you see a really surprising result, you might go back, look at your experiment and say, oh, I see what I did there -- I designed this experiment in a way that was a little silly, and that's actually what's giving me this result -- as opposed to assuming you did the experiment you wanted to do and that was the origin of this amazing result. I think that's a common thing that can happen.
For no nefarious reason on the part of scientists at the bench. And so I think you will see a big improvement in reproducibility. And then the other thing that was brought up there was throughput, and the throughput story is the same. I think people are surprised when they go to cloud.ginkgo.bio -- which I encourage people to do -- and type in a protocol and see how much it costs. Because I'm basically pricing that protocol based on reagent use and equipment time, with a markup on that. And it is not the insane cost that you have when you have a whole team doing this work at the bench. It's just not.
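The pricing logic described here -- reagent use plus equipment time, with a markup -- is simple enough to sketch. The function name, rates, and markup below are hypothetical illustrations, not Ginkgo's actual numbers:

```python
def estimate_protocol_price(reagent_cost, equipment_hours,
                            hourly_rate=25.0, markup=1.3):
    """Cost-plus pricing sketch for a cloud-lab protocol.

    All rates here are made-up illustrations, not Ginkgo's pricing:
    (reagents + equipment time at an hourly rate) times a markup.
    """
    return (reagent_cost + equipment_hours * hourly_rate) * markup

# E.g. a short protocol: $10 of reagents, 1 hour of equipment time.
print(estimate_protocol_price(10.0, 1.0))  # 45.5
```

The point of the cost-plus structure is that per-sample price tracks marginal reagent and machine time rather than the fully loaded cost of a staffed manual lab.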
So if scientists really understood just how low-cost each sample in an experiment could be -- and that to do many more, they just hit a button rather than slave in the lab for 3 days doing 1,000 experiments -- they're going to just order those 1,000 experiments. And so I think you will see an explosion in the amount of data. This is 100% what happened in every other field that's ever been automated, right? It's like the beginnings of the automation of computation, right?
When we went from slide rules to automated computation, there was an explosion in the amount of compute used, and a massive increase in the return on investment from what people who understood how to design a computation could do. And that's what I want to do for scientists, for drug discovery leads, when they have access to an autonomous lab: the ROI and the throughput they can get, compared to manual labs -- it's just going to be no comparison. So yes, I think you're going to see big gains on all 3. And the cool thing is we're going to keep showing this on Nebula.
We just had a head of R&D here today, and we went through with his team and showed all the gains. Yes, it's really exciting right now.
Daniel Marshall: So I think we will end on a note kind of related to that, which is: you mentioned on the call, and you've mentioned in other places, that you're trying to get to 100 RACs. When do you actually expect to get there?
Jason Kelly: Yes. So it's been pretty fun -- we'll have to put behind-the-scenes videos up, but we have been installing RACs for the last 3 weeks here at Ginkgo. They just showed up on trucks -- RACs built by our team in Emeryville. We just added the additional 50, and they are all fully connected now in the lab. I took a tour of it; it's insane. And we can run them now: the original system is running, the new 50 are running, and there's a connection between the two that is going to get turned on, I think, on the 14th -- next week. So it is imminent.
So I'm really excited to see it all come together, but we already have it up and running now as 2 separate loops. To put in 50 new pieces of equipment in 3 weeks -- these are just things that no one's ever done in laboratory automation. So I do think we are doing a very unique thing here at Ginkgo. That's the bet. It's certainly what I'm leaning into at the company, it's what we're investing our capital into, and it's where our new customers are coming from. So if you like that idea, I think this is a really exciting time to get involved with the company in any way.
But yes, we're going to be at 100 next week -- 103 or 105; I've got to count them, yes.
Daniel Marshall: All right. And if you want to follow us on that journey, you can go to X, LinkedIn or Instagram and keep watching. We'll have a lot of content coming about the unveiling of the new full system. And as always, if you have questions, you can reach out to us at investors@ginkgobioworks.com. Thanks so much, everyone -- until next time.
Jason Kelly: Thanks, everybody.
This article is a transcript of this conference call produced for The Motley Fool. While we strive for our Foolish Best, there may be errors, omissions, or inaccuracies in this transcript. Parts of this article were created using Large Language Models (LLMs) based on The Motley Fool's insights and investing approach. It has been reviewed by our AI quality control systems. Since LLMs cannot (currently) own stocks, it has no positions in any of the stocks mentioned. As with all our articles, The Motley Fool does not assume any responsibility for your use of this content, and we strongly encourage you to do your own research, including listening to the call yourself and reading the company's SEC filings. Please see our Terms and Conditions for additional details, including our Obligatory Capitalized Disclaimers of Liability.
The Motley Fool has no position in any of the stocks mentioned. The Motley Fool has a disclosure policy.