On December 1, 2025, NVIDIA formally announced its strategic investment in Synopsys, acquiring approximately 4.8 million shares of common stock at a price of $414.79 per share for a total of $2 billion. This transaction grants NVIDIA a roughly 2.6% stake, making it the seventh-largest shareholder. Concurrently, the two companies announced an expanded multi-year strategic partnership to deeply integrate NVIDIA’s AI hardware, CUDA, and GPU-accelerated computing with Synopsys’s Electronic Design Automation (EDA) tools. The collaboration aims to jointly develop next-generation tools powered by AI for chip design, engineering simulation, and digital twins, drastically accelerating complex chip and system-level design workflows. This move is widely viewed as a significant step by NVIDIA to vertically integrate the chip design supply chain and solidify its AI ecosystem hegemony.
Following the announcement, Synopsys’s stock price rallied, spiking more than 10% in pre-market trading before closing up approximately 5% in the December 1 regular session. The stock has now largely moved past its trough of $387, triggered by disappointing financial guidance earlier in the year, though it still retains roughly 40% upside to its year-high of $645.
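A quick sanity check on the price levels cited above. The trough, year-high, and deal price come from the article; the "implied current price" is simply back-solved from the stated ~40% upside and is an illustration, not a quote.

```python
trough = 387.0        # post-guidance low cited in the article
year_high = 645.0     # 2025 high cited in the article
deal_price = 414.79   # NVIDIA's subscription price per share

def upside(current: float, target: float) -> float:
    """Percentage gain required to move from `current` to `target`."""
    return (target / current - 1) * 100

# ~40% upside to the year-high implies a current price near $461:
implied_price = year_high / 1.40
print(f"implied current price: ${implied_price:.2f}")
print(f"trough to year-high:   {upside(trough, year_high):.1f}%")   # ~66.7%
print(f"deal price to high:    {upside(deal_price, year_high):.1f}%")  # ~55.5%
```

Note that NVIDIA’s own entry at $414.79 already sits roughly 55% below the year-high, which underlines why the subscription was struck well above the $387 trough but still looked opportunistic.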
NVIDIA’s investment is not a typical open-market purchase but a direct strategic subscription agreement with Synopsys, effectively an "official certification" bestowed by Jensen Huang himself. Crucially, over 90% of the design flow for NVIDIA’s flagship GPUs—including Blackwell, Rubin, and Vera—runs on Synopsys's tools. Huang is essentially investing in the "design parent" of his own chips.
Synopsys's stock has significantly underperformed the S&P 500 in 2025, falling nearly 10% year-to-date, pressured by a confluence of factors: integration concerns post-Ansys acquisition, cyclical volatility in the EDA sector, and the collective market panic over "AI capital expenditure peaking." This $2 billion investment directly challenges the "AI Capex is topping out" narrative. NVIDIA's action is a powerful declaration that the demand for AI-driven chip design and simulation tools, and by extension, GPU acceleration, is just beginning its explosive growth phase.
Synopsys's most valuable asset has never been its IP cores, but its comprehensive chip design toolchain, covering the full spectrum from front-end logic synthesis and physical verification to multi-physics simulation enhanced by Ansys. Traditionally, completing sign-off for an advanced-process chip, or a full digital-twin simulation of an automobile, could require tens of thousands of CPU cores running for several weeks.
Through this partnership, NVIDIA will deeply embed its CUDA and full-stack GPU acceleration libraries across the entire Synopsys product line, with performance improvements on individual tasks expected to range from 5x to 50x. Moving forward, the "optimal performance path" in Synopsys's tools will default to NVIDIA GPUs, while the CPU-only path becomes comparatively slower and more costly.
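To make the 5x–50x range concrete, here is the wall-clock impact applied to a multi-week CPU-only run. The three-week baseline is a hypothetical example consistent with the "several weeks" figure above, not a number from the article.

```python
cpu_weeks = 3.0  # hypothetical baseline for a full CPU-only sign-off run

for speedup in (5, 10, 50):
    days = cpu_weeks * 7 / speedup
    print(f"{speedup:>2}x speedup: {days:.1f} days")
```

Even the low end of the range turns a three-week run into a few days, which is the dynamic behind customers paying a premium for acceleration discussed below.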
This means the NVIDIA GPU is becoming the "New Moore’s Law" for Synopsys. For the past two decades, Synopsys's performance gains were driven by process node iterations; for the next two decades, they will be powered by GPU acceleration multiples. Customers who reject GPU acceleration risk design cycles 3x–10x slower than their competitors', ultimately leading to market irrelevance. Synopsys thus gains pricing power over chip design efficiency. This dominant lead in next-generation hyperscale digital logic synthesis and verification is the core technical logic behind Huang’s heavy investment in Synopsys over its rival, Cadence (which maintains strong barriers in analog/mixed-signal, advanced packaging, and custom circuits, ensuring continued competition in those arenas).
Current data from TradingKey shows Synopsys trading at an estimated P/E ratio of ~34x, placing it in a historically undervalued range and significantly below its peer, Cadence. As GPU acceleration becomes fully deployed, the traditional revenue structure—approximately 70% from license and time-based access, and 30% from maintenance services—will be disrupted. A new, high-margin revenue stream from "GPU Acceleration Cloud Services Fees" will become a powerful growth engine. Customers will be willing to pay an additional 30%–100% to shorten a sign-off cycle from two weeks to two days. Given the extremely high gross margin of this incremental business, the market is likely to re-anchor Synopsys to a rational P/E range of 45x–55x.
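The re-rating claim can be reduced to simple arithmetic. Using the multiples quoted above (~34x current, 45x–55x target) and holding earnings constant, the implied price change is just the ratio of the multiples:

```python
current_pe = 34.0          # estimated P/E cited from TradingKey
target_range = (45.0, 55.0)  # the article's proposed "rational" range

for target_pe in target_range:
    implied_gain = (target_pe / current_pe - 1) * 100
    print(f"re-rating to {target_pe:.0f}x implies ~{implied_gain:.0f}% upside")
```

The resulting ~32%–62% band brackets the ~40% upside to the year-high cited earlier, so the multiple-expansion thesis and the price-target thesis are at least internally consistent.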

NVIDIA's architectures—Blackwell, Rubin, Vera, and beyond—are themselves designed using EDA software. As chip processes approach physical limits (2nm, 1.6nm), the complexity of design increases exponentially. The investment in Synopsys allows NVIDIA to dictate that Synopsys’s tools prioritize deep, fundamental optimization for NVIDIA's CUDA architecture and hardware. This is expected to reduce the R&D cycle and improve the yield of NVIDIA's next-generation chips.
Currently, NVIDIA’s GPU demand primarily stems from cloud vendors' large model training/inference and consumer gaming. However, high-end manufacturing sectors—including semiconductors, automotive, aerospace, 5G base stations, batteries, wind turbine blades, ships, and high-speed rail—have traditionally relied on CPU clusters for weeks or even months of computation. This is an independent market segment valued well over a trillion dollars.
This partnership effectively presses the GPU acceleration "start button" for the entire engineering design industry. A rough estimate suggests that once the global top 200 manufacturers fully migrate critical sign-off, CFD (Computational Fluid Dynamics), FEA (Finite Element Analysis), and electromagnetic simulations to GPUs, it will create new compute demand equivalent to 500,000 to 800,000 H100/Blackwell-class GPUs—an increment potentially larger than the total new demand from all cloud vendors combined. From now on, GPUs will serve not only large models but every top manufacturer striving to build a more advanced product.
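A back-of-envelope template for the kind of estimate quoted above. The per-company cluster sizes here are hypothetical placeholders chosen only to show how such a range could be constructed; the article states only the 500,000–800,000 result.

```python
n_manufacturers = 200            # "global top 200 manufacturers" per the article
gpus_per_company_low = 2_500     # hypothetical: modest simulation cluster
gpus_per_company_high = 4_000    # hypothetical: heavy CFD/FEA/sign-off user

low = n_manufacturers * gpus_per_company_low
high = n_manufacturers * gpus_per_company_high
print(f"implied demand: {low:,} to {high:,} H100/Blackwell-class GPUs")
```

The point of the exercise is that even a few thousand accelerators per large manufacturer, multiplied across the top 200, lands in the hundreds of thousands of units.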
In 2025, NVIDIA has strategically invested in, or forged deep ties with, key players such as OpenAI, Anthropic, xAI, Intel, and Synopsys. These outlays are never purely for financial return; they aim to weave an NVIDIA network across every link of the AI value chain: from chip design and wafer fabrication, through GPU hardware and cloud computing, to large model training, robotics, autonomous driving, and enterprise applications.
In the future, regardless of who designs a chip, trains a model, or engineers a plane or car, they will inevitably encounter and rely on NVIDIA GPU acceleration and the compatible manufacturing and computing ecosystem. A "GPU Acceleration Tax" must be paid at every layer. NVIDIA’s moat has evolved from mere "technological leadership" to "industry rule-setting." It is no longer selling a product; it is selling the foundational logic of the entire era.
NVIDIA’s $2 billion investment in Synopsys is a critical step in building its AI empire's closed loop. For Synopsys: NVIDIA's endorsement confirms its status as a core AI chip infrastructure provider. The investment transforms GPU acceleration into the EDA tool’s "New Moore’s Law," compelling customers to pay high-margin "acceleration subscription fees," thereby restructuring Synopsys’s valuation. For NVIDIA: The move not only accelerates its internal GPU iteration but also expands the GPU demand pool into a trillion-dollar new industrial design blue ocean. This completes the end-to-end lock-in of the AI value chain, from chip design to final application, firmly establishing NVIDIA as the "Rule Maker" of the AI industry.