NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance

At Computex 2025, CEO Jensen Huang revealed a new program that opens NVIDIA’s prized NVLink interconnect tech to work with non-NVIDIA CPUs and accelerators. It’s called NVLink Fusion—and it cements NVIDIA’s role as the central nervous system of AI infrastructure, even in systems built with someone else’s silicon.

Key Points:

  • NVIDIA's new NVLink Fusion allows non-NVIDIA CPUs and GPUs to integrate with its AI infrastructure
  • The move keeps NVIDIA at the center of AI data centers even when customers use rival chips
  • MediaTek, Marvell, Fujitsu, and Qualcomm are among early partners

The new program allows customers to use CPUs, GPUs, and accelerators from other vendors—think MediaTek, Marvell, or even Qualcomm—alongside NVIDIA GPUs, all tied together using NVIDIA’s high-bandwidth NVLink interconnect. This would have been unthinkable a few years ago, when NVLink was tightly walled off for NVIDIA-only use. But that’s exactly the point. NVIDIA is in such a dominant position in the AI hardware market that it can afford to open the gates—and still run the kingdom.

For NVIDIA, NVLink Fusion represents a masterstroke of strategic positioning. Cloud giants like Google, Microsoft, and Amazon have been developing custom silicon for specific AI workloads, but rather than fight this trend, NVIDIA is co-opting it.

“NVLink Fusion is so that you can build semi-custom AI infrastructure, not just semi-custom chips,” Huang told the audience at Asia's biggest electronics conference.

While it may be a much-welcomed olive branch, make no mistake: every “semi-custom” system built this way runs through NVIDIA’s ecosystem. Its GPUs. Its interconnects. Its software stack. The connective tissue belongs to NVIDIA, no matter whose processors are involved.

By allowing these companies to integrate their custom chips with NVIDIA's technology, NVIDIA ensures it remains essential to their AI strategies while potentially capturing additional revenue streams.

And the list of companies eager to plug into it is already long: Alchip, Synopsys, Cadence, Astera Labs, MediaTek. Even chipmakers like Fujitsu and Qualcomm are signing on to connect their CPUs to NVIDIA’s GPUs via NVLink Fusion.

NVLink Fusion is just one part of NVIDIA's broader strategy to maintain its dominance in AI compute. During his keynote, Huang also announced the company's next-generation Grace Blackwell systems, the "GB300," which will offer higher overall system performance when released in the third quarter of this year.

Additionally, NVIDIA unveiled a new AI platform called DGX Cloud Lepton, which includes a compute marketplace designed to connect AI developers with tens of thousands of GPUs from a global network of cloud providers. The service aims to address the ongoing supply constraints for high-performance GPU resources that have hindered AI development.

As the AI computing landscape continues to evolve, NVIDIA's NVLink Fusion program suggests the company is embracing a pragmatic approach: if you can't beat them, connect them — and stay at the center of it all.

Chris McKay is the founder and chief editor of Maginative. His thought leadership in AI literacy and strategic AI adoption has been recognized by top academic institutions, media, and global brands.
