Nvidia’s investment in specialized semiconductor vendor Marvell Technology signals that the AI hardware and software vendor is paying attention as the market shifts toward specialized AI chips and away from general-purpose GPUs.
As part of its investment, Nvidia will integrate Marvell’s specialized XPU chips into its AI Factory environment, enabling customers to build their own AI infrastructure. Marvell’s networking tools will also be compatible with the Nvidia NVLink Fusion platform, which enables hyperscalers, cloud providers and ASIC designers to integrate their own specialized CPUs and XPUs with Nvidia’s NVLink interconnect and GPU technology. Nvidia customers can also now combine Marvell’s custom chips with Nvidia’s GPUs, CPUs and networking stacks.
With the partnership, the two vendors said they can transform 5G and 6G telecommunication networks into AI-ready infrastructure using the Nvidia Aerial AI-RAN (radio access network) platform, a GPU-accelerated system that supports AI inferencing with mobile data.
The partnership is an indication that Nvidia is aware that vendors such as AWS, Google and Microsoft are designing their own AI chips to reduce their dependence on Nvidia’s GPUs, said Brendan Burke, an analyst at Futurum Group. ChatGPT maker OpenAI has also partnered with AI chip startup Cerebras to avoid being completely dependent on Nvidia, despite its $100 billion compute deal with Nvidia. And because Marvell Technology offers its own custom chips and networking products, it could also stand as a competitor to Nvidia.
Filling in the Gaps
The deal thus enables Nvidia to fill gaps in its AI chip and networking offerings, Burke said.
“Nvidia customers are demanding optical interconnect options, and this is a part of the supply chain Nvidia does not control already,” Burke said. Optical interconnects are systems that use light or photons instead of electrical signals or electrons to communicate data between chips and data centers.
“Marvell’s optical expertise is a driver of customer interest in its XPU designs, positioning Nvidia to benefit from optical scale-out across the data center,” Burke said.
For Marvell, this allows it to “integrate with NVLink to support customers like AWS that have built the fabric into their Trainium4 roadmap,” Burke continued, referring to AWS’s next generation of custom AI accelerators for high-performance training and inference of large AI models.
He added that this also appears to be a strategic investment for Nvidia, since it follows Marvell’s acquisition earlier this month of Celestial AI, a silicon photonics vendor.
However, while the partnership bolsters Marvell’s position in AI networking, the vendor competes with companies such as Broadcom, which offers Ultra Ethernet, a more open alternative to Nvidia NVLink. Customers who value openness might lean toward Broadcom rather than Marvell.
Also, Burke noted, the partnership won’t stop Marvell from building competitive networking products, so it could backfire on Nvidia. “If Nvidia can take a lead in scale-up photonic interconnect with Celestial’s optical engines, it will be worth the risk.”