News from the AI & ML world
Joe DeLaere@NVIDIA Technical Blog
NVIDIA has unveiled NVLink Fusion, a technology that extends its high-speed NVLink interconnect to custom CPUs and ASICs. The move allows customers to integrate non-NVIDIA CPUs or accelerators with NVIDIA GPUs in their rack-scale setups, fostering heterogeneous computing environments tailored to diverse AI workloads. It opens the door to semi-custom AI infrastructure built on NVIDIA's NVLink ecosystem, letting hyperscalers leverage innovations in NVLink, NVIDIA NVLink-C2C, the NVIDIA Grace CPU, NVIDIA GPUs, NVIDIA Co-Packaged Optics networking, rack-scale architecture, and NVIDIA Mission Control software.
NVLink Fusion enables users to deliver top performance scaling with semi-custom ASICs or CPUs. With hyperscalers already deploying full NVIDIA rack solutions, the expansion caters to growing demand for specialized AI factories, where diverse accelerators work together at rack scale with maximal bandwidth and minimal latency to serve the largest number of users in the most power-efficient way. The advantage of using NVLink for CPU-to-GPU communication is bandwidth: it offers 14x the bandwidth of PCIe 5.0 (128 GB/s). The technology will be offered in two configurations: the first connects custom CPUs to NVIDIA GPUs, and the second pairs NVIDIA Grace CPUs with custom accelerators.
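As a rough sanity check on the bandwidth figures above, the short Python sketch below works out the ratio using the 128 GB/s PCIe 5.0 number from the paragraph and NVIDIA's published 1.8 TB/s per-GPU figure for fifth-generation NVLink; the variable names and the 72-GPU rack example are illustrative assumptions, not part of any NVIDIA API.

```python
# Back-of-the-envelope comparison of the interconnect bandwidths cited above.
# 128 GB/s is the PCIe 5.0 x16 figure from the article; 1800 GB/s (1.8 TB/s) per GPU
# is NVIDIA's published fifth-generation NVLink number. Names here are illustrative.

PCIE5_X16_GBPS = 128          # PCIe 5.0 x16 link, GB/s
NVLINK5_PER_GPU_GBPS = 1800   # fifth-generation NVLink, per GPU, GB/s

ratio = NVLINK5_PER_GPU_GBPS / PCIE5_X16_GBPS
print(f"NVLink 5 vs PCIe 5.0 x16: {ratio:.1f}x")  # ~14.1x, matching the 14x claim

# Aggregate scale-up bandwidth for a hypothetical 72-GPU rack (an NVL72-class system):
rack_gpus = 72
print(f"Aggregate NVLink bandwidth: {rack_gpus * NVLINK5_PER_GPU_GBPS / 1000:.1f} TB/s")
```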
NVIDIA CEO Jensen Huang emphasized that AI is becoming a fundamental infrastructure, akin to the internet and electricity. He envisions an AI infrastructure industry worth trillions of dollars, powered by AI factories that produce valuable tokens. NVIDIA's approach involves expanding its ecosystem through partnerships and platforms like CUDA-X, which is used across a range of applications. NVLink Fusion is a crucial part of this vision, enabling the construction of semi-custom AI systems and solidifying NVIDIA's role at the center of AI development.
References :
- The Register - Software: Nvidia opens up speedy NVLink interconnect to custom CPUs, ASICs
- www.techmeme.com: Nvidia unveils NVLink Fusion, letting customers use its NVLink to pair non-Nvidia CPUs or accelerators with Nvidia's products in their own rack-scale setups (Bloomberg)
- NVIDIA Technical Blog: Integrating Semi-Custom Compute into Rack-Scale Architecture with NVIDIA NVLink Fusion
- Tom's Hardware: Nvidia announces NVLink Fusion to allow custom CPUs and AI accelerators to work with its products. Nvidia's NVLink Fusion program allows customers to use the company's key NVLink tech for their own custom rack-scale designs with non-Nvidia CPUs or accelerators in tandem with Nvidia's products.
- Maginative: NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance
- NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
- AI News | VentureBeat: Foxconn builds AI factory in partnership with Taiwan and Nvidia
- www.tomshardware.com: Nvidia teams up with Foxconn to build an AI supercomputer in Taiwan
- The Next Platform: There are many reasons why Nvidia is the hardware juggernaut of the AI revolution, and one of them, without question, is the NVLink memory sharing port that started out on its "Pascal" P100 GPU accelerators way back in 2016.
- www.nextplatform.com: Nvidia Licenses NVLink Memory Ports To CPU And Accelerator Makers
- The Register - Software: Nvidia sets up shop in Taiwan with AI supers and a factory full of ambition
- techvro.com: NVLink Fusion: Nvidia To Sell Hybrid Systems Using AI Chips
- www.networkworld.com: Nvidia opens NVLink to competitive processors
- AIwire: Nvidia’s Global Expansion: AI Factories, NVLink Fusion, AI Supercomputers, and More