Jowi Morales@tomshardware.com
//
NVIDIA is partnering with Germany and Deutsche Telekom to build Europe's first industrial AI cloud, a project hailed as one of the most ambitious tech endeavors on the continent. This initiative aims to establish Germany as a leader in AI manufacturing and innovation. NVIDIA's CEO, Jensen Huang, met with Chancellor Friedrich Merz to discuss the new partnerships that will drive breakthroughs on this AI cloud.
This "AI factory," located in Germany, will provide European industrial leaders with the computational power needed to revolutionize manufacturing processes, from design and engineering to simulation and robotics. The goal is to empower European industrial players to lead in simulation-first, AI-driven manufacturing. Deutsche Telekom's CEO, Timotheus Höttges, emphasized the urgency of seizing AI opportunities to revolutionize the industry and secure a leading position in the global technology competition. The first phase of the project will involve deploying 10,000 NVIDIA Blackwell GPUs across various high-performance systems, making it Germany's largest AI deployment. The infrastructure will also feature NVIDIA networking and AI software. NEURA Robotics, a German firm specializing in cognitive robotics, plans to use these resources to power its Neuraverse, a network through which robots can learn from each other. The partnership between NVIDIA and Germany marks a critical step toward technological sovereignty in Europe and accelerated AI development across industries.
Jim McGregor,@Tirias Research
//
Advanced Micro Devices Inc. has launched its new AMD Instinct MI350 Series accelerators, designed to accelerate AI data centers and outperform Nvidia Corp.’s Blackwell B200 in specific tasks. The MI350 series includes the top-end MI355X, a liquid-cooled card, along with the MI350X which uses fans instead of liquid cooling. These new flagship data center graphics cards boast an impressive 185 billion transistors and are based on a three-dimensional, 10-chiplet design to enhance AI compute and inferencing capabilities.
The MI350 Series introduces significant performance improvements, achieving four times faster AI compute and 35 times faster inferencing compared with previous generations. These accelerators ship with 288 gigabytes of HBM3E memory, which features a three-dimensional design in which layers of circuits are stacked atop one another. According to AMD, the MI350 series offers 60% more memory than Nvidia's flagship Blackwell B200 graphics cards. Additionally, the MI350 chips can process 8-bit floating-point numbers 10% faster and 4-bit floating-point numbers more than twice as fast as the B200. AMD is also rolling out its ROCm 7 software development platform for the Instinct accelerators, along with the Helios Rack AI platform. "With flexible air-cooled and direct liquid-cooled configurations, the Instinct MI350 Series is optimized for seamless deployment, supporting up to 64 GPUs in an air-cooled rack and up to 128 in a direct liquid-cooled rack, scaling up to 2.6 exaFLOPS of FP4 performance," stated Vamsi Boppana, senior vice president of AMD's Artificial Intelligence Group.
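The headline figures above can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, noting that the B200's memory capacity is inferred here from AMD's own "60% more" claim rather than stated directly in the article:

```python
# Back-of-envelope check of the memory and throughput figures quoted above.
# Assumption: B200 capacity is implied by AMD's "60% more" claim, not
# given explicitly in the article.

MI350_MEMORY_GB = 288
B200_MEMORY_GB = MI350_MEMORY_GB / 1.60   # 288 / 1.6 -> 180 GB implied

RACK_FP4_EXAFLOPS = 2.6       # quoted figure for a liquid-cooled rack
GPUS_PER_LIQUID_RACK = 128    # quoted GPU count for that rack

# Per-GPU FP4 throughput implied by the rack-level figure.
per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_LIQUID_RACK

print(f"Implied B200 memory: {B200_MEMORY_GB:.0f} GB")
print(f"Implied FP4 per GPU: {per_gpu_pflops:.1f} PFLOPS")
```

The numbers are internally consistent: 288 GB is exactly 1.6 times an implied 180 GB, and 2.6 exaFLOPS spread over 128 GPUs works out to roughly 20 PFLOPS of FP4 per accelerator.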
@insidehpc.com
//
MiTAC Computing Technology and AMD are strengthening their partnership to deliver cutting-edge solutions for AI, HPC, cloud-native, and enterprise applications. MiTAC will showcase this collaboration at COMPUTEX 2025, highlighting their shared vision for scalable and energy-efficient technologies. This partnership, which began in 2002, leverages AMD EPYC processors and Instinct GPUs to meet the evolving demands of modern data centers. Rick Hwang, President of MiTAC Computing Technology, emphasized their excitement in advancing server solutions powered by AMD's latest processors and GPUs, stating that it's key to unlocking new capabilities for their global customer base in AI and HPC infrastructure.
Specifically, MiTAC and AMD are developing next-generation server platforms. One notable product is an 8U server equipped with dual AMD EPYC 9005 Series processors and support for up to 8 AMD Instinct MI325X GPUs, offering exceptional compute density and up to 6TB of DDR5-6400 memory, ideal for large-scale AI model training and scientific applications. They are also offering a 2U dual-socket GPU server that supports up to four dual-slot GPUs, with 24 DDR5-6400 RDIMM slots and tool-less NVMe storage carriers, delivering high-speed throughput and flexibility for deep learning and HPC environments. Meanwhile, Nvidia is preparing to compete with Huawei in the Chinese AI chip market by releasing a budget-friendly AI chip. This strategy is driven by the need to maintain relevance in the face of growing domestic competition and to navigate export restrictions. The new chip, priced between $6,500 and $8,000, represents a significant cost reduction compared to the previously banned H20 model. This reduction involves trade-offs, such as using Nvidia's RTX Pro 6000D foundation with standard GDDR7 memory and foregoing Taiwan Semiconductor's advanced CoWoS packaging technology.
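The 2U server's 24 DDR5-6400 RDIMM slots suggest a rough ceiling on aggregate memory bandwidth. A minimal sketch, assuming one RDIMM per 64-bit channel (the article does not give the channel topology, so this is an assumption):

```python
# Rough peak aggregate memory bandwidth for 24 DDR5-6400 RDIMM slots,
# assuming one DIMM per 64-bit channel (an assumption; the article does
# not specify the channel layout).

MT_PER_S = 6400          # DDR5-6400 transfer rate, megatransfers/s
BYTES_PER_TRANSFER = 8   # 64-bit data bus per channel
CHANNELS = 24            # one assumed channel per RDIMM slot

per_channel_gb_s = MT_PER_S * BYTES_PER_TRANSFER / 1000   # 51.2 GB/s
aggregate_gb_s = per_channel_gb_s * CHANNELS

print(f"Peak aggregate bandwidth: {aggregate_gb_s:.1f} GB/s")
```

Under those assumptions the platform tops out around 1.2 TB/s of theoretical memory bandwidth, which is the kind of headroom the "high-speed throughput" claim is pointing at.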