News from the AI & ML world

DeeperML - #nvidia

info@thehackernews.com (The Hacker News) //
A significant security vulnerability, dubbed GPUHammer, has been demonstrated against NVIDIA GPUs, specifically targeting GDDR6 memory. Researchers from the University of Toronto have successfully executed a Rowhammer attack variant on an NVIDIA A6000 GPU, causing bit flips in the memory. This type of attack exploits the physical behavior of DRAM chips, where rapid access to one memory row can induce errors, or bit flips, in adjacent rows. While Rowhammer has been a known issue for CPUs, this marks the first successful demonstration against a discrete GPU, raising concerns about the integrity of data and computations performed on these powerful processors, especially within the burgeoning field of artificial intelligence.

The practical implications of GPUHammer are particularly alarming for machine learning models. In a proof-of-concept demonstration, researchers were able to degrade the accuracy of a deep neural network model from 80% to a mere 0.1% by inducing a single bit flip. This degradation highlights the vulnerability of AI infrastructure, which increasingly relies on GPUs for parallel processing and complex calculations. Such attacks could compromise the reliability and trustworthiness of AI systems, impacting everything from image recognition to complex decision-making processes. NVIDIA has acknowledged these findings and is urging its customers to implement specific security measures to defend against this threat.
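To get a feel for why a single flipped bit can be so destructive to a model, consider the IEEE-754 encoding of a weight: flipping the most significant exponent bit turns an ordinary value like 0.5 into a number on the order of 10^38, which then propagates through every computation that touches it. A minimal Python sketch of the effect (illustrative only; the published attack reportedly flips exponent bits in FP16 weights resident in GDDR6, not in binary32 values on a CPU):

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip a single bit in the IEEE-754 binary32 encoding of `value`."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

w = 0.5                      # a typical, well-behaved model weight
corrupted = flip_bit(w, 30)  # bit 30 = most significant exponent bit
print(w, "->", corrupted)    # 0.5 becomes ~1.7e38
```

A weight of that magnitude saturates activations downstream, which is why one well-placed flip can collapse a model's accuracy rather than merely nudging it.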

In response to the GPUHammer attack, NVIDIA is strongly recommending that customers enable system-level error-correcting code (ECC) protection on their GDDR6 GPUs. ECC is a hardware-level mechanism that detects and corrects errors in memory, and it has been shown to mitigate the demonstrated Rowhammer bit flips. NVIDIA's guidance applies to a wide range of its professional and data center GPU architectures, including Blackwell, Hopper, Ada, Ampere, and Turing. While consumer-grade GPUs may have limited ECC support, the company emphasizes that its enterprise-grade and data center solutions, many of which have ECC enabled by default, are the recommended choice for applications requiring enhanced security assurance. This proactive measure aims to protect users from data tampering and maintain the integrity of critical workloads.
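The principle behind ECC can be illustrated with the classic Hamming(7,4) code: a few extra parity bits let the hardware pinpoint and repair any single flipped bit. A toy Python sketch of the idea (real GDDR6 ECC uses wider SECDED codes over larger words, but the mechanism is the same in spirit):

```python
def hamming74_encode(nibble: int) -> int:
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1..7)."""
    d = [(nibble >> i) & 1 for i in range(4)]      # data bits d0..d3
    # codeword positions: 1=p1 2=p2 3=d0 4=p4 5=d1 6=d2 7=d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))  # bit i = position i+1

def hamming74_decode(code: int) -> int:
    """Correct up to one flipped bit, then return the 4 data bits."""
    bits = [(code >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]      # parity over positions 1,3,5,7
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]      # parity over positions 2,3,6,7
    s4 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]      # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s4                 # position of the flipped bit
    if syndrome:
        bits[syndrome - 1] ^= 1                     # correct it in place
    return bits[2] | bits[4] << 1 | bits[5] << 2 | bits[6] << 3
```

For example, `hamming74_decode(hamming74_encode(0b1011) ^ (1 << 4))` recovers `0b1011` despite the injected flip. Hardware ECC does the equivalent on every memory read, which is why it blunts Rowhammer-induced single-bit errors.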

Recommended read:
References :
  • cyberpress.org: GPUHammer: First Rowhammer Exploit Aimed at NVIDIA GPUs
  • The Hacker News: GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs
  • Talkback Resources: NVIDIA shares guidance to defend GDDR6 GPUs against Rowhammer attacks
  • BleepingComputer: NVIDIA shares guidance to defend GDDR6 GPUs against Rowhammer attacks
  • Cyber Security News: The hardware security landscape has taken a dramatic turn as researchers have, for the first time, demonstrated a successful Rowhammer attack targeting NVIDIA A6000 GPUs utilizing GDDR6 memory.
  • gbhackers.com: Researchers from the University of Toronto have unveiled the first successful Rowhammer attack on an NVIDIA GPU, specifically targeting the A6000 model equipped with GDDR6 memory.
  • gpuhammer.com: GPUHammer: Rowhammer bit flips on GPU memories, specifically on a GDDR6 memory in an NVIDIA A6000 GPU. Our attacks induce bit flips across all tested DRAM banks, despite in-DRAM defenses like TRR, using user-level CUDA code.
  • www.bleepingcomputer.com: NVIDIA shares guidance to defend GDDR6 GPUs against Rowhammer attacks

@www.nextplatform.com //
Nvidia's latest Blackwell GPUs are rapidly gaining traction in cloud deployments, signaling a significant shift in AI hardware accessibility for businesses. Amazon Web Services (AWS) has announced its first UltraServer supercomputers, pre-configured systems powered by Nvidia's Grace CPUs and the new Blackwell GPUs. These P6e-GB200 instances are available in full- and half-rack configurations and leverage advanced NVLink 5 ports to create large shared-memory compute complexes. This allows for a memory domain spanning up to 72 GPU sockets, effectively creating a massive, unified computing environment designed for intensive AI workloads.

Adding to the growing adoption, CoreWeave, a prominent AI cloud provider, has become the first to offer NVIDIA RTX PRO 6000 Blackwell GPU instances at scale. This move promises substantial performance improvements for AI applications, with reports of up to 5.6x faster LLM inference compared to previous generations. CoreWeave's commitment to early deployment of Blackwell technology, including the NVIDIA GB300 NVL72 systems, is setting new benchmarks in rack-scale performance. By combining Nvidia's cutting-edge compute with their specialized AI cloud platform, CoreWeave aims to provide a more cost-efficient yet high-performing alternative for companies developing and scaling AI applications, supporting everything from training massive language models to multimodal inference.

The widespread adoption of Nvidia's Blackwell GPUs by major cloud providers like AWS and specialized AI platforms like CoreWeave underscores the increasing demand for advanced AI infrastructure. This trend is further highlighted by Nvidia's recent milestone of becoming the world's first $4 trillion company, a testament to its leading role in the AI revolution. Moreover, countries like Indonesia are actively pursuing sovereign AI goals, partnering with companies like Nvidia, Cisco, and Indosat Ooredoo Hutchison to establish AI Centers of Excellence. These initiatives aim to foster localized AI research, develop local talent, and drive innovation, ensuring that nations can harness the power of AI for economic growth and digital independence.

Recommended read:
References :
  • AWS News Blog: Amazon announces the general availability of EC2 P6e-GB200 UltraServers, powered by NVIDIA Grace Blackwell GB200 superchips that enable up to 72 GPUs with 360 petaflops of computing power for AI training and inference at the trillion-parameter scale.
  • AIwire: CoreWeave, Inc. today announced it is the first cloud platform to make NVIDIA RTX PRO 6000 Blackwell Server Edition instances generally available.
  • The Next Platform: Sizing Up AWS “Blackwell” GPU Systems Against Prior GPUs And Trainiums

@blogs.nvidia.com //
NVIDIA is pushing the boundaries of artificial intelligence through advancements in its RTX AI platform and open-source AI models. The RTX AI platform now accelerates the performance of FLUX.1 Kontext, a groundbreaking image generation model developed by Black Forest Labs. This model allows users to guide and refine the image generation process with natural language, simplifying complex workflows that previously required multiple AI models. By optimizing FLUX.1 Kontext for NVIDIA RTX GPUs using the TensorRT software development kit, NVIDIA has enabled faster inference and reduced VRAM requirements for creators and developers.

The company is also expanding its open-source AI offerings, including the reasoning-focused Nemotron models and the Parakeet speech recognition model. Nemotron, built on top of Llama, delivers leading reasoning accuracy, while Parakeet offers very fast automatic speech recognition. These open-source tools provide enterprises with valuable resources for deploying multi-model AI strategies and leveraging the power of reasoning in real-world applications. According to Joey Conway, Senior Director of Product Management for AI Models at NVIDIA, reasoning is becoming the key differentiator in AI.

In addition to software advancements, NVIDIA is enhancing AI supercomputing capabilities through collaborations with partners like CoreWeave and Dell. CoreWeave has deployed the first Dell GB300 cluster, utilizing the Grace Blackwell Ultra Superchip. Each rack delivers 1.1 ExaFLOPS of AI inference performance and 0.36 ExaFLOPS of FP8 training performance, along with 20 TB of HBM3E and 40 TB of total RAM. This deployment marks a significant step forward in AI infrastructure, enabling unprecedented speed and scale for AI workloads.
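For a sense of scale, the quoted rack-level figures can be broken down per GPU, assuming the NVL72 layout of 72 GPUs per rack (the per-rack numbers are from the article; the division is our own back-of-the-envelope arithmetic):

```python
# Rough per-GPU breakdown of the quoted GB300 rack figures, assuming
# 72 GPUs per rack (NVL72). The rack totals come from the article;
# everything else here is simple division.
gpus_per_rack = 72
inference_ef = 1.1      # ExaFLOPS of AI inference per rack (low precision)
training_ef = 0.36      # ExaFLOPS of FP8 training per rack
hbm_tb = 20             # TB of HBM3E per rack

print(f"inference per GPU:    {inference_ef * 1000 / gpus_per_rack:.1f} PFLOPS")
print(f"FP8 training per GPU: {training_ef * 1000 / gpus_per_rack:.1f} PFLOPS")
print(f"HBM3E per GPU:        {hbm_tb * 1000 / gpus_per_rack:.0f} GB")
```

That works out to roughly 15 PFLOPS of inference, 5 PFLOPS of FP8 training, and about 278 GB of HBM3E per GPU, which is consistent with the Blackwell Ultra class of parts.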

Recommended read:
References :
  • NVIDIA Newsroom: Black Forest Labs, one of the world’s leading AI research labs, just changed the game for image generation.
  • www.tomshardware.com: CoreWeave deploys first Dell GB300 cluster with Switch: Up to 1.1 ExaFLOPS of AI inference performance per rack.

@www.bigdatawire.com //
HPE is significantly expanding its AI capabilities with the unveiling of GreenLake Intelligence and new AI factory solutions in collaboration with NVIDIA. This move aims to accelerate AI adoption across industries by providing enterprises with the necessary framework to build and scale generative, agentic, and industrial AI. GreenLake Intelligence, an AI-powered framework, proactively monitors IT operations and autonomously takes action to prevent problems, alleviating the burden on human administrators. This initiative, announced at HPE Discover, underscores HPE's commitment to providing a comprehensive approach to AI, combining industry-leading infrastructure and services.

HPE and NVIDIA are introducing innovations designed to scale enterprise AI factory adoption. The NVIDIA AI Computing by HPE portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet, and NVIDIA BlueField-3 networking technologies with HPE's servers, storage, services, and software. This integrated stack includes HPE OpsRamp Software and HPE Morpheus Enterprise Software for orchestration, streamlining AI implementation. HPE is also launching the next-generation HPE Private Cloud AI, co-engineered with NVIDIA, offering a full-stack, turnkey AI factory solution.

These new offerings include HPE ProLiant Compute DL380a Gen12 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, providing a universal data center platform for various enterprise AI and industrial AI use cases. Furthermore, HPE introduced the NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs, expected to ship in October. With these advancements, HPE aims to remove the complexity of building a full AI tech stack, facilitating easier adoption and management of AI factories for businesses of all sizes and enabling sustainable business value.

Recommended read:
References :
  • NVIDIA Newsroom: HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift
  • BigDATAwire: HPE Moves Into Agentic AIOps with GreenLake Intelligence
  • www.itpro.com: HPE's AI factory line just got a huge update

@www.artificialintelligence-news.com //
Hugging Face has partnered with Groq to offer ultra-fast AI model inference, integrating Groq's Language Processing Unit (LPU) inference engine as a native provider on the Hugging Face platform. This collaboration aims to provide developers with access to lightning-fast processing capabilities directly within the popular model hub. Groq's chips are specifically designed for language models, offering a specialized architecture that differs from traditional GPUs by embracing the sequential nature of language tasks, resulting in reduced response times and higher throughput for AI applications.

Developers can now access high-speed inference for multiple open-weight models through Groq’s infrastructure, including Meta’s Llama 4, Meta’s Llama 3, and Qwen’s QwQ-32B. Groq is the only inference provider to enable the full 131K-token context window, allowing developers to build applications at scale. The integration works seamlessly with Hugging Face’s client libraries for both Python and JavaScript, and the setup is refreshingly simple: even without diving into code, developers can specify Groq as their preferred provider with minimal configuration.
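In practice, routing a request through Groq from Python looks something like the following sketch, using the `huggingface_hub` client's provider selection. Treat this as a hedged illustration: the model ID, environment variable, and minimum library version are assumptions on our part, not details confirmed by the announcement.

```python
def groq_chat(prompt: str, model: str = "Qwen/QwQ-32B") -> str:
    """Send one chat turn through Groq via Hugging Face's provider routing.

    Assumes `huggingface_hub` >= 0.28 (where inference providers landed)
    and a Hugging Face token in the HF_TOKEN environment variable; the
    default model ID is illustrative, not prescribed by the announcement.
    """
    import os
    from huggingface_hub import InferenceClient  # third-party; imported lazily

    client = InferenceClient(provider="groq", api_key=os.environ["HF_TOKEN"])
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

The design point is that switching providers is a one-argument change: the same OpenAI-style `chat.completions.create` call works whether requests are served by Groq's LPUs or any other Hugging Face inference provider.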

This partnership marks Groq’s boldest attempt yet to carve out market share in the rapidly expanding AI inference market, where companies like AWS Bedrock, Google Vertex AI, and Microsoft Azure have dominated by offering convenient access to leading language models. It is also Groq's third major platform partnership in as many months: in April, Groq became the exclusive inference provider for Meta’s official Llama API, delivering speeds up to 625 tokens per second to enterprise customers.

Recommended read:
References :
  • venturebeat.com: Groq just made Hugging Face way faster — and it’s coming for AWS and Google
  • www.artificialintelligence-news.com: Hugging Face partners with Groq for ultra-fast AI model inference
  • www.rdworldonline.com: Hugging Face integrates Groq, offering native high-speed inference for 9 major open weight models
  • Simplicity of Hugging Face + Efficiency of Groq: Exciting news for developers and AI enthusiasts! Hugging Face is making it easier than ever to access Groq’s lightning-fast and efficient inference with the direct integration of Groq as a provider on the Hugging Face Playground and API.

Savannah Martin@News - Stability AI //
Nvidia CEO Jensen Huang has publicly disagreed with claims made by Anthropic's chief, Dario Amodei, regarding the potential job displacement caused by artificial intelligence. Amodei suggested that AI could eliminate a significant portion of entry-level white-collar jobs, leading to a sharp increase in unemployment. Huang, however, maintains a more optimistic view, arguing that AI will ultimately create more career opportunities. He criticized Amodei's stance as overly cautious and self-serving, suggesting that Anthropic's focus on AI safety is being used to limit competition and control the narrative around AI development.

Huang emphasized the importance of open and responsible AI development, contrasting it with what he perceives as Anthropic's closed-door approach. He believes that AI technologies should be advanced safely and transparently, encouraging collaboration and innovation. Huang has underscored that fears of widespread job loss are unfounded, anticipating that AI will revolutionize industries and create entirely new roles and professions that we cannot currently imagine.

Nvidia is actively working to make AI more accessible and efficient. Nvidia has collaborated with Stability AI to optimize Stable Diffusion 3.5 models using TensorRT, resulting in significantly faster performance and reduced memory requirements on NVIDIA RTX GPUs. These optimizations extend the accessibility of AI tools to a wider range of users, including creative professionals and developers, fostering further innovation and development in the field. This collaboration provides enterprise-grade image generation to users.

Recommended read:
References :
  • News - Stability AI: Stable Diffusion 3.5 Models Optimized with TensorRT Deliver 2X Faster Performance and 40% Less Memory on NVIDIA RTX GPUs
  • Rashi Shrivastava: The World’s Largest Technology Companies 2025: Nvidia Continues To Soar Amid AI Boom
  • www.tomshardware.com: Nvidia CEO slams Anthropic's chief over his claims of AI taking half of jobs and being unsafe — ‘Don’t do it in a dark room and tell me it’s safe’

Jowi Morales@tomshardware.com //
NVIDIA is partnering with Germany and Deutsche Telekom to build Europe's first industrial AI cloud, a project hailed as one of the most ambitious tech endeavors on the continent. This initiative aims to establish Germany as a leader in AI manufacturing and innovation. NVIDIA's CEO, Jensen Huang, met with Chancellor Friedrich Merz to discuss the new partnerships that will drive breakthroughs on this AI cloud.

This "AI factory," located in Germany, will provide European industrial leaders with the computational power needed to revolutionize manufacturing processes, from design and engineering to simulation and robotics. The goal is to empower European industrial players to lead in simulation-first, AI-driven manufacturing. Deutsche Telekom's CEO, Timotheus Höttges, emphasized the urgency of seizing AI opportunities to revolutionize the industry and secure a leading position in global technology competition.

The first phase of the project will involve deploying 10,000 NVIDIA Blackwell GPUs across various high-performance systems, making it Germany's largest AI deployment. This infrastructure will also feature NVIDIA networking and AI software. NEURA Robotics, a German firm specializing in cognitive robotics, plans to utilize these resources to power its Neuraverse, a network where robots can learn from each other. This partnership between NVIDIA and Germany signifies a critical step towards achieving technological sovereignty in Europe and accelerating AI development across industries.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI
  • www.artificialintelligence-news.com: NVIDIA helps Germany lead Europe’s AI manufacturing race
  • www.tomshardware.com: Nvidia is building the 'world's first' industrial AI cloud—German facility to leverage 10,000 GPUs, DGX B200, and RTX Pro servers
  • AI News: NVIDIA helps Germany lead Europe’s AI manufacturing race
  • blogs.nvidia.com: NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI
  • MSSP feed for Latest: CrowdStrike and Nvidia Add LLM Security, Offer New Service for MSSPs
  • www.verdict.co.uk: Nvidia to develop industrial AI cloud for manufacturers in Europe
  • Verdict: Nvidia to develop industrial AI cloud for manufacturers in Europe
  • insideAI News: AMD Announces New GPUs, Development Platform, Rack Scale Architecture
  • insidehpc.com: AMD Announces New GPUs, Development Platform, Rack Scale Architecture
  • www.itpro.com: Nvidia, Deutsche Telekom team up for "sovereign" industrial AI cloud

Niithiyn Vijeaswaran@Artificial Intelligence //
Nvidia is making significant strides in artificial intelligence with new models and strategic partnerships aimed at expanding its capabilities across various industries. The company is building the world's first industrial AI cloud in Germany, equipped with 10,000 GPUs, DGX B200 systems, and RTX Pro servers. This facility will leverage CUDA-X libraries and RTX and Omniverse-accelerated workloads to serve as a launchpad for AI development and adoption by European manufacturers. Nvidia CEO Jensen Huang believes that physical AI systems represent a $50 trillion market opportunity, emphasizing the transformative potential of AI in factories, transportation, and robotics.

Nvidia is also introducing new AI models to enhance its offerings. The Llama 3.3 Nemotron Super 49B V1 and Llama 3.1 Nemotron Nano 8B V1 are now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, allowing users to deploy these reasoning models for building and scaling generative AI applications on AWS. Additionally, Nvidia's Earth-2 platform features cBottle, a generative AI model that simulates global climate at kilometer-scale resolution, promising faster and more efficient climate predictions. This model reduces data storage needs significantly and enables explicit simulation of convection, improving the accuracy of extreme weather event projections.

Beyond hardware and model development, Nvidia is actively forming partnerships to power AI initiatives globally. In Taiwan, Nvidia is collaborating with Foxconn to build an AI supercomputer, and it is also working with Siemens and Deutsche Telekom to establish the industrial AI cloud in Germany. Nvidia's automotive business is projected to reach $5 billion this year, with potential for further growth as autonomous vehicles become more prevalent. The company's full-stack Drive AV software is now in full production, starting with the Mercedes-Benz CLA sedan, demonstrating its commitment to advancing AI-driven driving and related technologies.

Recommended read:
References :
  • Artificial Intelligence: The Llama 3.3 Nemotron Super 49B V1 and Llama 3.1 Nemotron Nano 8B V1 are now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. 
  • AI News | VentureBeat: Nvidia believes physical AI systems are a $50 trillion market opportunity
  • www.tomshardware.com: Nvidia is building the 'world's first' industrial AI cloud—German facility to leverage 10,000 GPUs, DGX B200, and RTX Pro servers

@futurumgroup.com //
NVIDIA is making significant strides in the fields of robotics and climate modeling, leveraging its AI expertise and advanced platforms. At COMPUTEX 2025, NVIDIA announced the latest enhancements to its Isaac robotics platform, including Isaac GR00T N1.5 and GR00T-Dreams, designed to accelerate the development of humanoid robots. These tools focus on streamlining development through synthetic data generation and accelerated training, addressing the critical need for extensive training data. Robotics leaders such as Boston Dynamics and Foxconn have already adopted Isaac technologies, indicating the platform's growing influence in the industry.

NVIDIA's Isaac GR00T-Dreams allows developers to create task-based motion sequences from a single image input, significantly reducing the reliance on real-world data collection. The company has also released simulation frameworks, including Isaac Sim 5.0 and Isaac Lab 2.2, along with Cosmos Reason and Cosmos Predict 2, to further support high-quality training data generation. Blackwell-based RTX PRO 6000 workstations and servers from partners like Dell, HPE, and Supermicro are being introduced to unify robot development workloads from training to deployment. Olivier Blanchard, Research Director at Futurum, notes that these platform updates reinforce NVIDIA's position in defining the infrastructure for humanoid robotics.

In parallel with its robotics initiatives, NVIDIA has unveiled cBottle, a generative AI model within its Earth-2 platform, which simulates global climate at kilometer-scale resolution. This model promises faster, more efficient climate predictions by simulating atmospheric conditions at a detailed 5km resolution. cBottle addresses the limitations of traditional climate models by compressing massive climate simulation datasets, reducing storage requirements by up to 3,000 times. This allows for explicit simulation of convection, driving more accurate projections of extreme weather events and opening new avenues for understanding and anticipating complex climate phenomena.

Recommended read:
References :
  • futurumgroup.com: Olivier Blanchard, Research Director at Futurum, shares insights on how NVIDIA’s Isaac GR00T platform and Blackwell systems aim to accelerate humanoid robotics through simulation, synthetic data, and integrated infrastructure. The post appeared first on .
  • Maginative: NVIDIA’s Earth-2 platform introduces cBottle, a generative AI model simulating global climate at kilometer-scale resolution, promising faster, more efficient climate predictions.

@futurumgroup.com //
NVIDIA is significantly advancing the field of humanoid robotics through its Isaac GR00T platform and Blackwell systems. These tools aim to accelerate the development and deployment of robots in manufacturing and other industries. Key to this advancement is NVIDIA's focus on simulation-first, AI-driven methodologies, leveraging synthetic data and integrated infrastructure to overcome the limitations of traditional robot training. This approach is particularly beneficial for European manufacturers seeking to enhance their production processes through AI.

NVIDIA's commitment to AI-powered robotics is evidenced by its substantial investment in hardware and software. The company is constructing the "world's first" industrial AI cloud in Germany, featuring 10,000 GPUs, DGX B200 systems, and RTX Pro servers. This infrastructure will support CUDA-X libraries, RTX, and Omniverse-accelerated workloads, providing a powerful platform for European manufacturers to develop and deploy AI-driven robots. NVIDIA's Isaac GR00T N1.5, an open foundation model for humanoid robot reasoning and skills, is now available, further empowering developers to create adaptable and instruction-following robots.

European robotics companies are already embracing NVIDIA's technologies. Companies such as Agile Robots, Humanoid, Neura Robotics, Universal Robots, Vorwerk and Wandelbots are showcasing AI-driven robots powered by NVIDIA's platform. NVIDIA is also releasing new models and tools, including NVIDIA Halos, a safety system designed to unify hardware, AI models, software, tools, and services, to promote safety across the entire development lifecycle of AI-driven robots. By addressing both the performance and safety aspects of robotics, NVIDIA is positioning itself as a key player in the future of AI-powered automation.

Recommended read:
References :
  • futurumgroup.com: Olivier Blanchard, Research Director at Futurum, shares insights on how NVIDIA’s Isaac GR00T platform and Blackwell systems aim to accelerate humanoid robotics through simulation, synthetic data, and integrated infrastructure.

Anton Shilov@tomshardware.com //
Nvidia CEO Jensen Huang recently highlighted the significant advancements in artificial intelligence, stating that AI capabilities have increased a millionfold in the last decade. Huang attributed this rapid growth to improvements in GPU performance and system scaling. Speaking at London Tech Week, Huang emphasized the "incredible" speed of industry change and also met with U.K. Prime Minister Keir Starmer to discuss integrating AI into national economic planning through strategic investments in infrastructure, talent development, and government-industry collaborations.
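As a quick sanity check on the "millionfold in a decade" figure: it implies roughly 4x compound growth per year, far beyond what transistor scaling alone delivers, which is consistent with Huang's point that the gains come from GPU performance and system scaling combined. The arithmetic below is ours, not Huang's:

```python
# What per-year compound growth rate does "a millionfold in 10 years" imply?
years = 10
target = 1_000_000

rate = target ** (1 / years)   # implied annual multiplier, ~3.98x
print(f"implied growth: {rate:.2f}x per year")

# Moore's-law-style transistor scaling alone (~2x every 2 years) would give
# only about 32x per decade, so most of the millionfold must come from
# architecture, precision, and system-level scaling.
print(f"transistor scaling alone: {2 ** (years / 2):.0f}x per decade")
```

The gap between ~32x and 1,000,000x is the space filled by lower-precision arithmetic, larger dies and interconnects like NVLink, and scaling out to multi-rack systems.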

Huang also introduced NVIDIA's Earth-2 platform, featuring cBottle, a generative AI model designed to simulate the global climate at kilometer-scale resolution. This innovative model promises faster and more efficient climate predictions by simulating atmospheric conditions at a detailed 5km resolution, utilizing advanced diffusion modeling to generate realistic atmospheric states based on variables like time of day, year, and sea surface temperatures. The cBottle model can compress massive climate simulation datasets, reducing storage requirements by up to 3,000 times for individual weather samples.

The key advantage of cBottle lies in its ability to explicitly simulate convection, which drives thunderstorms, hurricanes, and rainfall, instead of relying on simplified equations used in traditional models. This enhances the accuracy of extreme weather event projections, which are often uncertain in coarser-scale models. Furthermore, cBottle can fill in missing or corrupted climate data, correct biases in existing models, and enhance low-resolution data through super-resolution techniques, making high-resolution climate modeling more accessible and efficient.

Recommended read:
References :
  • www.tomshardware.com: At London Tech Week, Nvidia CEO Jensen Huang claimed that AI has advanced a millionfold over the past decade, likely referencing explosive growth in GPU performance and system scale.
  • Maginative: NVIDIA’s Earth-2 platform introduces cBottle, a generative AI model simulating global climate at kilometer-scale resolution, promising faster, more efficient climate predictions.
  • www.tomshardware.com: Nvidia is building the 'world's first' industrial AI cloud—German facility to leverage 10,000 GPUs, DGX B200, and RTX Pro servers

@techinformed.com //
NVIDIA CEO Jensen Huang and UK Prime Minister Keir Starmer have recently joined forces at London Tech Week to solidify the UK's position as a leading force in AI. This collaborative effort aims to bolster the nation's digital infrastructure and promote AI development across various sectors. Starmer has committed £1 billion in investment to supercharge the AI sector, emphasizing the UK's ambition to be at the forefront of AI innovation rather than simply consuming the technology. Huang highlighted the UK's rich AI community, world-class universities, and significant AI capital investment as key factors positioning it for success.

The partnership includes the establishment of a dedicated NVIDIA AI Technology Center in the UK. This center will provide hands-on training in crucial areas such as AI, data science, and accelerated computing, with a specific focus on nurturing talent in foundation model building, embodied AI, materials science, and earth systems modeling. The initiative aims to tackle the existing AI skills gap, ensuring that the UK has a workforce capable of leveraging the new infrastructure and technologies being developed. Cloud providers are also stepping up with significant GPU deployments, with Nscale planning 10,000 NVIDIA Blackwell GPUs by late 2026 and Nebius revealing plans for an AI factory with 4,000 NVIDIA Blackwell GPUs.

Furthermore, the UK's financial sector is set to benefit from this AI push. The Financial Conduct Authority (FCA) is launching a 'supercharged sandbox' scheme, allowing banks and other City firms to experiment safely with NVIDIA AI products. This initiative aims to speed up innovation and boost UK growth by integrating AI into the financial sector. Potential applications include intercepting authorized push payment fraud and identifying stock market manipulation, showcasing the potential of AI to enhance customer service and data analytics within the financial industry.


@www.artificialintelligence-news.com //
The UK government is launching a nationwide initiative to boost AI skills among workers and schoolchildren, solidifying its position as a leader in AI development and innovation. Prime Minister Keir Starmer announced a partnership with tech giants like NVIDIA, Google, Microsoft, and Amazon to train 7.5 million workers in artificial intelligence skills. The program will provide freely available training materials to businesses over the next five years, focusing on practical applications such as using chatbots and large language models to enhance productivity. This initiative aims to equip the UK workforce with the necessary skills to thrive in an increasingly AI-driven economy.

As part of this comprehensive effort, all civil servants in England and Wales will receive practical AI training starting this autumn to enhance their work efficiency. The government aims to integrate AI into various aspects of public service, streamlining operations and improving productivity. Officials are already piloting AI tools, such as "Humphrey," named after the character from "Yes, Minister," to automate tasks and reduce the time spent on routine processes. The goal is to ensure that AI handles tasks where it can perform better, faster, and to the same high quality, freeing up civil servants for more complex and strategic work.

To support this AI skills drive, the government is also focusing on bolstering the UK's AI infrastructure. NVIDIA is investing in the UK, establishing an AI Technology Center to provide hands-on training in AI, data science, and accelerated computing. Cloud providers like Nscale and Nebius are deploying thousands of NVIDIA GPUs to enhance computational capabilities for research bodies, universities, and public services. The Prime Minister has pledged to invest approximately £1 billion in AI research compute by 2030, signaling a commitment to turning Britain into an AI superpower and attracting tech investment to stimulate economic growth.

Recommended read:
References :
  • techxplore.com: UK launches AI skills drive for workers and schoolchildren
  • www.theguardian.com: All civil servants in England and Wales to get AI training
  • NVIDIA Newsroom: UK Prime Minister, NVIDIA CEO Set the Stage as AI Lights Up Europe
  • techinformed.com: Nvidia can boost UK’s digital infrastructure, says Huang as Starmer promises £1bn for AI
  • www.artificialintelligence-news.com: UK tackles AI skills gap through NVIDIA partnership
  • blogs.nvidia.com: U.K. Prime Minister Keir Starmer’s ambition for Britain to be an “AI maker, not an AI taker,” is becoming a reality at London Tech Week.
  • ComputerWeekly.com: Starmer opens London Tech Week with £1bn AI boost
  • NVIDIA Newsroom: ‘AI Maker, Not an AI Taker’: UK Builds Its Vision With NVIDIA Infrastructure
  • Dataconomy: NVIDIA CEO Jensen Huang and U.K. Prime Minister Sir Keir Starmer opened London Tech Week at Olympia, signaling a national policy shift toward AI
  • insidehpc.com: LRZ to Acquire HPE-NVIDIA ‘Blue Lion’ Supercomputer

@techinformed.com //
NVIDIA is significantly investing in the UK to bolster its AI infrastructure and address the growing skills gap in the field. CEO Jensen Huang has pledged support for building the necessary infrastructure to power AI advancements across the nation. This commitment involves establishing an AI Technology Center in the UK, providing hands-on training in key areas like AI, data science, and accelerated computing. The focus will be on supporting foundation model builders, embodied AI, materials science, and earth systems modeling, ensuring the UK has the talent pool to capitalize on AI opportunities.

The financial sector is set to benefit from this partnership through a new AI-powered sandbox initiative by the Financial Conduct Authority (FCA). This sandbox will enable banks and other City firms to safely experiment with NVIDIA's AI technology under the regulator's supervision. NayaOne will provide the infrastructure, with NVIDIA supplying the technological backbone. The initiative aims to "speed up innovation" and fulfill government objectives to boost UK growth. The FCA's support of AI comes as it encourages more risk-taking across the City to help spur growth and competitiveness.

This collaboration is intended to help firms harness AI to benefit the UK markets and consumers, while also supporting broader economic growth. Sumant Kumar, CTO for Banking & Financial Markets at NTT DATA UK&I, highlighted the potential of this "supercharged sandbox" to help banks achieve viable AI prototypes. Cloud providers like Nscale and Nebius are also contributing to the UK's AI capabilities by deploying thousands of NVIDIA Blackwell GPUs, offering much-needed computational power for research, universities, and public services, including the NHS.

Recommended read:
References :

@www.linkedin.com //
Nvidia has once again asserted its dominance in the AI training landscape with the release of the MLPerf Training v5.0 results. The company's Blackwell GB200 accelerators achieved record time-to-train scores, showcasing a significant leap in performance. This latest benchmark suite included submissions from various companies, but Nvidia's platform stood out, particularly in the most demanding large language model (LLM)-focused test involving Llama 3.1 405B pretraining. These results underscore the rapid growth and evolution of the AI field, with the Blackwell architecture demonstrably meeting the heightened performance demands of next-generation AI applications.

The MLPerf Training v5.0 results highlight Nvidia's commitment to versatility, as it was the only platform to submit results across every benchmark. The at-scale submissions leveraged two AI supercomputers powered by the Blackwell platform: Tyche, built using GB200 NVL72 rack-scale systems, and Nyx, based on DGX B200 systems. Additionally, Nvidia collaborated with CoreWeave and IBM, utilizing a cluster of 2,496 Blackwell GPUs and 1,248 Grace CPUs. On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance than the previous-generation architecture at the same scale.

The performance gains are attributed to advancements in the Blackwell architecture, encompassing high-density liquid-cooled racks, 13.4TB of coherent memory per rack, and fifth-generation NVLink and NVLink Switch interconnect technologies for scale-up, as well as Quantum-2 InfiniBand networking for scale-out. These technological innovations, combined with the NVIDIA NeMo Framework software stack, are raising the bar for next-generation multimodal LLM training. While AMD showcased generational performance gains, Nvidia's GPUs outpaced AMD's MI325X across the MLPerf benchmarks, reinforcing Nvidia's lead in AI training capabilities.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • MLCommons: New MLCommons MLPerf Training v5.0 Benchmark Results Reflect Rapid Growth and Evolution of the Field of AI
  • IEEE Spectrum: Nvidia’s Blackwell Conquers Largest LLM Training Benchmark
  • www.aiwire.net: Blackwell GPUs Lift Nvidia to the Top of MLPerf Training Rankings
  • www.servethehome.com: MLPerf Training v5.0 is Out
  • IEEE Spectrum: In the LLM fine-tuning benchmarks, the largest submission came from Nvidia, a system connecting 512 B200 GPUs
  • ServeTheHome: The new MLPerf Training v5.0 results are dominated by NVIDIA Blackwell and Hopper, with AMD Instinct MI325X also appearing on one benchmark

@www.linkedin.com //
Nvidia's Blackwell GPUs have achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across various AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. Nvidia was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the NVIDIA platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.

The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100.
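
The 90% figure is the standard scaling-efficiency ratio: achieved throughput divided by the throughput ideal linear scaling would predict. A minimal sketch of that arithmetic, using hypothetical baseline and throughput values chosen purely for illustration (they are not from the MLPerf submission):

```python
def scaling_efficiency(base_gpus: int, base_throughput: float,
                       scaled_gpus: int, scaled_throughput: float) -> float:
    """Ratio of achieved throughput to ideal linear scaling from a baseline run."""
    ideal = base_throughput * (scaled_gpus / base_gpus)
    return scaled_throughput / ideal

# Hypothetical numbers: one GB200 NVL72 rack (72 GPUs) normalized to 1.0 units
# of throughput, scaled to 2,496 GPUs delivering 31.2 units.
eff = scaling_efficiency(72, 1.0, 2496, 31.2)
print(f"{eff:.0%}")  # 31.2 / (2496/72) = 31.2 / 34.67 -> 90%
```

Efficiency below 100% reflects the communication and synchronization overhead that grows with GPU count, which is why interconnect bandwidth (NVLink, InfiniBand) features so prominently in the at-scale results.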

The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • NVIDIA Technical Blog: NVIDIA Blackwell Delivers up to 2.6x Higher Performance in MLPerf Training v5.0
  • IEEE Spectrum: Nvidia’s Blackwell Conquers Largest LLM Training Benchmark
  • NVIDIA Technical Blog: Reproducing NVIDIA MLPerf v5.0 Training Scores for LLM Benchmarks
  • AI News | VentureBeat: Nvidia says its Blackwell chips lead benchmarks in training AI LLMs
  • blogs.nvidia.com: NVIDIA RTX Blackwell GPUs Accelerate Professional-Grade Video Editing
  • MLCommons: New MLCommons MLPerf Training v5.0 Benchmark Results Reflect Rapid Growth and Evolution of the Field of AI
  • www.aiwire.net: MLPerf Training v5.0 results show Nvidia’s Blackwell GB200 accelerators sprinting through record time-to-train scores.
  • blogs.nvidia.com: NVIDIA is working with companies worldwide to build out AI factories, speeding the training and deployment of next-generation AI applications built to meet the heightened performance requirements of the Blackwell architecture
  • ServeTheHome: The new MLPerf Training v5.0 results are dominated by NVIDIA Blackwell and Hopper, with AMD Instinct MI325X also appearing on one benchmark
  • AIwire: Blackwell GPUs Lift Nvidia to the Top of MLPerf Training Rankings
  • www.servethehome.com: MLPerf Training v5.0 is Out