References: NVIDIA Newsroom, BigDATAwire (www.bigdatawire.com)
HPE is significantly expanding its AI capabilities with the unveiling of GreenLake Intelligence and new AI factory solutions in collaboration with NVIDIA. This move aims to accelerate AI adoption across industries by providing enterprises with the necessary framework to build and scale generative, agentic, and industrial AI. GreenLake Intelligence, an AI-powered framework, proactively monitors IT operations and autonomously takes action to prevent problems, alleviating the burden on human administrators. This initiative, announced at HPE Discover, underscores HPE's commitment to providing a comprehensive approach to AI, combining industry-leading infrastructure and services.
HPE and NVIDIA are introducing innovations designed to scale enterprise AI factory adoption. The NVIDIA AI Computing by HPE portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet, and NVIDIA BlueField-3 networking technologies with HPE's servers, storage, services, and software. This integrated stack includes HPE OpsRamp Software and HPE Morpheus Enterprise Software for orchestration, streamlining AI implementation. HPE is also launching the next-generation HPE Private Cloud AI, co-engineered with NVIDIA, offering a full-stack, turnkey AI factory solution. These new offerings include HPE ProLiant Compute DL380a Gen12 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, providing a universal data center platform for various enterprise AI and industrial AI use cases. Furthermore, HPE introduced the HPE Compute XD690, an NVIDIA HGX B300 system built with NVIDIA Blackwell Ultra GPUs and expected to ship in October. With these advancements, HPE aims to remove the complexity of building a full AI tech stack, making AI factories easier to adopt and manage for businesses of all sizes and enabling sustainable business value.
References: venturebeat.com, www.artificialintelligence-news.com
Hugging Face has partnered with Groq to offer ultra-fast AI model inference, integrating Groq's Language Processing Unit (LPU) inference engine as a native provider on the Hugging Face platform. This collaboration aims to provide developers with access to lightning-fast processing capabilities directly within the popular model hub. Groq's chips are specifically designed for language models, offering a specialized architecture that differs from traditional GPUs by embracing the sequential nature of language tasks, resulting in reduced response times and higher throughput for AI applications.
Developers can now access high-speed inference for multiple open-weight models through Groq's infrastructure, including Meta's Llama 4, Meta's Llama 3, and Qwen's QwQ-32B. Groq is the only inference provider to enable the full 131K-token context window, allowing developers to build applications at scale. The integration works seamlessly with Hugging Face's client libraries for both Python and JavaScript, and the technical details are refreshingly simple: even without diving into code, developers can specify Groq as their preferred provider with minimal configuration. This partnership marks Groq's boldest attempt yet to carve out market share in the rapidly expanding AI inference market, where companies like AWS Bedrock, Google Vertex AI, and Microsoft Azure have dominated by offering convenient access to leading language models. It is also Groq's third major platform partnership in as many months: in April, Groq became the exclusive inference provider for Meta's official Llama API, delivering speeds of up to 625 tokens per second to enterprise customers.
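As a concrete illustration of that minimal configuration, the sketch below assembles the kind of OpenAI-style chat request that provider routing ultimately sends to Groq's infrastructure. The endpoint URL, field names, and the placeholder model id are assumptions for illustration, not taken from the announcement; with Hugging Face's Python client the equivalent is reportedly just a `provider="groq"` argument when constructing the inference client.

```python
# Hypothetical sketch: building an OpenAI-compatible chat request of the kind
# a provider integration routes to Groq. Endpoint and model id are placeholders.
import json

GROQ_ENDPOINT = "https://api.groq.com/openai/v1/chat/completions"  # assumed

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the request body; sending it requires an API key and network access."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("llama-4-placeholder", "Summarize KV caching in one line.")
print(json.dumps(body, indent=2))
```

The point is that switching providers is a configuration change, not a code rewrite: the request shape stays OpenAI-compatible and only the routing target differs.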
References: Savannah Martin (News - Stability AI), Rashi Shrivastava
Nvidia CEO Jensen Huang has publicly disagreed with claims made by Anthropic's chief, Dario Amodei, regarding the potential job displacement caused by artificial intelligence. Amodei suggested that AI could eliminate a significant portion of entry-level white-collar jobs, leading to a sharp increase in unemployment. Huang, however, maintains a more optimistic view, arguing that AI will ultimately create more career opportunities. He criticized Amodei's stance as overly cautious and self-serving, suggesting that Anthropic's focus on AI safety is being used to limit competition and control the narrative around AI development.
Huang emphasized the importance of open and responsible AI development, contrasting it with what he perceives as Anthropic's closed-door approach. He believes that AI technologies should be advanced safely and transparently, encouraging collaboration and innovation. Huang has underscored that fears of widespread job loss are unfounded, anticipating that AI will revolutionize industries and create entirely new roles and professions that we cannot currently imagine.
Nvidia is also working to make AI more accessible and efficient. The company has collaborated with Stability AI to optimize Stable Diffusion 3.5 models using TensorRT, resulting in significantly faster performance and reduced memory requirements on NVIDIA RTX GPUs. These optimizations extend the accessibility of AI tools to a wider range of users, including creative professionals and developers, and bring enterprise-grade image generation to a broader audience.
References: Jowi Morales (tomshardware.com)
NVIDIA is partnering with Germany and Deutsche Telekom to build Europe's first industrial AI cloud, a project hailed as one of the most ambitious tech endeavors on the continent. This initiative aims to establish Germany as a leader in AI manufacturing and innovation. NVIDIA's CEO, Jensen Huang, met with Chancellor Friedrich Merz to discuss the new partnerships that will drive breakthroughs on this AI cloud.
This "AI factory," located in Germany, will provide European industrial leaders with the computational power needed to revolutionize manufacturing processes, from design and engineering to simulation and robotics. The goal is to empower European industrial players to lead in simulation-first, AI-driven manufacturing. Deutsche Telekom's CEO, Timotheus Höttges, emphasized the urgency of seizing AI opportunities to revolutionize the industry and secure a leading position in global technology competition.
The first phase of the project will involve deploying 10,000 NVIDIA Blackwell GPUs across various high-performance systems, making it Germany's largest AI deployment. This infrastructure will also feature NVIDIA networking and AI software. NEURA Robotics, a German firm specializing in cognitive robotics, plans to utilize these resources to power its Neuraverse, a network where robots can learn from each other. This partnership between NVIDIA and Germany signifies a critical step towards achieving technological sovereignty in Europe and accelerating AI development across industries.
References: Niithiyn Vijeaswaran (Artificial Intelligence), AI News | VentureBeat
Nvidia is making significant strides in artificial intelligence with new models and strategic partnerships aimed at expanding its capabilities across various industries. The company is building the world's first industrial AI cloud in Germany, equipped with 10,000 GPUs, DGX B200 systems, and RTX Pro servers. This facility will leverage CUDA-X libraries and RTX and Omniverse-accelerated workloads to serve as a launchpad for AI development and adoption by European manufacturers. Nvidia CEO Jensen Huang believes that physical AI systems represent a $50 trillion market opportunity, emphasizing the transformative potential of AI in factories, transportation, and robotics.
Nvidia is also introducing new AI models to enhance its offerings. The Llama 3.3 Nemotron Super 49B V1 and Llama 3.1 Nemotron Nano 8B V1 are now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, allowing users to deploy these reasoning models for building and scaling generative AI applications on AWS. Additionally, Nvidia's Earth-2 platform features cBottle, a generative AI model that simulates global climate at kilometer-scale resolution, promising faster and more efficient climate predictions. This model reduces data storage needs significantly and enables explicit simulation of convection, improving the accuracy of extreme weather event projections.
Beyond hardware and model development, Nvidia is actively forming partnerships to power AI initiatives globally. In Taiwan, Nvidia is collaborating with Foxconn to build an AI supercomputer, and it is also working with Siemens and Deutsche Telekom to establish the industrial AI cloud in Germany. Nvidia's automotive business is projected to reach $5 billion this year, with potential for further growth as autonomous vehicles become more prevalent. The company's full-stack Drive AV software is now in full production, starting with the Mercedes-Benz CLA sedan, demonstrating its commitment to advancing AI-driven driving and related technologies.
References: futurumgroup.com, Maginative
NVIDIA is making significant strides in the fields of robotics and climate modeling, leveraging its AI expertise and advanced platforms. At COMPUTEX 2025, NVIDIA announced the latest enhancements to its Isaac robotics platform, including Isaac GR00T N1.5 and GR00T-Dreams, designed to accelerate the development of humanoid robots. These tools focus on streamlining development through synthetic data generation and accelerated training, addressing the critical need for extensive training data. Robotics leaders such as Boston Dynamics and Foxconn have already adopted Isaac technologies, indicating the platform's growing influence in the industry.
NVIDIA's Isaac GR00T-Dreams allows developers to create task-based motion sequences from a single image input, significantly reducing the reliance on real-world data collection. The company has also released simulation frameworks, including Isaac Sim 5.0 and Isaac Lab 2.2, along with Cosmos Reason and Cosmos Predict 2, to further support high-quality training data generation. Blackwell-based RTX PRO 6000 workstations and servers from partners like Dell, HPE, and Supermicro are being introduced to unify robot development workloads from training to deployment. Olivier Blanchard, Research Director at Futurum, notes that these platform updates reinforce NVIDIA's position in defining the infrastructure for humanoid robotics.
In parallel with its robotics initiatives, NVIDIA has unveiled cBottle, a generative AI model within its Earth-2 platform, which simulates global climate at kilometer-scale resolution. This model promises faster, more efficient climate predictions by simulating atmospheric conditions at a detailed 5km resolution. cBottle addresses the limitations of traditional climate models by compressing massive climate simulation datasets, reducing storage requirements by up to 3,000 times. This allows for explicit simulation of convection, driving more accurate projections of extreme weather events and opening new avenues for understanding and anticipating complex climate phenomena.
References: futurumgroup.com
NVIDIA is significantly advancing the field of humanoid robotics through its Isaac GR00T platform and Blackwell systems. These tools aim to accelerate the development and deployment of robots in manufacturing and other industries. Key to this advancement is NVIDIA's focus on simulation-first, AI-driven methodologies, leveraging synthetic data and integrated infrastructure to overcome the limitations of traditional robot training. This approach is particularly beneficial for European manufacturers seeking to enhance their production processes through AI.
NVIDIA's commitment to AI-powered robotics is evidenced by its substantial investment in hardware and software. The company is constructing the "world's first" industrial AI cloud in Germany, featuring 10,000 GPUs, DGX B200 systems, and RTX Pro servers. This infrastructure will support CUDA-X libraries, RTX, and Omniverse-accelerated workloads, providing a powerful platform for European manufacturers to develop and deploy AI-driven robots. NVIDIA's Isaac GR00T N1.5, an open foundation model for humanoid robot reasoning and skills, is now available, further empowering developers to create adaptable and instruction-following robots.
European robotics companies are already embracing NVIDIA's technologies. Companies such as Agile Robots, Humanoid, Neura Robotics, Universal Robots, Vorwerk, and Wandelbots are showcasing AI-driven robots powered by NVIDIA's platform. NVIDIA is also releasing new models and tools, including NVIDIA Halos, a safety system designed to unify hardware, AI models, software, tools, and services, to promote safety across the entire development lifecycle of AI-driven robots. By addressing both the performance and safety aspects of robotics, NVIDIA is positioning itself as a key player in the future of AI-powered automation.
References: Anton Shilov (www.tomshardware.com), Maginative
Nvidia CEO Jensen Huang recently highlighted the significant advancements in artificial intelligence, stating that AI capabilities have increased a millionfold in the last decade. Huang attributed this rapid growth to improvements in GPU performance and system scaling. Speaking at London Tech Week, Huang emphasized the "incredible" speed of industry change and also met with U.K. Prime Minister Keir Starmer to discuss integrating AI into national economic planning through strategic investments in infrastructure, talent development, and government-industry collaborations.
Huang also introduced NVIDIA's Earth-2 platform, featuring cBottle, a generative AI model designed to simulate the global climate at kilometer-scale resolution. This innovative model promises faster and more efficient climate predictions by simulating atmospheric conditions at a detailed 5km resolution, utilizing advanced diffusion modeling to generate realistic atmospheric states based on variables like time of day, year, and sea surface temperatures. The cBottle model can compress massive climate simulation datasets, reducing storage requirements by up to 3,000 times for individual weather samples.
The key advantage of cBottle lies in its ability to explicitly simulate convection, which drives thunderstorms, hurricanes, and rainfall, instead of relying on simplified equations used in traditional models. This enhances the accuracy of extreme weather event projections, which are often uncertain in coarser-scale models. Furthermore, cBottle can fill in missing or corrupted climate data, correct biases in existing models, and enhance low-resolution data through super-resolution techniques, making high-resolution climate modeling more accessible and efficient.
References: techinformed.com
NVIDIA CEO Jensen Huang and UK Prime Minister Keir Starmer have recently joined forces at London Tech Week to solidify the UK's position as a leading force in AI. This collaborative effort aims to bolster the nation's digital infrastructure and promote AI development across various sectors. Starmer has committed £1 billion in investment to supercharge the AI sector, emphasizing the UK's ambition to be at the forefront of AI innovation rather than simply consuming the technology. Huang highlighted the UK's rich AI community, world-class universities, and significant AI capital investment as key factors positioning it for success.
The partnership includes the establishment of a dedicated NVIDIA AI Technology Center in the UK. This center will provide hands-on training in crucial areas such as AI, data science, and accelerated computing, with a specific focus on nurturing talent in foundation model building, embodied AI, materials science, and earth systems modeling. The initiative aims to tackle the existing AI skills gap, ensuring that the UK has a workforce capable of leveraging the new infrastructure and technologies being developed. Cloud providers are also stepping up with significant GPU deployments, with Nscale planning 10,000 NVIDIA Blackwell GPUs by late 2026 and Nebius revealing plans for an AI factory with 4,000 NVIDIA Blackwell GPUs.
Furthermore, the UK's financial sector is set to benefit from this AI push. The Financial Conduct Authority (FCA) is launching a 'supercharged sandbox' scheme, allowing banks and other City firms to experiment safely with NVIDIA AI products. This initiative aims to speed up innovation and boost UK growth by integrating AI into the financial sector. Potential applications include intercepting authorized push payment fraud and identifying stock market manipulation, showcasing the potential of AI to enhance customer service and data analytics within the financial industry.
References: www.artificialintelligence-news.com
The UK government is launching a nationwide initiative to boost AI skills among workers and schoolchildren, solidifying its position as a leader in AI development and innovation. Prime Minister Keir Starmer announced a partnership with tech giants like NVIDIA, Google, Microsoft, and Amazon to train 7.5 million workers in artificial intelligence skills. The program will provide freely available training materials to businesses over the next five years, focusing on practical applications such as using chatbots and large language models to enhance productivity. This initiative aims to equip the UK workforce with the necessary skills to thrive in an increasingly AI-driven economy.
As part of this comprehensive effort, all civil servants in England and Wales will receive practical AI training starting this autumn to enhance their work efficiency. The government aims to integrate AI into various aspects of public service, streamlining operations and improving productivity. Officials are already piloting AI tools, such as "Humphrey," named after the character from "Yes, Minister," to automate tasks and reduce the time spent on routine processes. The goal is to ensure that AI handles tasks where it can perform better, faster, and to the same high quality, freeing up civil servants for more complex and strategic work.
To support this AI skills drive, the government is also focusing on bolstering the UK's AI infrastructure. NVIDIA is investing in the UK, establishing an AI Technology Center to provide hands-on training in AI, data science, and accelerated computing. Cloud providers like Nscale and Nebius are deploying thousands of NVIDIA GPUs to enhance computational capabilities for research bodies, universities, and public services. The Prime Minister has pledged to invest approximately £1 billion in AI research compute by 2030, signaling a commitment to turning Britain into an AI superpower and attracting tech investment to stimulate economic growth.
References: techinformed.com, www.artificialintelligence-news.com
NVIDIA is significantly investing in the UK to bolster its AI infrastructure and address the growing skills gap in the field. CEO Jensen Huang has pledged support for building the necessary infrastructure to power AI advancements across the nation. This commitment involves establishing an AI Technology Center in the UK, providing hands-on training in key areas like AI, data science, and accelerated computing. The focus will be on supporting foundation model builders, embodied AI, materials science, and earth systems modeling, ensuring the UK has the talent pool to capitalize on AI opportunities.
The financial sector is set to benefit from this partnership through a new AI-powered sandbox initiative by the Financial Conduct Authority (FCA). This sandbox will enable banks and other City firms to safely experiment with NVIDIA's AI technology under the regulator's supervision. NayaOne will provide the infrastructure, with NVIDIA supplying the technological backbone. This initiative aims to "speed up innovation" and fulfill government objectives to boost UK growth. The FCA's support of AI comes as the regulator encourages more risk-taking across the City to help spur growth and competitiveness. The collaboration is intended to help firms harness AI to benefit UK markets and consumers, while also supporting broader economic growth. Sumant Kumar, CTO for Banking & Financial Markets at NTT DATA UK&I, highlighted the potential of this "supercharged sandbox" to help banks achieve viable AI prototypes.
Cloud providers like Nscale and Nebius are also contributing to the UK's AI capabilities by deploying thousands of NVIDIA Blackwell GPUs, offering much-needed computational power for research, universities, and public services, including the NHS.
References: www.linkedin.com
Nvidia has once again asserted its dominance in the AI training landscape with the release of the MLPerf Training v5.0 results. The company's Blackwell GB200 accelerators achieved record time-to-train scores, showcasing a significant leap in performance. This latest benchmark suite included submissions from various companies, but Nvidia's platform stood out, particularly in the most demanding large language model (LLM)-focused test involving Llama 3.1 405B pretraining. These results underscore the rapid growth and evolution of the AI field, with the Blackwell architecture demonstrably meeting the heightened performance demands of next-generation AI applications.
The MLPerf Training v5.0 results highlight Nvidia's commitment to versatility, as it was the only platform to submit results across every benchmark. The at-scale submissions leveraged two AI supercomputers powered by the Blackwell platform: Tyche, built using GB200 NVL72 rack-scale systems, and Nyx, based on DGX B200 systems. Additionally, Nvidia collaborated with CoreWeave and IBM, utilizing a cluster of 2,496 Blackwell GPUs and 1,248 Grace CPUs. On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared to the previous-generation architecture at the same scale.
The performance gains are attributed to advancements in the Blackwell architecture, encompassing high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVLink and NVLink Switch interconnect technologies for scale-up, and Quantum-2 InfiniBand networking for scale-out. These technological innovations, combined with the NVIDIA NeMo Framework software stack, are raising the bar for next-generation multimodal LLM training. While AMD did showcase generational performance gains, Nvidia's GPUs reigned supreme, outpacing AMD's MI325X in MLPerf benchmarks and solidifying Nvidia's position as a leader in AI training capabilities.
References: www.linkedin.com
Nvidia's Blackwell GPUs have achieved top rankings in the latest MLPerf Training v5.0 benchmarks, demonstrating breakthrough performance across various AI workloads. The NVIDIA AI platform delivered the highest performance at scale on every benchmark, including the most challenging large language model (LLM) test, Llama 3.1 405B pretraining. Nvidia was the only vendor to submit results on all MLPerf Training v5.0 benchmarks, highlighting the versatility of the NVIDIA platform across a wide array of AI workloads, including LLMs, recommendation systems, multimodal LLMs, object detection, and graph neural networks.
The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. Nvidia collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. The GB200 NVL72 systems achieved 90% scaling efficiency up to 2,496 GPUs, improving time-to-convergence by up to 2.6x compared to Hopper-generation H100.
The new MLPerf Training v5.0 benchmark suite introduces a pretraining benchmark based on the Llama 3.1 405B generative AI system, the largest model to be introduced in the training benchmark suite. On this benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale. Furthermore, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance gains highlight advancements in the Blackwell architecture and software stack, including high-density liquid-cooled racks, fifth-generation NVLink and NVLink Switch interconnect technologies, and NVIDIA Quantum-2 InfiniBand networking.
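The 90% figure is a scaling-efficiency measure. Under the usual definition (observed speedup from adding GPUs divided by the ideal linear speedup), it can be computed as below; the GPU counts and times here are illustrative placeholders, not numbers from NVIDIA's submission.

```python
# Scaling efficiency as commonly reported for large training runs:
# the observed speedup, divided by the ideal (perfectly linear) speedup.
def scaling_efficiency(time_small, gpus_small, time_large, gpus_large):
    observed_speedup = time_small / time_large  # how much faster the big run is
    ideal_speedup = gpus_large / gpus_small     # what linear scaling would give
    return observed_speedup / ideal_speedup

# Illustrative: 8x the GPUs cutting time-to-train by 7.2x gives 90% efficiency.
eff = scaling_efficiency(time_small=36.0, gpus_small=312,
                         time_large=5.0, gpus_large=2496)
print(f"{eff:.0%}")
```

Efficiency below 100% reflects communication and synchronization overhead, which is exactly what interconnects like NVLink and InfiniBand are meant to minimize at scale.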
References: thenewstack.io, MarkTechPost
Nvidia is reportedly developing a new AI chip, the B30, specifically tailored for the Chinese market to comply with U.S. export controls. This Blackwell-based alternative aims to offer multi-GPU scaling capabilities, potentially through NVLink or ConnectX-8 SuperNICs. While earlier reports suggested different names like RTX Pro 6000D or B40, B30 could be one variant within the BXX family. The design incorporates GB20X silicon, which also powers consumer-grade RTX 50 GPUs, but may exclude NVLink support seen in prior generations due to its absence in consumer-grade GPU dies.
Nvidia has also introduced Fast-dLLM, a training-free framework designed to enhance the inference speed of diffusion large language models (LLMs). Diffusion models, explored as an alternative to autoregressive models, promise faster decoding through simultaneous multi-token generation, enabled by bidirectional attention mechanisms. However, their practical application has been limited by inefficient inference, largely due to the lack of key-value (KV) caching, which accelerates performance by reusing previously computed attention states. Fast-dLLM aims to address this by bringing KV caching and parallel decoding capabilities to diffusion LLMs, potentially surpassing autoregressive systems.
During his keynote speech at GTC 2025, Nvidia CEO Jensen Huang emphasized the accelerating pace of artificial intelligence development and the critical need for optimized AI infrastructure. He stated Nvidia would shift to the Blackwell architecture for future China-bound chips, discontinuing Hopper-based alternatives following the H20 ban. Huang's focus on AI infrastructure highlights the industry's recognition that robust, scalable systems are needed to support the growing demands of AI applications.
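To see why KV caching matters, here is a toy, single-head sketch of the idea: each decode step computes attention for the new position only, reusing the key/value entries cached from earlier steps instead of recomputing them. This is a generic illustration of KV caching, not Fast-dLLM's actual block-wise algorithm, and real implementations operate on large per-layer, per-head tensors.

```python
import math

def attention(query, keys, values):
    """Single-query scaled dot-product attention over the cached keys/values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                               # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

class KVCache:
    """Reuses previously computed key/value projections across decode steps."""
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, query, new_key, new_value):
        # Only the new token's K/V are computed this step; earlier entries
        # are reused from the cache rather than recomputed.
        self.keys.append(new_key)
        self.values.append(new_value)
        return attention(query, self.keys, self.values)

cache = KVCache()
out1 = cache.step([1.0, 0.0], [1.0, 0.0], [0.5, 0.5])
out2 = cache.step([0.0, 1.0], [0.0, 1.0], [1.0, 0.0])
print(len(cache.keys))  # 2 cached entries after two decode steps
```

Without the cache, every step would recompute keys and values for the entire prefix, which is the redundant work that makes uncached decoding slow.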
References: blogs.nvidia.com, futurumgroup.com
NVIDIA reported a significant jump in Q1 FY 2026 revenue, increasing 69% year-over-year to $44.1 billion. This growth was fueled by strong demand in both the data center and gaming segments, driven by the anticipation and initial deployments of the Blackwell architecture. Despite U.S. export restrictions affecting its H20 chips in China, NVIDIA's performance reflects sustained global demand for AI computing. The company is actively scaling Blackwell deployments while navigating these export-related challenges, supported by traction in sovereign AI initiatives that helped offset the headwinds in China.
NVIDIA's CEO, Jensen Huang, highlighted the full-scale production of the Blackwell NVL72 AI supercomputer, describing it as a "thinking machine" for reasoning. He emphasized the incredibly strong global demand for NVIDIA's AI infrastructure, noting a tenfold surge in AI inference token generation within a year. Huang anticipates that as AI agents become more mainstream, the demand for AI computing will accelerate further. The company's data center revenue reached $39.1 billion, a 73% increase year-over-year, showcasing the impact of the Blackwell ramp-up and the adoption of accelerated AI inference.
Beyond infrastructure, NVIDIA is also expanding its reach through strategic partnerships. NVIDIA and MediaTek are collaborating to develop an ARM-based mobile APU specifically designed for gaming laptops. This collaboration aims to combine NVIDIA's graphics expertise with MediaTek's compute capabilities to create a product that could rival AMD's Strix Halo. The planned APU will focus on power efficiency and thermal performance, which are crucial for modern gaming laptops with thinner chassis.
References: TechHQ, futurumgroup.com
Dell Technologies and NVIDIA are collaborating to construct the "Doudna" supercomputer for the U.S. Department of Energy (DOE). Named after Nobel laureate Jennifer Doudna, the system will be housed at the Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) and is slated for deployment in 2026. This supercomputer aims to revolutionize scientific research by merging artificial intelligence (AI) with simulation capabilities, empowering over 11,000 researchers across various disciplines, including fusion energy, astronomy, and life sciences. The project represents a significant federal investment in high-performance computing (HPC) infrastructure, designed to maintain U.S. leadership in AI and scientific discovery.
The Doudna supercomputer, also known as NERSC-10, promises a tenfold increase in scientific output compared to its predecessor, Perlmutter, while consuming only two to three times the power. This translates to a three-to-five-fold improvement in performance per watt, achieved through advanced chip design and system-level efficiencies. The system integrates high-performance CPUs with coherent GPUs, enabling direct data access and sharing across processors and addressing traditional bottlenecks in scientific computing workflows. Doudna will also be connected to DOE experimental and observational facilities through the Energy Sciences Network (ESnet), facilitating seamless data streaming and near real-time analysis.
According to NVIDIA CEO Jensen Huang, Doudna is designed to accelerate scientific workflows and act as a "time machine for science," compressing years of discovery into days. Energy Secretary Chris Wright sees it as essential infrastructure for maintaining American technological leadership in AI and quantum computing. The supercomputer emphasizes coherent memory access between CPUs and GPUs, enabling data sharing between heterogeneous processors, a requirement for modern AI-accelerated scientific workflows. The NVIDIA Vera Rubin supercomputer architecture introduces hardware-level optimizations designed specifically for the convergence of simulation, machine learning, and quantum algorithm development.
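The performance-per-watt claim follows directly from the two figures quoted, and the arithmetic is worth a quick check: tenfold output at two to three times the power.

```python
# Sanity check: 10x the scientific output at 2-3x the power draw implies
# roughly a 3.3x-5x improvement in performance per watt.
output_gain = 10.0
for power_gain in (2.0, 3.0):
    perf_per_watt_gain = output_gain / power_gain
    print(f"{power_gain:.0f}x power -> {perf_per_watt_gain:.1f}x perf/watt")
```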
References: Dashveenjit Kaur (TechHQ), futurumgroup.com, insideAI News
Nvidia is significantly expanding its "AI Factory" offerings, collaborating with Dell Technologies to power the next wave of AI infrastructure. This includes the development of the next-generation NERSC-10 supercomputer, dubbed Doudna, based on the Nvidia Vera Rubin supercomputer architecture. This new system is designed to accelerate scientific discovery across fields such as fusion energy, astronomy, and life sciences, benefiting around 11,000 researchers. The Doudna supercomputer will be built by Dell and aims to deliver a tenfold increase in scientific output compared to its predecessor, Perlmutter, while significantly improving power efficiency.
The Doudna supercomputer represents a major investment in scientific computing infrastructure and is named after CRISPR pioneer Jennifer Doudna. Unlike traditional supercomputers, Doudna's architecture prioritizes coherent memory access between CPUs and GPUs, enabling efficient data sharing for modern AI-accelerated scientific workflows. This innovative design integrates simulation, machine learning, and quantum algorithm development, areas increasingly defining cutting-edge research. The supercomputer is expected to be deployed in 2026 and is seen as crucial for maintaining American technological leadership in the face of escalating global competition in AI and quantum computing.
In addition to expanding AI infrastructure, Nvidia, like AMD, is working to develop chips that comply with export rules for the China AI market. This move comes as the U.S. government has restricted the export of certain high-performance GPUs to China. Nvidia CEO Jensen Huang has emphasized the strategic importance of the Chinese market, noting that a significant portion of AI developers reside there. By creating chips that adhere to U.S. trade restrictions, Nvidia aims to continue serving the Chinese market while ensuring that AI development continues on Nvidia's CUDA platform.
References: NVIDIA Newsroom, hothardware.com
NVIDIA has officially launched a native GeForce NOW app for the Steam Deck, significantly enhancing the portable gaming experience. This new app provides Steam Deck users with access to high-quality, GeForce RTX-powered gameplay, allowing them to stream their favorite games on the go. The native application, promised earlier this year, unlocks the full potential of Valve's handheld device for cloud gaming. This integration transforms the Steam Deck into a more formidable gaming handheld, making it more competitive against other handheld devices in the market.
The GeForce NOW app enables Steam Deck gamers to access over 2,200 supported games from various platforms, including Steam, Epic Games Store, Ubisoft, Battle.net, and Xbox, with over 180 supported PC Game Pass titles. Gamers can now enjoy graphics-intensive AAA titles like Clair Obscur: Expedition 33, Elder Scrolls IV: Oblivion Remastered, Monster Hunter Wilds, and Microsoft Flight Simulator 2024 at max settings without concerns about hardware limitations or battery drain. The app also allows gamers to stream titles on the Steam Deck at up to 4K 60 frames per second when connected to a TV, complete with HDR10, NVIDIA DLSS 4, and Reflex technologies on supported games.
In addition to expanding the game library and enhancing visual quality, the GeForce NOW app offers significant performance and battery life improvements for Steam Deck users. NVIDIA claims the app can extend battery life by up to 50% compared to native gameplay, as the games run on NVIDIA's cloud servers, reducing the strain on the Steam Deck's hardware. To celebrate the launch, NVIDIA is giving away Steam Deck OLEDs, Steam Deck Docks, and GeForce NOW Ultimate memberships through social media contests. Also launched this week is Tokyo Xtreme Racing, a new racing game from Japanese developer Genki.