Jowi Morales@tomshardware.com
//
NVIDIA is partnering with Germany and Deutsche Telekom to build Europe's first industrial AI cloud, a project hailed as one of the most ambitious tech endeavors on the continent. This initiative aims to establish Germany as a leader in AI manufacturing and innovation. NVIDIA's CEO, Jensen Huang, met with Chancellor Friedrich Merz to discuss the new partnerships that will drive breakthroughs on this AI cloud.
This "AI factory," located in Germany, will provide European industrial leaders with the computational power needed to revolutionize manufacturing processes, from design and engineering to simulation and robotics. The goal is to empower European industrial players to lead in simulation-first, AI-driven manufacturing. Deutsche Telekom's CEO, Timotheus Höttges, emphasized the urgency of seizing AI opportunities to revolutionize the industry and secure a leading position in global technology competition. The first phase of the project will involve deploying 10,000 NVIDIA Blackwell GPUs across various high-performance systems, making it Germany's largest AI deployment. This infrastructure will also feature NVIDIA networking and AI software. NEURA Robotics, a German firm specializing in cognitive robotics, plans to utilize these resources to power its Neuraverse, a network where robots can learn from each other. This partnership between NVIDIA and Germany signifies a critical step towards achieving technological sovereignty in Europe and accelerating AI development across industries. Recommended read:
References :
@techinformed.com
//
NVIDIA CEO Jensen Huang and UK Prime Minister Keir Starmer have recently joined forces at London Tech Week to solidify the UK's position as a leading force in AI. This collaborative effort aims to bolster the nation's digital infrastructure and promote AI development across various sectors. Starmer has committed £1 billion in investment to supercharge the AI sector, emphasizing the UK's ambition to be at the forefront of AI innovation rather than simply consuming the technology. Huang highlighted the UK's rich AI community, world-class universities, and significant AI capital investment as key factors positioning it for success.
The partnership includes the establishment of a dedicated NVIDIA AI Technology Center in the UK. This center will provide hands-on training in crucial areas such as AI, data science, and accelerated computing, with a specific focus on nurturing talent in foundation model building, embodied AI, materials science, and earth systems modeling. The initiative aims to tackle the existing AI skills gap, ensuring that the UK has a workforce capable of leveraging the new infrastructure and technologies being developed.

Cloud providers are also stepping up with significant GPU deployments: Nscale plans 10,000 NVIDIA Blackwell GPUs by late 2026, and Nebius has revealed plans for an AI factory with 4,000 NVIDIA Blackwell GPUs.

Furthermore, the UK's financial sector is set to benefit from this AI push. The Financial Conduct Authority (FCA) is launching a 'supercharged sandbox' scheme, allowing banks and other City firms to experiment safely with NVIDIA AI products. This initiative aims to speed up innovation and boost UK growth by integrating AI into the financial sector. Potential applications include intercepting authorized push payment fraud and identifying stock market manipulation, showcasing the potential of AI to enhance customer service and data analytics within the financial industry.
References :
@www.marktechpost.com
//
References : thenewstack.io, MarkTechPost
Nvidia is reportedly developing a new AI chip, the B30, specifically tailored for the Chinese market to comply with U.S. export controls. This Blackwell-based alternative aims to offer multi-GPU scaling capabilities, potentially through NVLink or ConnectX-8 SuperNICs. While earlier reports suggested different names like RTX Pro 6000D or B40, B30 could be one variant within the BXX family. The design incorporates GB20X silicon, which also powers consumer-grade RTX 50 GPUs, but may exclude NVLink support seen in prior generations due to its absence in consumer-grade GPU dies.
Nvidia has also introduced Fast-dLLM, a training-free framework designed to enhance the inference speed of diffusion large language models (LLMs). Diffusion models, explored as an alternative to autoregressive models, promise faster decoding through simultaneous multi-token generation, enabled by bidirectional attention mechanisms. However, their practical application is limited by inefficient inference, largely due to the lack of key-value (KV) caching, which accelerates performance by reusing previously computed attention states. Fast-dLLM aims to address this by bringing KV caching and parallel decoding capabilities to diffusion LLMs, potentially surpassing autoregressive systems.

During his keynote speech at GTC 2025, Nvidia CEO Jensen Huang emphasized the accelerating pace of artificial intelligence development and the critical need for optimized AI infrastructure. He stated Nvidia would shift to the Blackwell architecture for future China-bound chips, discontinuing Hopper-based alternatives following the H20 ban. Huang's focus on AI infrastructure highlights the industry's recognition of the importance of robust and scalable systems to support the growing demands of AI applications.
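The KV-caching idea mentioned above can be illustrated with a toy sketch: without a cache, every decoding step recomputes keys and values for the entire prefix; with a cache, only the newest token's key and value are computed and appended. The shapes and random projections below are illustrative only and are not drawn from Fast-dLLM itself.

```python
import numpy as np

d = 8  # head dimension (illustrative)
rng = np.random.default_rng(0)
Wk, Wv = rng.standard_normal((d, d)), rng.standard_normal((d, d))

def attend(q, K, V):
    # Single-head scaled dot-product attention over the cached prefix.
    scores = q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
for step in range(4):                       # decode 4 tokens
    x = rng.standard_normal(d)              # hidden state of the newest token
    K_cache = np.vstack([K_cache, x @ Wk])  # append; the prefix is never recomputed
    V_cache = np.vstack([V_cache, x @ Wv])
    out = attend(x, K_cache, V_cache)

print(K_cache.shape)  # (4, 8): one cached key row per decoded token
```

The per-step cost with the cache grows only with the single new token; recomputing K and V for the whole prefix each step is what makes uncached diffusion-LLM inference slow.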
References :
@futurumgroup.com
//
References : blogs.nvidia.com, futurumgroup.com
NVIDIA reported a significant jump in Q1 FY 2026 revenue, increasing by 69% year-over-year to $44.1 billion. This growth was fueled by strong demand in both the data center and gaming segments, driven by the anticipation and initial deployments of their Blackwell architecture. Despite facing export restrictions in China related to H20, NVIDIA’s performance reflects sustained global demand for AI computing. The company is actively scaling Blackwell deployments while navigating these export-related challenges, supported by traction in sovereign AI initiatives which helped offset the headwinds in China.
NVIDIA's CEO, Jensen Huang, highlighted the full-scale production of the Blackwell NVL72 AI supercomputer, describing it as a "thinking machine" for reasoning. He emphasized the incredibly strong global demand for NVIDIA's AI infrastructure, noting a tenfold surge in AI inference token generation within a year. Huang anticipates that as AI agents become more mainstream, the demand for AI computing will accelerate further. The company's data center revenue reached $39.1 billion, a 73% increase year-over-year, showcasing the impact of the Blackwell ramp-up and the adoption of accelerated AI inference.

Beyond infrastructure, NVIDIA is also expanding its reach through strategic partnerships. NVIDIA and MediaTek are collaborating to develop an ARM-based mobile APU specifically designed for gaming laptops. This collaboration aims to combine NVIDIA's graphics expertise with MediaTek's compute capabilities to create a product that could rival AMD's Strix Halo. The planned APU will focus on power efficiency and thermal performance, which are crucial for modern gaming laptops with thinner chassis.
References :
Heng Chi@AI Accelerator Institute
//
AI is revolutionizing data management and analytics across various platforms. Amazon Web Services (AWS) is facilitating the development of high-performance data pipelines for AI and Natural Language Processing (NLP) applications, utilizing services like Amazon S3, AWS Lambda, AWS Glue, and Amazon SageMaker. These pipelines are essential for ingesting, processing, and providing output for training, inference, and decision-making at a large scale, leveraging AWS's scalability, flexibility, and cost-efficiency. AWS's auto-scaling options, seamless integration with ML and NLP workflows, and pay-as-you-go pricing model make it a preferred choice for businesses of all sizes.
Microsoft is simplifying data visualization with its new AI-powered tool, Data Formulator. This open-source application, developed by Microsoft Research, uses Large Language Models (LLMs) to transform data into interesting charts and graphs, even for users without extensive data manipulation and visualization knowledge. Data Formulator differentiates itself with its intuitive user interface and hybrid interactions, bridging the gap between visualization ideas and their actual creation. By supplementing natural language inputs with drag-and-drop interactions, it allows users to express visualization intent, with the AI handling the complex transformations in the background.

Yandex has released Yambda, the world's largest publicly available event dataset, to accelerate recommender systems research and development. This dataset contains nearly 5 billion anonymized user interaction events from Yandex Music, offering a valuable resource for bridging the gap between academic research and industry-scale applications. Yambda addresses the scarcity of large, openly accessible datasets in the field of recommender systems, which has traditionally lagged behind other AI domains due to the sensitive nature and commercial value of behavioral data.

Additionally, Dremio is collaborating with Confluent's TableFlow to provide real-time analytics on Apache Iceberg data, enabling users to stream data from Kafka into queryable tables without manual pipelines, accelerating insights and reducing ETL complexity.
References :
@insidehpc.com
//
References : insideAI News, www.artificialintelligence-new
MiTAC Computing Technology and AMD are strengthening their partnership to deliver cutting-edge solutions for AI, HPC, cloud-native, and enterprise applications. MiTAC will showcase this collaboration at COMPUTEX 2025, highlighting their shared vision for scalable and energy-efficient technologies. This partnership, which began in 2002, leverages AMD EPYC processors and Instinct GPUs to meet the evolving demands of modern data centers. Rick Hwang, President of MiTAC Computing Technology, emphasized their excitement in advancing server solutions powered by AMD's latest processors and GPUs, stating that it's key to unlocking new capabilities for their global customer base in AI and HPC infrastructure.
Specifically, MiTAC and AMD are developing next-generation server platforms. One notable product is an 8U server equipped with dual AMD EPYC 9005 Series processors and support for up to 8 AMD Instinct MI325X GPUs, offering exceptional compute density and up to 6TB of DDR5-6400 memory, ideal for large-scale AI model training and scientific applications. Additionally, they are offering a 2U dual-socket GPU server that supports up to four dual-slot GPUs, with 24 DDR5-6400 RDIMM slots and tool-less NVMe storage carriers providing high-speed throughput and flexibility for deep learning and HPC environments.

Meanwhile, Nvidia is preparing to compete with Huawei in the Chinese AI chip market by releasing a budget-friendly AI chip. This strategy is driven by the need to maintain relevance in the face of growing domestic competition and navigate export restrictions. The new chip, priced between $6,500 and $8,000, represents a significant cost reduction compared to the previously banned H20 model. This reduction involves trade-offs, such as using Nvidia's RTX Pro 6000D foundation with standard GDDR7 memory and foregoing Taiwan Semiconductor's advanced CoWoS packaging technology.
References :
Stephen Warwick@tomshardware.com
//
References : www.artificialintelligence-new, www.tomshardware.com
OpenAI is significantly expanding its AI infrastructure, with the launch of Stargate UAE marking the first international deployment of its Stargate AI platform. This expansion begins with a 1GW cluster in Abu Dhabi and is the first partnership under the OpenAI for Countries initiative, aimed at helping governments build sovereign AI capabilities. OpenAI says that coordination with the U.S. government was vital in making the expansion possible, highlighting the importance of democratic values, open markets, and trusted partnerships in this endeavor. The partnership includes reciprocal UAE investment into the U.S. Stargate infrastructure.
This ambitious project also promises new opportunities for developers. The OpenAI Responses API is now the first truly agentic API, which allows developers to combine MCP servers, code interpreter, reasoning, web search, and RAG - all within a single API call. This unified approach is set to enable the creation of a new generation of AI agents, streamlining the development process and expanding the capabilities of AI applications.

Recent details have emerged about Jony Ive and Sam Altman's collaboration on an AI device, codenamed "io," which OpenAI has acquired for $6.5 billion. The device is envisioned as a "central facet of using OpenAI," with Altman suggesting that subscribers to ChatGPT could receive new computers directly from the company. The aim is to create an AI "companion" that is entirely aware of a user's surroundings and life, potentially evolving into a family of devices.
References :
@blogs.nvidia.com
//
NVIDIA is significantly expanding its presence in the AI ecosystem through strategic partnerships and the introduction of innovative technologies. At Computex 2025, CEO Jensen Huang unveiled NVLink Fusion, a groundbreaking program that opens NVIDIA's high-speed NVLink interconnect technology to non-NVIDIA CPUs and accelerators. This move is poised to solidify NVIDIA's role as a central component in AI infrastructure, even in systems utilizing silicon from other vendors, including MediaTek, Marvell, Fujitsu, and Qualcomm. This initiative represents a major shift from NVIDIA's previously exclusive use of NVLink and is intended to enable the creation of semi-custom AI infrastructures tailored to specific needs.
This strategy ensures that while customers may incorporate rival chips, the underlying AI ecosystem remains firmly rooted in NVIDIA's technologies, including its GPUs, interconnects, and software stack.

NVIDIA is also teaming up with Foxconn to construct an AI supercomputer in Taiwan, further demonstrating its commitment to advancing AI capabilities in the region. The collaboration will see Foxconn subsidiary, Big Innovation Company, delivering the infrastructure for 10,000 NVIDIA Blackwell GPUs. This substantial investment aims to empower Taiwanese organizations by providing the necessary AI cloud computing resources to facilitate the adoption of AI technologies across both private and public sectors.

In addition to hardware advancements, NVIDIA is also investing in quantum computing research. Taiwan's National Center for High-Performance Computing (NCHC) is deploying a new NVIDIA-powered AI supercomputer designed to support climate science, quantum research, and the development of large language models. Built by ASUS, this supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, along with other advanced NVIDIA technologies. This initiative aligns with NVIDIA's broader strategy to drive breakthroughs in sovereign AI, quantum computing, and advanced scientific computation, positioning Taiwan as a key hub for AI development and technological autonomy.
References :
Joe DeLaere@NVIDIA Technical Blog
//
NVIDIA has announced the opening of its NVLink technology to rival companies, a move revealed by CEO Jensen Huang at Computex 2025. The new program, called NVLink Fusion, allows companies making custom CPUs and accelerators to license the NVLink port designs. This opens the door for non-NVIDIA chips to integrate with NVIDIA's AI infrastructure, fostering a more flexible AI hardware ecosystem. MediaTek, Marvell, Fujitsu, and Qualcomm are among the early partners signing on to integrate their chips with NVIDIA's GPUs via NVLink Fusion.
NVIDIA's decision to extend NVLink support is a strategic play to remain central to the AI landscape. By enabling companies to combine their custom silicon with NVIDIA's technology, NVIDIA ensures it remains essential to their AI strategies and potentially captures additional revenue streams. NVLink Fusion allows for semi-custom AI infrastructure where other processors are involved, but the underlying connective tissue belongs to NVIDIA. The high-speed interconnect offers significantly higher bandwidth compared to PCIe 5.0, offering advantages for CPU-to-GPU communications.

This expansion doesn't mean NVIDIA is entirely opening the interconnect standard. Connecting an Intel CPU to an AMD GPU directly using NVLink Fusion remains impossible. NVIDIA is essentially allowing semi-custom accelerator designs to take advantage of the high-speed interconnect even if the accelerator isn't designed by NVIDIA.

As part of the announcement, NVIDIA also unveiled its next-generation Grace Blackwell systems and a new AI platform called DGX Cloud Lepton, further solidifying its position in the AI compute market.
References :
Joe DeLaere@NVIDIA Technical Blog
//
NVIDIA has unveiled NVLink Fusion, a technology that expands the capabilities of its high-speed NVLink interconnect to custom CPUs and ASICs. This move allows customers to integrate non-NVIDIA CPUs or accelerators with NVIDIA's GPUs within their rack-scale setups, fostering the creation of heterogeneous computing environments tailored for diverse AI workloads. This technology opens up the possibility of designing semi-custom AI infrastructure with NVIDIA's NVLink ecosystem, allowing hyperscalers to leverage the innovations in NVLink, NVIDIA NVLink-C2C, NVIDIA Grace CPU, NVIDIA GPUs, NVIDIA Co-Packaged Optics networking, rack scale architecture, and NVIDIA Mission Control software.
NVLink Fusion enables users to deliver top performance scaling with semi-custom ASICs or CPUs. As hyperscalers are already deploying full NVIDIA rack solutions, this expansion caters to the increasing demand for specialized AI factories, where diverse accelerators work together at rack-scale with maximal bandwidth and minimal latency to support the largest number of users in the most power-efficient way. The advantage of using NVLink for CPU-to-GPU communications is that it offers 14x higher bandwidth compared to PCIe 5.0 (128 GB/s). The technology will be offered in two configurations: the first connects custom CPUs to NVIDIA GPUs, while the second pairs NVIDIA's Grace CPUs with custom accelerators.

NVIDIA CEO Jensen Huang emphasized that AI is becoming a fundamental infrastructure, akin to the internet and electricity. He envisions an AI infrastructure industry worth trillions of dollars, powered by AI factories that produce valuable tokens. NVIDIA's approach involves expanding its ecosystem through partnerships and platforms like CUDA-X, which is used across a range of applications. NVLink Fusion is a crucial part of this vision, enabling the construction of semi-custom AI systems and solidifying NVIDIA's role at the center of AI development.
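The bandwidth claim above is easy to sanity-check: 14x PCIe 5.0's cited 128 GB/s lands at roughly 1.8 TB/s per GPU, which matches the per-GPU NVLink figure NVIDIA quotes for its Blackwell generation.

```python
# Quick arithmetic on the interconnect figures cited in the text.
pcie5_gb_s = 128              # PCIe 5.0 x16 bandwidth as cited (GB/s)
nvlink_gb_s = 14 * pcie5_gb_s # "14x higher bandwidth"

print(nvlink_gb_s)            # 1792 GB/s, i.e. ~1.8 TB/s per GPU
```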
References :
@blogs.nvidia.com
//
NVIDIA's CEO, Jensen Huang, has presented a bold vision for the future of technology, forecasting that the artificial intelligence infrastructure industry will soon be worth trillions of dollars. Huang emphasized AI's transformative potential across all sectors globally during his Computex 2025 keynote in Taipei. He envisions AI becoming as essential as electricity and the internet, necessitating "AI factories" to produce valuable tokens by applying energy. These factories are not simply data centers but sophisticated environments that will drive innovation and growth.
NVIDIA is actively working to solidify its position as a leader in this burgeoning AI landscape. A key strategy involves expanding its research and development footprint, with plans to establish a new R&D center in Shanghai. This initiative, proposed during a meeting with Shanghai Mayor Gong Zheng, includes leasing additional office space to accommodate current staff and future expansion. The Shanghai center will focus on tailoring AI solutions for Chinese clients and contributing to global R&D efforts in areas such as chip design verification, product optimization, and autonomous driving technologies, with the Shanghai government expressing initial support for the project.

Furthermore, NVIDIA is collaborating with Foxconn and the Taiwan government to construct an AI factory supercomputer, equipped with 10,000 NVIDIA Blackwell GPUs. This AI factory will provide state-of-the-art infrastructure to researchers, startups, and industries, significantly expanding AI computing availability and fueling innovation within Taiwan's technology ecosystem. Huang highlighted the importance of Taiwan in the global technology ecosystem, noting that NVIDIA is helping build AI not only for the world but also for Taiwan, emphasizing the strategic partnerships and investments crucial for realizing the AI-powered future.
References :
Pomi Lee@NVIDIA Technical Blog
//
References : NVIDIA Technical Blog, The Register - Software
NVIDIA CEO Jensen Huang unveiled an ambitious vision for the future of AI at COMPUTEX 2025, declaring AI as the next major technology poised to transform every industry and country. He emphasized the need for "AI factories," describing them as specialized data centers that produce valuable "tokens" by applying energy. Huang highlighted NVIDIA's CUDA-X platform and its partnerships, showcasing how these are driving advancements in areas such as 6G development and quantum supercomputing. He stressed the importance of Taiwan in the global technology ecosystem.
NVIDIA is expanding its AI ecosystem by opening up its high-speed NVLink interconnect technology to custom CPUs and ASICs via NVLink Fusion. This move allows for greater integration of custom compute solutions into rack-scale architectures. The NVLink fabric, known for its high bandwidth capabilities, facilitates seamless communication between GPUs and CPUs, offering a significant advantage over PCIe 5.0. Nvidia is allowing semi-custom accelerator designs to take advantage of the high-speed interconnect - even for non-Nvidia-designed accelerators.

NVIDIA and Foxconn are partnering with the Taiwan government to construct an AI factory supercomputer, equipped with 10,000 NVIDIA Blackwell GPUs, to support local researchers and enterprises. This supercomputer, facilitated by Foxconn's Big Innovation Company, will provide AI cloud computing resources to the Taiwan technology ecosystem. This collaboration aims to accelerate AI development and adoption across various sectors, reinforcing Taiwan's position as a key player in the global AI landscape.
References :
@Dataconomy
//
Databricks has announced its acquisition of Neon, an open-source database startup specializing in serverless Postgres, in a deal reportedly valued at $1 billion. This strategic move is aimed at enhancing Databricks' AI infrastructure, specifically addressing the database bottleneck that often hampers the performance of AI agents. Neon's technology allows for the rapid creation and deployment of database instances, spinning up new databases in milliseconds, which is critical for the speed and scalability required by AI-driven applications. The integration of Neon's serverless Postgres architecture will enable Databricks to provide a more streamlined and efficient environment for building and running AI agents.
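The "new databases in milliseconds" property rests on a copy-on-write design: a freshly provisioned branch shares its parent's storage pages and records only its own writes, so creation copies no data. The sketch below is a toy model of that pattern, not Neon's actual implementation.

```python
# Toy copy-on-write database branching: a child branch reads through to
# its parent and stores only local modifications. Class and key names
# are invented for illustration.

class Branch:
    def __init__(self, parent=None):
        self.parent = parent
        self.writes = {}            # only this branch's own writes live here

    def get(self, key):
        if key in self.writes:
            return self.writes[key]
        return self.parent.get(key) if self.parent else None

    def set(self, key, value):
        self.writes[key] = value

main = Branch()
main.set("users", 100)

dev = Branch(parent=main)           # "provisioned" instantly: no data copied
dev.set("users", 5)                 # diverges without touching the parent

print(main.get("users"), dev.get("users"))  # 100 5
```

Because provisioning is just object creation here, software (such as an AI agent) can create and discard branches freely, which is the behavior behind Neon's reported figure of 80% of instances being provisioned programmatically.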
Databricks plans to incorporate Neon's scalable Postgres offering into its existing big data platform, eliminating the need to scale separate server and storage components in tandem when responding to AI workload spikes. This resolves a common issue in modern cloud architectures where users are forced to over-provision either compute or storage to meet the demands of the other. With Neon's serverless architecture, Databricks aims to provide instant provisioning, separation of compute and storage, and API-first management, enabling a more flexible and cost-effective solution for managing AI workloads. According to Databricks, Neon reports that 80% of its database instances are provisioned by software rather than humans.

The acquisition of Neon is expected to give Databricks a competitive edge, particularly against competitors like Snowflake. While Snowflake currently lacks similar AI-driven database provisioning capabilities, Databricks' integration of Neon's technology positions it as a leader in the next generation of AI application building. The combination of Databricks' existing data intelligence platform with Neon's serverless Postgres database will allow for the programmatic provisioning of databases in response to the needs of AI agents, overcoming the limitations of traditional, manually provisioned databases.
References :
Evan Ackerman@IEEE Spectrum
//
Amazon has unveiled Vulcan, an AI-powered robot with a sense of touch, designed for use in its fulfillment centers. This groundbreaking robot represents a "fundamental leap forward in robotics," according to Amazon's director of applied science, Aaron Parness. Vulcan is equipped with sensors that allow it to "feel" the objects it is handling, enabling capabilities previously unattainable for Amazon robots. This sense of touch allows Vulcan to manipulate objects with greater dexterity and avoid damaging them or other items nearby.
Vulcan operates using "end of arm tooling" that includes force feedback sensors. These sensors enable the robot to understand how hard it is pushing or holding an object, ensuring it remains below the damage threshold. Amazon says that Vulcan can easily manipulate objects to make room for whatever it’s stowing, because it knows when it makes contact and how much force it’s applying. Vulcan helps to bridge the gap between humans and robots, bringing greater dexterity to the devices. The introduction of Vulcan addresses a significant challenge in Amazon's fulfillment centers, where the company handles a vast number of stock-keeping units (SKUs). While robots already play a crucial role in completing 75% of Amazon orders, Vulcan fills the ability gap of previous generations of robots. According to Amazon, one business per second is adopting AI, and Vulcan demonstrates the potential for AI and robotics to revolutionize warehouse operations. Amazon did not specify how many jobs the Vulcan model may create or displace. Recommended read:
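The force-feedback behavior described above amounts to a simple control loop: ramp grip force until the sensor registers contact, and never exceed a safe limit. The sketch below illustrates that loop; the threshold values and sensor model are invented for illustration and are not Amazon's Vulcan control code.

```python
# Toy force-feedback grasp loop: increase force in small steps until
# contact is sensed, aborting if the (hypothetical) damage threshold
# would be exceeded.

DAMAGE_THRESHOLD_N = 10.0     # hypothetical max safe force, newtons
STEP_N = 0.5                  # hypothetical force increment per cycle

def sensor_reads_contact(force):
    # Stand-in for a real end-of-arm force feedback sensor: here the
    # object "pushes back" once applied force reaches 3.0 N.
    return force >= 3.0

def grasp():
    force = 0.0
    while not sensor_reads_contact(force):
        force += STEP_N                     # ramp up gently
        if force > DAMAGE_THRESHOLD_N:
            raise RuntimeError("aborting: would exceed safe force")
    return force                            # hold at first contact

print(grasp())  # 3.0
```

The key point is that the loop's stopping condition comes from the sensor, not from a preplanned trajectory, which is what lets a touch-enabled robot stay below the damage threshold on objects of unknown stiffness.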
References :
@blogs.microsoft.com
//
References : blogs.microsoft.com, www.microsoft.com
Microsoft is aggressively pursuing an AI-first strategy, aiming to transform business operations for its customers. A key component of this initiative is the development and deployment of agentic AI solutions. According to Microsoft CEO Satya Nadella, a significant portion of Microsoft's code, specifically 20% to 30%, is now generated by AI, showcasing the company's deep integration of AI into its core development processes. This AI-driven approach promises to accelerate innovation and enable businesses to achieve more through autonomous capabilities.
Microsoft has officially launched Recall AI for Windows 11, an AI-powered search feature that captures periodic screenshots of user activity. This feature is available on Copilot+ PCs through the April 2025 non-security preview update. Recall aims to provide AI-driven memory search. Addressing earlier privacy concerns, Microsoft has ensured that Recall is disabled by default, requires opt-in, and encrypts all data locally. Access to this data requires Windows Hello authentication, and users can delete snapshots or block specific apps and websites from being recorded.

To further solidify its commitment to AI, Microsoft is expanding its cloud and AI infrastructure in Europe as part of five digital commitments. The company is also focused on helping organizations modernize their technology stacks to leverage AI effectively. According to a 2024 Forrester study, continuous modernization, including the incorporation of generative AI, is critical for driving competitive advantage. By setting a strong cloud foundation and embracing continuous migration and modernization, businesses can unlock the full potential of AI and remain competitive in a rapidly evolving technological landscape.
References :
Hassam@tomshardware.com
//
Microsoft CEO Satya Nadella has revealed that Artificial Intelligence is playing an increasingly significant role in the company's software development. Speaking at Meta's LlamaCon conference, Nadella stated that AI now writes between 20% and 30% of the code in Microsoft's repositories and projects. This underscores the growing influence of AI in revolutionizing software creation, especially for repetitive and data-heavy tasks, leading to efficiency gains. Nadella mentioned that AI is showing more promise in generating Python code compared to C++, due to Python's simpler syntax and better memory management.
Microsoft's embrace of AI in coding aligns with similar trends observed at other tech giants like Google, where AI is reported to generate over 30% of new code. The use of AI in code generation also brings forth concerns about job displacement for new programmers. Despite these anxieties, industry experts highlight the importance of software developers adapting to and leveraging AI tools, rather than ignoring them. Nadella emphasized that while AI can produce code, senior developer oversight remains critical to ensure the stability and reliability of the production environment.

Beyond its internal use, Microsoft is also making strategic moves to expand its cloud and AI infrastructure in Europe. This commitment to the European market includes pledges to fight for its European customers in U.S. courts if necessary, highlighting the importance of trans-Atlantic ties and digital resilience. Microsoft is dedicated to ensuring open access to its AI and cloud platform across Europe, and will be enhancing its AI Access Principles in the coming months. Furthermore, Microsoft is releasing the 2025 Work Trend Index, designed to help leaders and employees navigate the shifting landscape brought about by AI.
References :
NVIDIA Newsroom@NVIDIA Blog
//
Nvidia has announced a major initiative to manufacture its AI supercomputers entirely within the United States. The company aims to produce up to $500 billion worth of AI infrastructure in the U.S. over the next four years, partnering with major manufacturing firms like Taiwan Semiconductor Manufacturing Co (TSMC), Foxconn, Wistron, Amkor, and SPIL. This move marks the first time Nvidia will carry out chip packaging and supercomputer assembly entirely within the United States. The company sees this effort as a way to meet the increasing demand for AI chips, strengthen its supply chain, and boost resilience.
Nvidia is commissioning over a million square feet of manufacturing space to build and test Blackwell chips in Arizona and assemble AI supercomputers in Texas. Production of Blackwell chips has already begun at TSMC's chip plants in Phoenix, Arizona. The company is also constructing supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas, with mass production expected to ramp up within the next 12-15 months. These facilities are designed to support the deployment of "gigawatt AI factories", data centers specifically built for processing artificial intelligence.

CEO Jensen Huang emphasized the significance of bringing AI infrastructure manufacturing to the U.S., stating that "The engines of the world's AI infrastructure are being built in the United States for the first time." Nvidia also plans to deploy its own technologies to optimize the design and operation of the new facilities, utilizing platforms like Omniverse to simulate factory operations and Isaac GR00T to develop automated robotics systems. The company said domestic production could help drive long-term economic growth and job creation.
References :
NVIDIA Newsroom@NVIDIA Blog
//
Nvidia has announced plans to manufacture its AI supercomputers entirely within the United States, marking the first time the company will conduct chip packaging and supercomputer assembly domestically. The move, driven by increasing global demand for AI chips and the potential impact of tariffs, aims to establish a resilient supply chain and bolster the American AI ecosystem. Nvidia is partnering with major manufacturing firms including TSMC, Foxconn, and Wistron to construct and operate these facilities.
Mass production of Blackwell chips has already commenced at TSMC's Phoenix, Arizona plant. Nvidia is constructing supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas. These facilities are expected to ramp up production within the next 12-15 months. More than a million square feet of manufacturing space has been commissioned to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas. The company anticipates producing up to $500 billion worth of AI infrastructure in the U.S. over the next four years through these partnerships. This includes designing and building "gigawatt AI factories" to produce NVIDIA AI supercomputers completely within the US.

CEO Jensen Huang stated that American manufacturing will help meet the growing demand for AI chips and supercomputers, strengthen the supply chain and improve resiliency. The White House has lauded Nvidia's decision as "the Trump Effect in action".