News from the AI & ML world

DeeperML - #aiinfrastructure

Jowi Morales@tomshardware.com //
NVIDIA is partnering with Germany and Deutsche Telekom to build Europe's first industrial AI cloud, a project hailed as one of the most ambitious tech endeavors on the continent. This initiative aims to establish Germany as a leader in AI manufacturing and innovation. NVIDIA's CEO, Jensen Huang, met with Chancellor Friedrich Merz to discuss the new partnerships that will drive breakthroughs on this AI cloud.

This "AI factory," located in Germany, will provide European industrial leaders with the computational power needed to revolutionize manufacturing processes, from design and engineering to simulation and robotics. The goal is to empower European industrial players to lead in simulation-first, AI-driven manufacturing. Deutsche Telekom's CEO, Timotheus Höttges, emphasized the urgency of seizing AI opportunities to revolutionize the industry and secure a leading position in global technology competition.

The first phase of the project will involve deploying 10,000 NVIDIA Blackwell GPUs across various high-performance systems, making it Germany's largest AI deployment. This infrastructure will also feature NVIDIA networking and AI software. NEURA Robotics, a German firm specializing in cognitive robotics, plans to utilize these resources to power its Neuraverse, a network where robots can learn from each other. This partnership between NVIDIA and Germany signifies a critical step towards achieving technological sovereignty in Europe and accelerating AI development across industries.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI
  • www.artificialintelligence-news.com: NVIDIA helps Germany lead Europe’s AI manufacturing race
  • www.tomshardware.com: Nvidia is building the 'world's first' industrial AI cloud—German facility to leverage 10,000 GPUs, DGX B200, and RTX Pro servers
  • AI News: NVIDIA helps Germany lead Europe’s AI manufacturing race
  • blogs.nvidia.com: NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI
  • MSSP feed for Latest: CrowdStrike and Nvidia Add LLM Security, Offer New Service for MSSPs
  • www.verdict.co.uk: Nvidia to develop industrial AI cloud for manufacturers in Europe
  • Verdict: Nvidia to develop industrial AI cloud for manufacturers in Europe
  • insideAI News: AMD Announces New GPUs, Development Platform, Rack Scale Architecture
  • insidehpc.com: AMD Announces New GPUs, Development Platform, Rack Scale Architecture
  • www.itpro.com: Nvidia, Deutsche Telekom team up for "sovereign" industrial AI cloud

@techinformed.com //
NVIDIA CEO Jensen Huang and UK Prime Minister Keir Starmer have recently joined forces at London Tech Week to solidify the UK's position as a leading force in AI. This collaborative effort aims to bolster the nation's digital infrastructure and promote AI development across various sectors. Starmer has committed £1 billion in investment to supercharge the AI sector, emphasizing the UK's ambition to be at the forefront of AI innovation rather than simply consuming the technology. Huang highlighted the UK's rich AI community, world-class universities, and significant AI capital investment as key factors positioning it for success.

The partnership includes the establishment of a dedicated NVIDIA AI Technology Center in the UK. This center will provide hands-on training in crucial areas such as AI, data science, and accelerated computing, with a specific focus on nurturing talent in foundation model building, embodied AI, materials science, and earth systems modeling. The initiative aims to tackle the existing AI skills gap, ensuring that the UK has a workforce capable of leveraging the new infrastructure and technologies being developed. Cloud providers are also stepping up with significant GPU deployments, with Nscale planning 10,000 NVIDIA Blackwell GPUs by late 2026 and Nebius revealing plans for an AI factory with 4,000 NVIDIA Blackwell GPUs.

Furthermore, the UK's financial sector is set to benefit from this AI push. The Financial Conduct Authority (FCA) is launching a 'supercharged sandbox' scheme, allowing banks and other City firms to experiment safely with NVIDIA AI products. This initiative aims to speed up innovation and boost UK growth by integrating AI into the financial sector. Potential applications include intercepting authorized push payment fraud and identifying stock market manipulation, showcasing the potential of AI to enhance customer service and data analytics within the financial industry.

@www.marktechpost.com //
References: thenewstack.io, MarkTechPost
Nvidia is reportedly developing a new AI chip, the B30, specifically tailored for the Chinese market to comply with U.S. export controls. This Blackwell-based alternative aims to offer multi-GPU scaling capabilities, potentially through NVLink or ConnectX-8 SuperNICs. While earlier reports suggested different names like RTX Pro 6000D or B40, B30 could be one variant within the BXX family. The design incorporates GB20X silicon, which also powers consumer-grade RTX 50 GPUs, but may exclude NVLink support seen in prior generations due to its absence in consumer-grade GPU dies.

Nvidia has also introduced Fast-dLLM, a training-free framework designed to enhance the inference speed of diffusion large language models (LLMs). Diffusion models, explored as an alternative to autoregressive models, promise faster decoding through simultaneous multi-token generation, enabled by bidirectional attention mechanisms. However, their practical application is limited by inefficient inference, largely due to the lack of key-value (KV) caching, which accelerates performance by reusing previously computed attention states. Fast-dLLM aims to address this by bringing KV caching and parallel decoding capabilities to diffusion LLMs, potentially surpassing autoregressive systems.
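The KV-caching idea Fast-dLLM brings to diffusion LLMs can be illustrated with a toy decoder. Without a cache, every step re-projects keys and values for the entire prefix; with a cache, each token is projected once and the stored pairs are reused. This is a minimal pure-Python sketch of the general technique, not NVIDIA's implementation; all names and the stand-in projection are illustrative:

```python
import math

def attention(q, keys, values):
    # Scaled dot-product attention for a single query vector.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q)) for k in keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(len(values[0]))]

def decode_no_cache(steps, project):
    # Recompute K and V for the whole prefix at every step (O(n^2) projections).
    tokens, outputs = [], []
    for t in range(steps):
        tokens.append([float(t), 1.0])
        keys = [project(x) for x in tokens]      # recomputed every step
        values = [project(x) for x in tokens]
        outputs.append(attention(tokens[-1], keys, values))
    return outputs

def decode_with_cache(steps, project):
    # Project each token once and append it to the cache (O(n) projections).
    kcache, vcache, outputs = [], [], []
    for t in range(steps):
        x = [float(t), 1.0]
        kcache.append(project(x))
        vcache.append(project(x))
        outputs.append(attention(x, kcache, vcache))
    return outputs

proj = lambda x: [2.0 * x[0], x[1] + 0.5]   # stand-in for learned K/V projections
assert decode_no_cache(4, proj) == decode_with_cache(4, proj)
```

The outputs are identical; the cached variant simply avoids redundant projection work, which is the performance win the framework is after.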

During his keynote speech at GTC 2025, Nvidia CEO Jensen Huang emphasized the accelerating pace of artificial intelligence development and the critical need for optimized AI infrastructure. He stated Nvidia would shift to the Blackwell architecture for future China-bound chips, discontinuing Hopper-based alternatives following the H20 ban. Huang's focus on AI infrastructure highlights the industry's recognition of the importance of robust and scalable systems to support the growing demands of AI applications.

Recommended read:
References :
  • thenewstack.io: This article discusses Jensen Huang's keynote speech at GTC 2025, where he emphasized the acceleration of artificial intelligence development and outlined five key takeaways regarding optimizing AI infrastructure.
  • MarkTechPost: This article discusses NVIDIA's Fast-dLLM, a training-free framework that brings KV caching and parallel decoding to diffusion LLMs. It aims to improve inference speed in diffusion models, potentially surpassing autoregressive systems.
  • www.tomshardware.com: This article discusses the development of Nvidia's B30 AI chip specifically for the Chinese market. It highlights the potential inclusion of NVLink for multi-GPU scaling and the creation of high-performance clusters.

@futurumgroup.com //
NVIDIA reported a significant jump in Q1 FY 2026 revenue, increasing by 69% year-over-year to $44.1 billion. This growth was fueled by strong demand in both the data center and gaming segments, driven by the anticipation and initial deployments of its Blackwell architecture. Despite export restrictions affecting its H20 chips in China, NVIDIA's performance reflects sustained global demand for AI computing. The company is actively scaling Blackwell deployments while navigating these export-related challenges, supported by traction in sovereign AI initiatives that helped offset the headwinds in China.

NVIDIA's CEO, Jensen Huang, highlighted the full-scale production of the Blackwell NVL72 AI supercomputer, describing it as a "thinking machine" for reasoning. He emphasized the incredibly strong global demand for NVIDIA's AI infrastructure, noting a tenfold surge in AI inference token generation within a year. Huang anticipates that as AI agents become more mainstream, the demand for AI computing will accelerate further. The company's data center revenue reached $39.1 billion, a 73% increase year-over-year, showcasing the impact of the Blackwell ramp-up and the adoption of accelerated AI inference.

Beyond infrastructure, NVIDIA is also expanding its reach through strategic partnerships. NVIDIA and MediaTek are collaborating to develop an ARM-based mobile APU specifically designed for gaming laptops. This collaboration aims to combine NVIDIA’s graphics expertise with MediaTek’s compute capabilities to create a product that could rival AMD’s Strix Halo. The planned APU will focus on power efficiency and thermal performance, which are crucial for modern gaming laptops with thinner chassis.

Recommended read:
References :
  • blogs.nvidia.com: Since a 7.8-magnitude earthquake hit Syria and Türkiye two years ago — leaving 55,000 people dead, 130,000 injured and millions displaced from their homes — students, researchers and developers have been harnessing the latest AI robotics technologies to increase disaster preparedness in the region.
  • futurumgroup.com: Olivier Blanchard and Daniel Newman at Futurum analyse NVIDIA’s Q1 FY 2026 results.
  • www.club386.com: NVIDIA and MediaTek join forces to build an ARM-based mobile APU combining their compute and graphics expertise.

Heng Chi@AI Accelerator Institute //
AI is revolutionizing data management and analytics across various platforms. Amazon Web Services (AWS) is facilitating the development of high-performance data pipelines for AI and Natural Language Processing (NLP) applications, utilizing services like Amazon S3, AWS Lambda, AWS Glue, and Amazon SageMaker. These pipelines are essential for ingesting and processing data and serving outputs for training, inference, and decision-making at scale, leveraging AWS's scalability, flexibility, and cost-efficiency. AWS's auto-scaling options, seamless integration with ML and NLP workflows, and pay-as-you-go pricing model make it a preferred choice for businesses of all sizes.
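The ingest → process → serve pattern such a pipeline follows can be sketched independently of any AWS SDK. In the architecture described, S3 would hold the landing zone, Glue or Lambda the transform stage, and SageMaker the consumer; everything below is a pure-Python stand-in for that staged flow, not AWS API code:

```python
def ingest(raw_records):
    # Stage 1 (S3-like landing zone): accept raw events as-is.
    return list(raw_records)

def process(records):
    # Stage 2 (Glue/Lambda-like transform): clean and normalize text for NLP.
    return [r.strip().lower() for r in records if r.strip()]

def serve(records):
    # Stage 3 (SageMaker-like consumer): hand prepared examples to training/inference.
    return {"num_examples": len(records), "examples": records}

batch = ingest(["  Hello World ", "", "AI Pipelines\n"])
out = serve(process(batch))
assert out == {"num_examples": 2, "examples": ["hello world", "ai pipelines"]}
```

Keeping the stages as separate functions mirrors why the managed services compose well: each stage can scale (or be swapped) independently of the others.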

Microsoft is simplifying data visualization with its new AI-powered tool, Data Formulator. This open-source application, developed by Microsoft Research, uses Large Language Models (LLMs) to transform data into interesting charts and graphs, even for users without extensive data manipulation and visualization knowledge. Data Formulator differentiates itself with its intuitive user interface and hybrid interactions, bridging the gap between visualization ideas and their actual creation. By supplementing natural language inputs with drag-and-drop interactions, it allows users to express visualization intent, with the AI handling the complex transformations in the background.

Yandex has released Yambda, the world's largest publicly available event dataset, to accelerate recommender systems research and development. This dataset contains nearly 5 billion anonymized user interaction events from Yandex Music, offering a valuable resource for bridging the gap between academic research and industry-scale applications. Yambda addresses the scarcity of large, openly accessible datasets in the field of recommender systems, which has traditionally lagged behind other AI domains due to the sensitive nature and commercial value of behavioral data. Additionally, Dremio is collaborating with Confluent’s TableFlow to provide real-time analytics on Apache Iceberg data, enabling users to stream data from Kafka into queryable tables without manual pipelines, accelerating insights and reducing ETL complexity.

Recommended read:
References :
  • insideAI News: NVIDIA and AMD Devising Export Rules-Compliant Chips for China AI Market
  • futurumgroup.com: Can Dell and NVIDIA’s AI Factory 2.0 Solve Enterprise-Scale AI Infrastructure Gaps?
  • TechHQ: Dell to build Nvidia Vera Rubin supercomputer for US Energy Department
  • techhq.com: Dell to build Nvidia Vera Rubin supercomputer for US Energy Department
  • futurumgroup.com: Can Dell Challenge Public Cloud AI with Its Expanded AI Factory?
  • insidehpc.com: DOE Announces “Doudna” Dell-NVIDIA Supercomputer at NERSC
  • techxplore.com: US supercomputer named after Nobel laureate Jennifer Doudna to power AI and scientific research
  • AI Accelerator Institute: Building efficient data pipelines for AI and NLP applications in AWS
  • www.dremio.com: Using Dremio with Confluent’s TableFlow for Real-Time Apache Iceberg Analytics
  • www.marktechpost.com: Yandex Releases Yambda: The World’s Largest Event Dataset to Accelerate Recommender Systems

@insidehpc.com //
MiTAC Computing Technology and AMD are strengthening their partnership to deliver cutting-edge solutions for AI, HPC, cloud-native, and enterprise applications. MiTAC will showcase this collaboration at COMPUTEX 2025, highlighting their shared vision for scalable and energy-efficient technologies. This partnership, which began in 2002, leverages AMD EPYC processors and Instinct GPUs to meet the evolving demands of modern data centers. Rick Hwang, President of MiTAC Computing Technology, emphasized their excitement in advancing server solutions powered by AMD's latest processors and GPUs, stating that it's key to unlocking new capabilities for their global customer base in AI and HPC infrastructure.

Specifically, MiTAC and AMD are developing next-generation server platforms. One notable product is an 8U server equipped with dual AMD EPYC 9005 Series processors and support for up to 8 AMD Instinct MI325X GPUs, offering exceptional compute density and up to 6TB of DDR5-6400 memory, ideal for large-scale AI model training and scientific applications. They are also offering a 2U dual-socket GPU server that supports up to four dual-slot GPUs, 24 DDR5-6400 RDIMM slots, and tool-less NVMe storage carriers, providing high-speed throughput and flexibility for deep learning and HPC environments.

Meanwhile, Nvidia is preparing to compete with Huawei in the Chinese AI chip market by releasing a budget-friendly AI chip. This strategy is driven by the need to maintain relevance in the face of growing domestic competition and navigate export restrictions. The new chip, priced between $6,500 and $8,000, represents a significant cost reduction compared to the previously banned H20 model. This reduction involves trade-offs, such as using Nvidia's RTX Pro 6000D foundation with standard GDDR7 memory and foregoing Taiwan Semiconductor's advanced CoWoS packaging technology.

Recommended read:
References :
  • insideAI News: MiTAC Computing Technology Corporation, a server platform designer and manufacturer, will showcase its strategic collaboration with AMD at COMPUTEX 2025 (Booth M1110).
  • www.artificialintelligence-news.com: Nvidia is preparing to go head-to-head with Huawei to maintain its relevance in the booming AI chip market of China.
  • insidehpc.com: MiTAC Computing Technology Corporation, a server platform designer and manufacturer, will showcase its strategic collaboration with AMD at COMPUTEX 2025 (Booth M1110).

Stephen Warwick@tomshardware.com //
OpenAI is significantly expanding its AI infrastructure, with the launch of Stargate UAE marking the first international deployment of its Stargate AI platform. This expansion begins with a 1GW cluster in Abu Dhabi and is the first partnership under the OpenAI for Countries initiative, aimed at helping governments build sovereign AI capabilities. OpenAI says that coordination with the U.S. government was vital in making the expansion possible, highlighting the importance of democratic values, open markets, and trusted partnerships in this endeavor. The partnership includes reciprocal UAE investment into the U.S. Stargate infrastructure.

This ambitious project also promises new opportunities for developers. The OpenAI Responses API, which OpenAI bills as the first truly agentic API, allows developers to combine MCP servers, code interpreter, reasoning, web search, and RAG within a single API call. This unified approach is set to enable a new generation of AI agents, streamlining the development process and expanding the capabilities of AI applications.
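A single request combining several of those tool types might look roughly like the following. This is a hedged sketch of the request shape only: the tool type strings, field names, and model name are assumptions for illustration, not confirmed details from the article:

```python
# Hypothetical request body for one Responses API call that mixes several
# tool types; every field name below is illustrative, not authoritative.
request = {
    "model": "example-model",            # placeholder model name
    "input": "Summarize today's AI infrastructure news.",
    "tools": [
        {"type": "web_search"},          # built-in web search (assumed name)
        {"type": "code_interpreter"},    # run code mid-reasoning (assumed name)
        {"type": "file_search"},         # RAG over uploaded documents (assumed name)
        {"type": "mcp", "server_url": "https://example.com/mcp"},  # hypothetical MCP server
    ],
}
assert len(request["tools"]) == 4
```

The point of the shape is the article's claim: one call carries the whole toolbox, rather than the developer orchestrating separate endpoints per capability.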

Recent details have emerged about Jony Ive and Sam Altman's collaboration on an AI device, codenamed "io," which OpenAI has acquired for $6.5 billion. The device is envisioned as a "central facet of using OpenAI," with Altman suggesting that subscribers to ChatGPT could receive new computers directly from the company. The aim is to create an AI "companion" that is entirely aware of a user’s surroundings and life, potentially evolving into a family of devices.

@blogs.nvidia.com //
NVIDIA is significantly expanding its presence in the AI ecosystem through strategic partnerships and the introduction of innovative technologies. At Computex 2025, CEO Jensen Huang unveiled NVLink Fusion, a groundbreaking program that opens NVIDIA's high-speed NVLink interconnect technology to non-NVIDIA CPUs and accelerators. This move is poised to solidify NVIDIA's role as a central component in AI infrastructure, even in systems utilizing silicon from other vendors, including MediaTek, Marvell, Fujitsu, and Qualcomm. This initiative represents a major shift from NVIDIA's previously exclusive use of NVLink and is intended to enable the creation of semi-custom AI infrastructures tailored to specific needs.

This strategy ensures that while customers may incorporate rival chips, the underlying AI ecosystem remains firmly rooted in NVIDIA's technologies, including its GPUs, interconnects, and software stack. NVIDIA is also teaming up with Foxconn to construct an AI supercomputer in Taiwan, further demonstrating its commitment to advancing AI capabilities in the region. The collaboration will see Foxconn subsidiary Big Innovation Company deliver the infrastructure for 10,000 NVIDIA Blackwell GPUs. This substantial investment aims to empower Taiwanese organizations by providing the AI cloud computing resources needed to facilitate the adoption of AI technologies across both private and public sectors.

In addition to hardware advancements, NVIDIA is also investing in quantum computing research. Taiwan's National Center for High-Performance Computing (NCHC) is deploying a new NVIDIA-powered AI supercomputer designed to support climate science, quantum research, and the development of large language models. Built by ASUS, this supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, along with other advanced NVIDIA technologies. This initiative aligns with NVIDIA's broader strategy to drive breakthroughs in sovereign AI, quantum computing, and advanced scientific computation, positioning Taiwan as a key hub for AI development and technological autonomy.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • Maginative: NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance
  • www.tomshardware.com: Nvidia teams up with Foxconn to build an AI supercomputer in Taiwan
  • NVIDIA Newsroom: Quantum computing promises to shorten the path to solving some of the world’s biggest computational challenges, from scaling in-silico drug design to optimizing otherwise impossibly complex, large-scale logistics problems.
  • blogs.nvidia.com: NVIDIA Grows Quantum Computing Ecosystem With Taiwan Manufacturers and Supercomputing
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research
  • AI News | VentureBeat: Nvidia powers world’s largest quantum research supercomputer
  • the-decoder.com: NVIDIA is opening up its chip ecosystem
  • techvro.com: NVLink Fusion: Nvidia To Sell Hybrid Systems Using AI Chips
  • AIwire: Nvidia’s Global Expansion: AI Factories, NVLink Fusion, AI Supercomputers, and More

Joe DeLaere@NVIDIA Technical Blog //
NVIDIA has announced the opening of its NVLink technology to rival companies, a move revealed by CEO Jensen Huang at Computex 2025. The new program, called NVLink Fusion, allows companies making custom CPUs and accelerators to license the NVLink port designs. This opens the door for non-NVIDIA chips to integrate with NVIDIA's AI infrastructure, fostering a more flexible AI hardware ecosystem. MediaTek, Marvell, Fujitsu, and Qualcomm are among the early partners signing on to integrate their chips with NVIDIA's GPUs via NVLink Fusion.

NVIDIA's decision to extend NVLink support is a strategic play to remain central to the AI landscape. By enabling companies to combine their custom silicon with NVIDIA's technology, NVIDIA ensures it remains essential to their AI strategies and potentially captures additional revenue streams. NVLink Fusion allows for semi-custom AI infrastructure where other processors are involved, but the underlying connective tissue belongs to NVIDIA. The high-speed interconnect offers significantly higher bandwidth compared to PCIe 5.0, offering advantages for CPU-to-GPU communications.

This expansion doesn't mean NVIDIA is entirely opening the interconnect standard. Connecting an Intel CPU to an AMD GPU directly using NVLink Fusion remains impossible. NVIDIA is essentially allowing semi-custom accelerator designs to take advantage of the high-speed interconnect even if the accelerator isn't designed by NVIDIA. As part of the announcement, NVIDIA also unveiled its next-generation Grace Blackwell systems and a new AI platform called DGX Cloud Lepton, further solidifying its position in the AI compute market.

Joe DeLaere@NVIDIA Technical Blog //
NVIDIA has unveiled NVLink Fusion, a technology that expands the capabilities of its high-speed NVLink interconnect to custom CPUs and ASICs. This move allows customers to integrate non-NVIDIA CPUs or accelerators with NVIDIA's GPUs within their rack-scale setups, fostering the creation of heterogeneous computing environments tailored for diverse AI workloads. This technology opens up the possibility of designing semi-custom AI infrastructure with NVIDIA's NVLink ecosystem, allowing hyperscalers to leverage the innovations in NVLink, NVIDIA NVLink-C2C, NVIDIA Grace CPU, NVIDIA GPUs, NVIDIA Co-Packaged Optics networking, rack scale architecture, and NVIDIA Mission Control software.

NVLink Fusion enables users to deliver top performance scaling with semi-custom ASICs or CPUs. As hyperscalers are already deploying full NVIDIA rack solutions, this expansion caters to the increasing demand for specialized AI factories, where diverse accelerators work together at rack scale with maximal bandwidth and minimal latency to support the largest number of users in the most power-efficient way. The advantage of using NVLink for CPU-to-GPU communications is that it offers 14x the bandwidth of PCIe 5.0 (128 GB/s). The technology will be offered in two configurations: one connecting custom CPUs to NVIDIA GPUs, and one pairing NVIDIA CPUs with custom accelerators.
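The two quoted figures imply a concrete throughput number. Taking the PCIe 5.0 baseline of 128 GB/s and the stated 14x factor:

```python
pcie5_gbps = 128              # GB/s, the PCIe 5.0 figure quoted above
nvlink_factor = 14            # NVLink's stated advantage over PCIe 5.0
nvlink_gbps = pcie5_gbps * nvlink_factor
print(nvlink_gbps)            # 1792 GB/s, i.e. roughly 1.8 TB/s
assert nvlink_gbps == 1792
```

That works out to 1,792 GB/s, consistent with the roughly 1.8 TB/s NVIDIA cites for its latest NVLink generation.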

NVIDIA CEO Jensen Huang emphasized that AI is becoming a fundamental infrastructure, akin to the internet and electricity. He envisions an AI infrastructure industry worth trillions of dollars, powered by AI factories that produce valuable tokens. NVIDIA's approach involves expanding its ecosystem through partnerships and platforms like CUDA-X, which is used across a range of applications. NVLink Fusion is a crucial part of this vision, enabling the construction of semi-custom AI systems and solidifying NVIDIA's role at the center of AI development.

Recommended read:
References :
  • The Register - Software: Nvidia opens up speedy NVLink interconnect to custom CPUs, ASICs
  • www.techmeme.com: Nvidia unveils NVLink Fusion, letting customers use its NVLink to pair non-Nvidia CPUs or accelerators with Nvidia's products in their own rack-scale setups (Bloomberg)
  • NVIDIA Technical Blog: Integrating Semi-Custom Compute into Rack-Scale Architecture with NVIDIA NVLink Fusion
  • Tom's Hardware: Nvidia announces NVLink Fusion to allow custom CPUs and AI Accelerators to work with its products Nvidia's NVLink Fusion program allows customers to use the company’s key NVLink tech for their own custom rack-scale designs with non-Nvidia CPUs or accelerators in tandem with Nvidia’s products.
  • Maginative: NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • AI News | VentureBeat: Foxconn builds AI factory in partnership with Taiwan and Nvidia
  • www.tomshardware.com: Nvidia teams up with Foxconn to build an AI supercomputer in Taiwan
  • The Next Platform: There are many reasons why Nvidia is the hardware juggernaut of the AI revolution, and one of them, without question, is the NVLink memory sharing port that started out on its “Pascal” P100 GPU accelerators way back in 2016.
  • www.nextplatform.com: Nvidia Licenses NVLink Memory Ports To CPU And Accelerator Makers
  • The Register - Software: Nvidia sets up shop in Taiwan with AI supers and a factory full of ambition
  • techvro.com: NVLink Fusion: Nvidia To Sell Hybrid Systems Using AI Chips
  • www.networkworld.com: Nvidia opens NVLink to competitive processors
  • AIwire: Nvidia’s Global Expansion: AI Factories, NVLink Fusion, AI Supercomputers, and More

@blogs.nvidia.com //
NVIDIA's CEO, Jensen Huang, has presented a bold vision for the future of technology, forecasting that the artificial intelligence infrastructure industry will soon be worth trillions of dollars. Huang emphasized AI's transformative potential across all sectors globally during his Computex 2025 keynote in Taipei. He envisions AI becoming as essential as electricity and the internet, necessitating "AI factories" to produce valuable tokens by applying energy. These factories are not simply data centers but sophisticated environments that will drive innovation and growth.

NVIDIA is actively working to solidify its position as a leader in this burgeoning AI landscape. A key strategy involves expanding its research and development footprint, with plans to establish a new R&D center in Shanghai. This initiative, proposed during a meeting with Shanghai Mayor Gong Zheng, includes leasing additional office space to accommodate current staff and future expansion. The Shanghai center will focus on tailoring AI solutions for Chinese clients and contributing to global R&D efforts in areas such as chip design verification, product optimization, and autonomous driving technologies, with the Shanghai government expressing initial support for the project.

Furthermore, NVIDIA is collaborating with Foxconn and the Taiwan government to construct an AI factory supercomputer, equipped with 10,000 NVIDIA Blackwell GPUs. This AI factory will provide state-of-the-art infrastructure to researchers, startups, and industries, significantly expanding AI computing availability and fueling innovation within Taiwan's technology ecosystem. Huang highlighted the importance of Taiwan in the global technology ecosystem, noting that NVIDIA is helping build AI not only for the world but also for Taiwan, emphasizing the strategic partnerships and investments crucial for realizing the AI-powered future.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA CEO Jensen Huang took the stage at a packed Taipei Music Center Monday to kick off COMPUTEX 2025, captivating the audience of more than 4,000 with a vision for a technology revolution that will sweep every country
  • TechNode: NVIDIA reportedly plans to establish research center in Shanghai
  • SiliconANGLE: At Computex, Nvidia debuts AI GPU compute marketplace, NVLink Fusion and the future of humanoid AI
  • AI News | VentureBeat: Foxconn builds AI factory in partnership with Taiwan and Nvidia
  • The Register - Software: Nvidia opens up speedy NVLink interconnect to custom CPUs, ASICs

Pomi Lee@NVIDIA Technical Blog //
NVIDIA CEO Jensen Huang unveiled an ambitious vision for the future of AI at COMPUTEX 2025, declaring AI as the next major technology poised to transform every industry and country. He emphasized the need for "AI factories," describing them as specialized data centers that produce valuable "tokens" by applying energy. Huang highlighted NVIDIA's CUDA-X platform and its partnerships, showcasing how these are driving advancements in areas such as 6G development and quantum supercomputing. He stressed the importance of Taiwan in the global technology ecosystem.

NVIDIA is expanding its AI ecosystem by opening up its high-speed NVLink interconnect technology to custom CPUs and ASICs via NVLink Fusion. This move allows for greater integration of custom compute solutions into rack-scale architectures. The NVLink fabric, known for its high bandwidth capabilities, facilitates seamless communication between GPUs and CPUs, offering a significant advantage over PCIe 5.0. Nvidia is allowing semi-custom accelerator designs to take advantage of the high-speed interconnect - even for non-Nvidia-designed accelerators.

NVIDIA and Foxconn are partnering with the Taiwan government to construct an AI factory supercomputer, equipped with 10,000 NVIDIA Blackwell GPUs, to support local researchers and enterprises. This supercomputer, facilitated by Foxconn's Big Innovation Company, will provide AI cloud computing resources to the Taiwan technology ecosystem. This collaboration aims to accelerate AI development and adoption across various sectors, reinforcing Taiwan's position as a key player in the global AI landscape.

@Dataconomy //
Databricks has announced its acquisition of Neon, an open-source database startup specializing in serverless Postgres, in a deal reportedly valued at $1 billion. This strategic move is aimed at enhancing Databricks' AI infrastructure, specifically addressing the database bottleneck that often hampers the performance of AI agents. Neon's technology allows for the rapid creation and deployment of database instances, spinning up new databases in milliseconds, which is critical for the speed and scalability required by AI-driven applications. The integration of Neon's serverless Postgres architecture will enable Databricks to provide a more streamlined and efficient environment for building and running AI agents.

Databricks plans to incorporate Neon's scalable Postgres offering into its existing big data platform, eliminating the need to scale separate server and storage components in tandem when responding to AI workload spikes. This resolves a common issue in modern cloud architectures where users are forced to over-provision either compute or storage to meet the demands of the other. With Neon's serverless architecture, Databricks aims to provide instant provisioning, separation of compute and storage, and API-first management, enabling a more flexible and cost-effective solution for managing AI workloads. Neon reports that 80% of its database instances are provisioned by software rather than by humans.
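The "provisioned by software" pattern described here boils down to an API call in the agent's request path: ask the control plane for a database, get a connection string back in under a second, start using it. A hypothetical sketch of building that call; the endpoint path, payload fields, and URLs are illustrative stand-ins, not Neon's real API:

```python
import json

def build_provision_request(api_base, token, project_name):
    """Assemble the HTTP pieces of a database-provisioning call.

    Endpoint path and payload shape are hypothetical stand-ins for a
    serverless-Postgres provider's control-plane API; the point is that
    an AI agent can create a database with one POST instead of a ticket.
    """
    url = f"{api_base}/projects"                       # illustrative endpoint
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"project": {"name": project_name}})
    return url, headers, body

url, headers, body = build_provision_request(
    "https://api.example.com/v1", "secret-token", "agent-scratch-db")
assert url == "https://api.example.com/v1/projects"
assert json.loads(body)["project"]["name"] == "agent-scratch-db"
```

In the flow the article describes, the response would carry a ready-to-use connection URI, which the agent passes straight to its Postgres client.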

The acquisition of Neon is expected to give Databricks a competitive edge, particularly against competitors like Snowflake. While Snowflake currently lacks similar AI-driven database provisioning capabilities, Databricks' integration of Neon's technology positions it as a leader in the next generation of AI application building. The combination of Databricks' existing data intelligence platform with Neon's serverless Postgres database will allow for the programmatic provisioning of databases in response to the needs of AI agents, overcoming the limitations of traditional, manually provisioned databases.

Recommended read:
References :
  • Databricks: Today, we are excited to announce that we have agreed to acquire Neon, a developer-first, serverless Postgres company.
  • www.infoworld.com: Databricks to acquire open-source database startup Neon to build the next wave of AI agents
  • www.bigdatawire.com: Databricks Nabs Neon to Solve AI Database Bottleneck
  • Dataconomy: Databricks has agreed to acquire Neon, an open-source database startup, for approximately $1 billion.
  • BigDATAwire: Databricks today announced its intent to buy Neon, a database startup founded by Nikita Shamgunov that develops a serverless and infinitely scalable version of the open source Postgres database.
  • Techzine Global: Neon’s technology can spin up a Postgres instance in less than 500 milliseconds, which is crucial for AI agents’ fast working methods.
  • AI News | VentureBeat: The $1 Billion database bet: What Databricks’ Neon acquisition means for your AI strategy
  • analyticsindiamag.com: Databricks to Acquire Database Startup Neon for $1 Billion

Evan Ackerman@IEEE Spectrum //
Amazon has unveiled Vulcan, an AI-powered robot with a sense of touch, designed for use in its fulfillment centers. This groundbreaking robot represents a "fundamental leap forward in robotics," according to Amazon's director of applied science, Aaron Parness. Vulcan is equipped with sensors that allow it to "feel" the objects it is handling, enabling capabilities previously unattainable for Amazon robots. This sense of touch allows Vulcan to manipulate objects with greater dexterity and avoid damaging them or other items nearby.

Vulcan operates using "end of arm tooling" that includes force feedback sensors. These sensors tell the robot how hard it is pushing or gripping an object, so it can stay below the threshold at which damage would occur. Amazon says Vulcan can easily manipulate objects to make room for whatever it is stowing, because it knows when it makes contact and how much force it is applying. In this way, Vulcan narrows the dexterity gap between human workers and robots.

The introduction of Vulcan addresses a significant challenge in Amazon's fulfillment centers, where the company handles a vast number of stock-keeping units (SKUs). Robots already play a crucial role in completing 75% of Amazon orders, but Vulcan fills a capability gap left by previous generations of robots. According to Amazon, one business per second is adopting AI, and Vulcan demonstrates the potential for AI and robotics to transform warehouse operations. Amazon did not specify how many jobs the Vulcan model may create or displace.

Recommended read:
References :
  • betanews.com: Amazon unveils Vulcan, a package sorting, AI-powered robot with a sense of touch
  • IEEE Spectrum: Amazon’s Vulcan Robots Now Stow Items Faster Than Humans
  • www.linkedin.com: Amazon’s Vulcan Robots Are Mastering Picking Packages
  • BetaNews: Amazon has unveiled Vulcan, a package sorting, AI-powered robot with a sense of touch
  • techstrong.ai: Amazon’s Vulcan Has the ‘Touch’ to Handle Most Packages
  • eWEEK: Amazon’s Vulcan Robot with Sense of Touch: ‘Fundamental Leap Forward in Robotics’
  • IEEE Spectrum: Amazon’s Vulcan Robots Are Mastering Picking Packages
  • : This Amazon robot has a sense of feel
  • The Register: Amazon touts Vulcan – its first robot with a sense of 'touch'

@blogs.microsoft.com //
Microsoft is aggressively pursuing an AI-first strategy, aiming to transform business operations for its customers. A key component of this initiative is the development and deployment of agentic AI solutions. According to Microsoft CEO Satya Nadella, a significant portion of Microsoft's code, some 20% to 30%, is now generated by AI, showing how deeply the company has integrated AI into its core development processes. This AI-driven approach promises to accelerate innovation and enable businesses to achieve more through autonomous capabilities.

Microsoft has officially launched Recall AI for Windows 11, an AI-powered search feature that captures periodic screenshots of user activity. This feature is available on Copilot+ PCs through the April 2025 non-security preview update. Recall aims to provide AI-driven memory search. Addressing earlier privacy concerns, Microsoft has ensured that Recall is disabled by default, requires opt-in, and encrypts all data locally. Access to this data requires Windows Hello authentication, and users can delete snapshots or block specific apps and websites from being recorded.

To further solidify its commitment to AI, Microsoft is expanding its cloud and AI infrastructure in Europe as part of five digital commitments. The company is also focused on helping organizations modernize their technology stacks to leverage AI effectively. According to a 2024 Forrester study, continuous modernization, including the incorporation of generative AI, is critical for driving competitive advantage. By setting a strong cloud foundation and embracing continuous migration and modernization, businesses can unlock the full potential of AI and remain competitive in a rapidly evolving technological landscape.

Recommended read:
References :
  • blogs.microsoft.com: How agentic AI is driving AI-first business transformation for customers to achieve more
  • www.microsoft.com: Accelerate AI innovation and business transformation: Scaling AI transformation with strategic cloud partnership

Hassam@tomshardware.com //
Microsoft CEO Satya Nadella has revealed that Artificial Intelligence is playing an increasingly significant role in the company's software development. Speaking at Meta's LlamaCon conference, Nadella stated that AI now writes between 20% and 30% of the code in Microsoft's repositories and projects. This underscores the growing influence of AI in revolutionizing software creation, especially for repetitive and data-heavy tasks, leading to efficiency gains. Nadella mentioned that AI is showing more promise in generating Python code compared to C++, due to Python's simpler syntax and better memory management.

Microsoft's embrace of AI in coding aligns with similar trends at other tech giants such as Google, where AI is reported to generate over 30% of new code. The use of AI for code generation also raises concerns about job displacement for new programmers. Despite these anxieties, industry experts stress that software developers should adapt to and leverage AI tools rather than ignore them. Nadella emphasized that while AI can produce code, senior developer oversight remains critical to ensure the stability and reliability of the production environment.

Beyond its internal use, Microsoft is also making strategic moves to expand its cloud and AI infrastructure in Europe. This commitment to the European market includes pledges to fight for its European customers in U.S. courts if necessary, highlighting the importance of trans-Atlantic ties and digital resilience. Microsoft is dedicated to ensuring open access to its AI and cloud platform across Europe, and will be enhancing its AI Access Principles in the coming months. Furthermore, Microsoft is releasing the 2025 Work Trend Index, designed to help leaders and employees navigate the shifting landscape brought about by AI.

Recommended read:
References :
  • news.microsoft.com: Microsoft Releases 2025 Work Trend Index: The Frontier Firm Emerges in Singapore
  • The Microsoft Cloud Blog: Accelerate AI innovation and business transformation: Scaling AI transformation with strategic cloud partnership
  • www.tomshardware.com: Satya Nadella revealed that AI writes as much as 20% to 30% of the code in Microsoft's repositories and projects.
  • TechCrunch: Microsoft CEO says up to 30% of the company’s code was written by AI.
  • Entrepreneur: AI is already writing about 30% of code at Microsoft, Google, and Meta.
  • PCWorld: Microsoft's CEO claims 30% of its new code is written by AI.
  • blogs.microsoft.com: Microsoft is announcing five digital commitments to Europe, starting with an expansion of our cloud and AI infrastructure in Europe.
  • CIO Dive - Latest News: Microsoft expands European footprint amid global trade tensions
  • PCMag Middle East ai: Microsoft Says Up to 30% of Its Code Now Written by AI, Meta Aims For 50% in 2026
  • SiliconANGLE: Satya Nadella says AI is now writing 30% of Microsoft’s code but real change is still many years away
  • The Register - Software: 30 percent of some Microsoft code now written by AI - especially the new stuff
  • MarkTechPost: Microsoft AI Released Phi-4-Reasoning: A 14B Parameter Open-Weight Reasoning Model that Achieves Strong Performance on Complex Reasoning Tasks
  • Analytics Vidhya: Microsoft Launches Two Powerful Phi-4 Reasoning Models
  • www.windowscentral.com: Satya Nadella says AI already writes 30% of Microsoft's code — but Bill Gates claims software development is too complex to be fully automated
  • The Next Platform: AI Steady, Cloud Accelerating Gives Microsoft A Big Datacenter Boost

NVIDIA Newsroom@NVIDIA Blog //
Nvidia has announced a major initiative to manufacture its AI supercomputers entirely within the United States. The company aims to produce up to $500 billion worth of AI infrastructure in the U.S. over the next four years, partnering with major manufacturing firms like Taiwan Semiconductor Manufacturing Co (TSMC), Foxconn, Wistron, Amkor, and SPIL. This move marks the first time Nvidia will carry out chip packaging and supercomputer assembly entirely within the United States. The company sees this effort as a way to meet the increasing demand for AI chips, strengthen its supply chain, and boost resilience.

Nvidia is commissioning over a million square feet of manufacturing space to build and test Blackwell chips in Arizona and assemble AI supercomputers in Texas. Production of Blackwell chips has already begun at TSMC’s chip plants in Phoenix, Arizona. The company is also constructing supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas, with mass production expected to ramp up within the next 12-15 months. These facilities are designed to support the deployment of "gigawatt AI factories", data centers specifically built for processing artificial intelligence.

CEO Jensen Huang emphasized the significance of bringing AI infrastructure manufacturing to the U.S., stating that "The engines of the world’s AI infrastructure are being built in the United States for the first time." Nvidia also plans to deploy its own technologies to optimize the design and operation of the new facilities, utilizing platforms like Omniverse to simulate factory operations and Isaac GR00T to develop automated robotics systems. The company said domestic production could help drive long-term economic growth and job creation.

Recommended read:
References :
  • Reid Burke: NVIDIA is working with its manufacturing partners to design and build factories that, for the first time, will produce NVIDIA AI supercomputers entirely in the U.S.
  • The Register - Software: Nvidia wants to build and sell up to half a trillion US dollars of American-made AI supercomputer equipment over the next four years, with the help of Taiwan Semiconductor Manufacturing Co, aka TSMC, and its partners.
  • TechInformed: Nvidia has announced plans to manufacture AI supercomputers in the United States for the first time.
  • AIwire: Nvidia Begins US Production of Blackwell Chips, AI Systems to Follow
  • www.tomshardware.com: Nvidia aims to build $500 billion worth of AI servers in the USA by 2029
  • www.techrepublic.com: NVIDIA’s Vision For AI Factories – ‘Major Trend in the Data Center World’
  • www.tomshardware.com: Made in the USA: Inside Nvidia's $500 billion server gambit
  • www.theguardian.com: Jensen Huang causes stir on social media and is reported to have met founder of AI company DeepSeek. The chief executive of the American chip maker Nvidia visited Beijing on Thursday, days after the US restricted sales of the only AI chip it was still allowed to sell to China.

NVIDIA Newsroom@NVIDIA Blog //
Nvidia has announced plans to manufacture its AI supercomputers entirely within the United States, marking the first time the company will conduct chip packaging and supercomputer assembly domestically. The move, driven by increasing global demand for AI chips and the potential impact of tariffs, aims to establish a resilient supply chain and bolster the American AI ecosystem. Nvidia is partnering with major manufacturing firms including TSMC, Foxconn, and Wistron to construct and operate these facilities.

Mass production of Blackwell chips has already commenced at TSMC's Phoenix, Arizona plant. Nvidia is constructing supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas. These facilities are expected to ramp up production within the next 12-15 months. More than a million square feet of manufacturing space has been commissioned to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas.

The company anticipates producing up to $500 billion worth of AI infrastructure in the U.S. over the next four years through these partnerships. This includes designing and building "gigawatt AI factories" to produce NVIDIA AI supercomputers completely within the US. CEO Jensen Huang stated that American manufacturing will help meet the growing demand for AI chips and supercomputers, strengthen the supply chain and improve resiliency. The White House has lauded Nvidia's decision as "the Trump Effect in action".

Recommended read:
References :
  • Reid Burke: NVIDIA to Manufacture American-Made AI Supercomputers in US for First Time
  • insideAI News: NVIDIA today said it is working with manufacturing partners to design and build factories that will produce NVIDIA AI supercomputers — i.e., "AI factories" — entirely in the United States. NVIDIA said that within four years, it plans to produce up to half a trillion dollars worth of AI infrastructure in the U.S. through partnerships…
  • AIwire: Nvidia Begins US Production of Blackwell Chips, AI Systems to Follow
  • www.theguardian.com: Nvidia says it will build up to $500bn of US AI infrastructure as chip tariff looms
  • www.tomshardware.com: Nvidia aims to build $500 billion worth of AI servers in the USA by 2029
  • Analytics India Magazine: NVIDIA to Manufacture First American-Made AI Supercomputers
  • THE DECODER: Nvidia shifts AI production to US amid changing trade landscape
  • AI News | VentureBeat: Nvidia pledges to build its own factories in the U.S. for the first time to make AI supercomputers
  • www.cnbc.com: Nvidia to mass produce AI supercomputers in Texas as part of $500 billion U.S. push
  • NVIDIA Newsroom: Everywhere, All at Once: NVIDIA Drives the Next Phase of AI Growth
  • blogs.nvidia.com: NVIDIA to Manufacture American-Made AI Supercomputers in US for First Time
  • CIO Dive - Latest News: Nvidia pledges to invest up to $500B in US chip manufacturing