@techcrunch.com
//
OpenAI is making a bold move into hardware development with the acquisition of Jony Ive's startup, io, in a deal valued at approximately $6.5 billion in stock. The acquisition signals OpenAI's ambition to unify software, hardware, and data into a seamlessly integrated AI ecosystem. The company aims to move beyond its current role as a backend software provider powering AI tools for other platforms and instead create a comprehensive, end-to-end AI experience. With this acquisition, around 55 engineers and designers, many formerly of Apple, will join OpenAI, while Ive's design firm, LoveFrom, will remain independent and oversee the design of OpenAI's initial hardware products.
The acquisition gives OpenAI control over the whole interaction flow: rather than embedding itself into someone else's interface, it can design the experience end to end. The aim is a new kind of device, built from the ground up around AI and reimagined for a world where AI is the starting point, not an add-on. OpenAI sees clear benefits in owning the hardware, including defining how AI behaves in context, gathering crucial user data, and exploring new avenues for monetization. By owning the device, OpenAI gains first-party access to behavioral signals such as tone, timing, follow-ups, and habits across sessions, data that is crucial for training models that are more adaptive, nuanced, and personalized. According to OpenAI CEO Sam Altman, the goal is to create AI devices so compelling that they become as commonplace as laptops or smartphones. Recommended read:
References :
Carl Franzen@AI News | VentureBeat
//
OpenAI is making significant strides in both its software and hardware development. The company's Responses API has received a rapid update, incorporating support for the Model Context Protocol (MCP), native image generation through GPT-4o, and additional features aimed at enterprise users. These enhancements are designed to facilitate the development of intelligent, action-oriented applications. The updated API, initially launched in March 2025, provides developers with the tools to build autonomous workflows, integrating functionalities such as web and file search, and computer use, streamlining the creation of AI agents.
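The agent-building flow described above can be sketched in code. This is a minimal illustration assuming the published Responses API request shape for remote MCP tool servers; the model name, server label, and URL below are placeholders rather than real endpoints, and no network call is made.

```python
# Illustrative sketch of a Responses API request that wires in an MCP tool
# server. The tool shape follows OpenAI's documented "mcp" tool type; the
# server URL, label, and prompt are hypothetical.

def build_responses_request(model: str, prompt: str,
                            mcp_server_label: str, mcp_server_url: str) -> dict:
    """Assemble the JSON body for a Responses API call that exposes an
    MCP server as a tool the model may invoke autonomously."""
    return {
        "model": model,
        "input": prompt,
        "tools": [
            {
                "type": "mcp",                  # Model Context Protocol tool
                "server_label": mcp_server_label,
                "server_url": mcp_server_url,
                "require_approval": "never",    # let the agent call tools freely
            }
        ],
    }

request = build_responses_request(
    model="gpt-4o",
    prompt="Summarize the open issues in the tracker.",
    mcp_server_label="issue_tracker",          # hypothetical server
    mcp_server_url="https://example.com/mcp",  # placeholder URL
)
print(request["tools"][0]["type"])
```

In a real application this body would be sent via the official OpenAI client (`client.responses.create(**request)`); building the payload separately keeps the tool wiring easy to inspect and test.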
OpenAI is also venturing into hardware, demonstrated by its acquisition of Jony Ive's AI hardware startup, io, for $6.5 billion. This acquisition signals OpenAI's ambition to redefine human-computer interaction through AI-native devices. Jony Ive, the former Apple design chief, and his LoveFrom team will assume a central role in OpenAI's design efforts, influencing the company's entire design approach and future product development. The move aims to position OpenAI as a competitor to tech giants like Apple and Google in the consumer electronics market. According to a leaked recording from an OpenAI staff meeting, CEO Sam Altman hinted at a secret project in collaboration with Jony Ive, focusing on developing AI "companion" devices. These devices are envisioned as a core part of everyday life, potentially replacing or reducing reliance on screens. While specific details are scarce, the device is expected to be small and contextually aware of the user's surroundings. Altman expressed hopes of shipping 100 million of these AI companions, though he acknowledged that wouldn’t be possible on day one. Recommended read:
References :
Shannon Carroll@Quartz
//
OpenAI is making a significant push into the hardware sector by acquiring io, the design startup founded by former Apple design chief Jony Ive, in a $6.5 billion deal. This move signifies OpenAI's ambition to create a new generation of AI-powered devices that move beyond current limitations of smartphones and laptops. The collaboration has been ongoing behind the scenes since 2023, with Ive and OpenAI CEO Sam Altman aiming to craft products that make AI more accessible and intuitive. The acquisition includes bringing over 50 engineers and designers from io, including ex-Apple veterans responsible for iconic designs like the iPhone and iPad.
OpenAI and Ive's vision is to revolutionize how we interact with technology. The goal is to develop AI-native devices that blend seamlessly into daily life and enhance AI experiences. Specific product details remain under wraps, but the initial device is rumored to be a pocket-sized gadget without a screen, capable of understanding its user's surroundings and activities. It is designed to complement existing devices like laptops and phones, potentially becoming a "third core device." Altman has even set a target of shipping 100 million units, potentially reaching that mark faster than any company has with a new product. This acquisition marks a strategic shift for OpenAI, venturing into consumer-facing products and directly competing with tech giants like Google, Apple, and Microsoft. Jony Ive's design firm, LoveFrom, will take charge of creative work across OpenAI, influencing not only hardware but also the look and feel of all its products. Peter Welinder, an early OpenAI executive, will lead the io division, overseeing the development of this new AI product line. AI experts are weighing in on the merger and on how the new devices could reshape everyday computing. Recommended read:
References :
@blogs.nvidia.com
//
NVIDIA is significantly expanding its presence in the AI ecosystem through strategic partnerships and the introduction of innovative technologies. At Computex 2025, CEO Jensen Huang unveiled NVLink Fusion, a groundbreaking program that opens NVIDIA's high-speed NVLink interconnect technology to non-NVIDIA CPUs and accelerators. This move is poised to solidify NVIDIA's role as a central component in AI infrastructure, even in systems utilizing silicon from other vendors, including MediaTek, Marvell, Fujitsu, and Qualcomm. This initiative represents a major shift from NVIDIA's previously exclusive use of NVLink and is intended to enable the creation of semi-custom AI infrastructures tailored to specific needs.
This strategy ensures that while customers may incorporate rival chips, the underlying AI ecosystem remains firmly rooted in NVIDIA's technologies, including its GPUs, interconnects, and software stack. NVIDIA is also teaming up with Foxconn to construct an AI supercomputer in Taiwan, further demonstrating its commitment to advancing AI capabilities in the region. The collaboration will see Foxconn subsidiary, Big Innovation Company, delivering the infrastructure for 10,000 NVIDIA Blackwell GPUs. This substantial investment aims to empower Taiwanese organizations by providing the necessary AI cloud computing resources to facilitate the adoption of AI technologies across both private and public sectors. In addition to hardware advancements, NVIDIA is also investing in quantum computing research. Taiwan's National Center for High-Performance Computing (NCHC) is deploying a new NVIDIA-powered AI supercomputer designed to support climate science, quantum research, and the development of large language models. Built by ASUS, this supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, along with other advanced NVIDIA technologies. This initiative aligns with NVIDIA's broader strategy to drive breakthroughs in sovereign AI, quantum computing, and advanced scientific computation, positioning Taiwan as a key hub for AI development and technological autonomy. Recommended read:
References :
Joe DeLaere@NVIDIA Technical Blog
//
NVIDIA has announced the opening of its NVLink technology to rival companies, a move revealed by CEO Jensen Huang at Computex 2025. The new program, called NVLink Fusion, allows companies making custom CPUs and accelerators to license the NVLink port designs. This opens the door for non-NVIDIA chips to integrate with NVIDIA's AI infrastructure, fostering a more flexible AI hardware ecosystem. MediaTek, Marvell, Fujitsu, and Qualcomm are among the early partners signing on to integrate their chips with NVIDIA's GPUs via NVLink Fusion.
NVIDIA's decision to extend NVLink support is a strategic play to remain central to the AI landscape. By enabling companies to combine their custom silicon with NVIDIA's technology, NVIDIA ensures it remains essential to their AI strategies and potentially captures additional revenue streams. NVLink Fusion allows for semi-custom AI infrastructure where other processors are involved, but the underlying connective tissue belongs to NVIDIA. The high-speed interconnect offers significantly higher bandwidth than PCIe 5.0, an advantage for CPU-to-GPU communication. This expansion doesn't mean NVIDIA is fully opening the interconnect standard: connecting an Intel CPU to an AMD GPU directly using NVLink Fusion remains impossible. NVIDIA is essentially allowing semi-custom accelerator designs to take advantage of the high-speed interconnect even if the accelerator isn't designed by NVIDIA. As part of the announcement, NVIDIA also unveiled its next-generation Grace Blackwell systems and a new AI platform called DGX Cloud Lepton, further solidifying its position in the AI compute market. Recommended read:
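The pairing constraint described above (third-party silicon can link to NVIDIA parts, but two non-NVIDIA chips cannot link to each other) can be captured as a one-line rule. This is a toy model of the stated restriction, not an NVIDIA API; the vendor names are illustrative inputs.

```python
# Toy model of the NVLink Fusion pairing constraint: a link is only
# possible when at least one endpoint is NVIDIA silicon.

def nvlink_fusion_pairing_allowed(cpu_vendor: str, accelerator_vendor: str) -> bool:
    """Return True if the CPU/accelerator pair can share an NVLink Fusion link."""
    return "nvidia" in (cpu_vendor.lower(), accelerator_vendor.lower())

print(nvlink_fusion_pairing_allowed("Qualcomm", "NVIDIA"))  # custom CPU + NVIDIA GPU -> True
print(nvlink_fusion_pairing_allowed("Intel", "AMD"))        # both third-party -> False
```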
References :
Joe DeLaere@NVIDIA Technical Blog
//
NVIDIA has unveiled NVLink Fusion, a technology that expands the capabilities of its high-speed NVLink interconnect to custom CPUs and ASICs. This move allows customers to integrate non-NVIDIA CPUs or accelerators with NVIDIA's GPUs within their rack-scale setups, fostering the creation of heterogeneous computing environments tailored for diverse AI workloads. This technology opens up the possibility of designing semi-custom AI infrastructure with NVIDIA's NVLink ecosystem, allowing hyperscalers to leverage the innovations in NVLink, NVIDIA NVLink-C2C, NVIDIA Grace CPU, NVIDIA GPUs, NVIDIA Co-Packaged Optics networking, rack scale architecture, and NVIDIA Mission Control software.
NVLink Fusion enables users to deliver top performance scaling with semi-custom ASICs or CPUs. As hyperscalers are already deploying full NVIDIA rack solutions, this expansion caters to the growing demand for specialized AI factories, where diverse accelerators work together at rack scale with maximal bandwidth and minimal latency to serve the largest number of users in the most power-efficient way. The advantage of using NVLink for CPU-to-GPU communication is that it offers 14x higher bandwidth than PCIe 5.0 (128 GB/s). The technology will be offered in two configurations: the first connects custom CPUs to NVIDIA GPUs, while the second pairs NVIDIA Grace CPUs with custom accelerators. NVIDIA CEO Jensen Huang emphasized that AI is becoming a fundamental infrastructure, akin to the internet and electricity. He envisions an AI infrastructure industry worth trillions of dollars, powered by AI factories that produce valuable tokens. NVIDIA's approach involves expanding its ecosystem through partnerships and platforms like CUDA-X, which is used across a range of applications. NVLink Fusion is a crucial part of this vision, enabling the construction of semi-custom AI systems and solidifying NVIDIA's role at the center of AI development. Recommended read:
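The bandwidth claim above can be sanity-checked with simple arithmetic: a 14x advantage over PCIe 5.0's ~128 GB/s lands at roughly 1.8 TB/s, the per-GPU NVLink figure NVIDIA quotes for its Blackwell generation.

```python
# Back-of-the-envelope check of the bandwidth figures quoted above.
PCIE5_X16_GBPS = 128      # GB/s for a PCIe 5.0 x16 link, from the text
NVLINK_ADVANTAGE = 14     # claimed multiplier over PCIe 5.0

nvlink_gbps = PCIE5_X16_GBPS * NVLINK_ADVANTAGE
print(nvlink_gbps)        # 1792 GB/s, i.e. ~1.8 TB/s per GPU
```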
References :
Anton Shilov@tomshardware.com
//
References :
R. Scott Raynovich
www.tomshardware.com
Nvidia CEO Jensen Huang has expressed concern about the rising competition from Huawei in the artificial intelligence hardware sector. Huang admitted that he is fearful of Huawei, acknowledging the company's significant progress in computing, networking technology, and software capabilities, all essential for advancing AI. He noted that China is not far behind the U.S. in AI capabilities, almost on par, particularly in AI hardware development. Huang's comments came during the Hill and Valley Forum, where business leaders and lawmakers discussed technology and national security.
China's advancements in AI hardware are driven by numerous companies, with Huawei leading the pack. Huawei's AI strategy encompasses everything from its Ascend 900-series AI accelerators to servers and rack-scale solutions for cloud data centers. The company recently unveiled CloudMatrix 384, a system packing 384 dual-chiplet HiSilicon Ascend 910C processors interconnected using a fully optical mesh network. Huawei has already sold over ten CloudMatrix 384 systems to Chinese customers, indicating a growing interest in domestic alternatives to Nvidia hardware. The CloudMatrix 384 system spans 16 racks and achieves roughly 300 PFLOPs of dense BF16 compute, nearly double Nvidia's GB200 NVL72. While it offers superior memory bandwidth and HBM capacity, it consumes more power per FLOP. Despite these differences, Huang recognized Huawei as one of the most formidable technology companies in the world, highlighting their incredible progress in recent years and the potential threat they pose to Nvidia's dominance in the AI hardware market. Recommended read:
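The CloudMatrix 384 figures above imply some straightforward per-chip and per-rack numbers. The system total and chip/rack counts come from the text; the derived values below are simple arithmetic, not Huawei specifications.

```python
# Arithmetic implied by the CloudMatrix 384 figures: ~300 PFLOPs of dense
# BF16 compute spread over 384 Ascend 910C packages across 16 racks.
TOTAL_PFLOPS = 300            # dense BF16, from the text
NUM_CHIPS = 384
NUM_RACKS = 16

per_chip_pflops = TOTAL_PFLOPS / NUM_CHIPS
chips_per_rack = NUM_CHIPS // NUM_RACKS
print(round(per_chip_pflops, 3))  # ~0.781 PFLOPs per dual-chiplet package
print(chips_per_rack)             # 24 packages per rack
```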
References :
@blogs.nvidia.com
//
Nvidia is currently facing pressure from the U.S. government regarding AI GPU export rules. CEO Jensen Huang has been advocating for the Trump administration to relax these restrictions, arguing they hinder American companies' ability to compete in the global market. Huang stated at the Hill and Valley Forum that China is not far behind the U.S. in AI capabilities, emphasizing the need to accelerate the diffusion of American AI technology worldwide. He also acknowledged Huawei's progress in computing, networking, and software, noting their development of the CloudMatrix 384 system. This system, powered by Ascend 910C accelerators, is considered competitive with Nvidia's GB200 NVL72, signaling the emergence of domestic alternatives in China.
Despite Nvidia's pleas, the Trump administration is considering tighter controls on AI GPU exports. The administration plans to use chip access as leverage in trade negotiations with other nations. This approach contrasts with Nvidia's view that restricting exports will only fuel the development of competing hardware and software in countries like China. According to the AI Diffusion framework, access to advanced AI chips like Nvidia’s H100 is only unrestricted for companies based in the U.S. and "Tier 1" nations, while those in "Tier 2" nations face annual limits and "Tier 3" countries are effectively barred. Adding to the complexity, Nvidia is also engaged in a public dispute with AI startup Anthropic over the export restrictions. Anthropic has endorsed the Biden-era "AI Diffusion Rule" and has claimed there has been chip smuggling to China. An Nvidia spokesperson dismissed Anthropic's claims about chip smuggling tactics as "tall tales," arguing that American firms should focus on innovation instead of trying to manipulate policy for competitive advantage. As the May 15th export controls deadline approaches, the tensions continue to rise within the AI industry over the balance between national security, economic prosperity, and global competitiveness. Recommended read:
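The three-tier structure of the AI Diffusion framework described above can be encoded as a small lookup. Only the shape of the rules (unrestricted / annual caps / effectively barred) comes from the text; tier membership for any given country is a policy question outside this sketch.

```python
# Minimal encoding of the tiered access regimes attributed to the
# AI Diffusion framework. Tier assignments themselves are not modeled.
from typing import Optional

def export_policy(tier: int) -> Optional[str]:
    """Return the access regime for a destination tier, as summarized above."""
    if tier == 1:
        return "unrestricted"    # U.S. and "Tier 1" nations
    if tier == 2:
        return "annual_limit"    # capped yearly volumes
    if tier == 3:
        return "barred"          # effectively no access
    return None                  # unknown tier

print(export_policy(1), export_policy(2), export_policy(3))
```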
References :
Harsh Sharma@TechDator
//
Huawei is intensifying its challenge to Nvidia in the Chinese AI market by preparing to ship its Ascend 910C AI chips in large volumes. This move comes at a crucial time as Chinese tech firms are actively seeking domestic alternatives to Nvidia's H20 chip, which is now subject to U.S. export restrictions. The Ascend 910C aims to bolster China's tech independence, providing a homegrown solution amidst limited access to foreign chips. The chip combines two 910B processors into one package, utilizing advanced integration to rival the performance of Nvidia’s H100.
Huawei's strategy involves a multi-pronged approach. Late last year, the company sent Ascend 910C samples to Chinese tech firms and began taking early orders. Deliveries have already started, signaling Huawei's readiness to scale up production. While the 910C may not surpass Nvidia's newer B200, it is designed to meet the needs of Chinese developers who are restricted from accessing foreign options. The production of the Ascend 910C involves a complex supply chain, with parts crafted by China's Semiconductor Manufacturing International Corporation (SMIC) using its N+2 7nm process. Despite the challenges from Huawei, Nvidia remains committed to the Chinese market. Nvidia is reportedly collaborating with DeepSeek, a local AI leader, to develop chips within China using domestic factories and materials. This plan includes establishing research teams in China and utilizing SMIC, along with local memory makers and packaging partners, to produce China-specific chips. CEO Jensen Huang has affirmed that Nvidia will continue to make significant efforts to optimize its products to comply with regulations and serve Chinese companies, even amidst ongoing trade tensions and tariffs. Recommended read:
References :
@hothardware.com
//
References :
hothardware.com
insideAI News
Nvidia's Blackwell architecture is making significant strides in both the AI and gaming sectors. The GeForce RTX 5060 Ti, priced at $429, brings the Blackwell architecture to mainstream gamers, targeting 1440p gaming with power efficiency and overclocking headroom. Reviews indicate that the RTX 5060 Ti is built around the GB206 GPU and is the full implementation of that chip, with roughly 21.9 billion transistors manufactured on TSMC's 4N node and arranged into three GPCs. It is the first of the Blackwell cards to target under $500 and should see peak performance realized in advanced AI rendering techniques like DLSS 4 with multi-frame generation.
CoreWeave, a GPU cloud platform, is among the first to bring Nvidia's Grace Blackwell GB200 NVL72 systems online at scale. Companies like Cohere, IBM, and Mistral AI are leveraging these systems for model training and deployment. Cohere is using Grace Blackwell Superchips to develop secure enterprise AI applications, with reported performance increases of up to 3x in training for 100 billion-parameter models. IBM is scaling its deployment to thousands of Blackwell GPUs on CoreWeave to train its Granite open-source AI models for IBM watsonx Orchestrate. Mistral AI is also utilizing the Blackwell GPUs to build the next generation of open-source AI models, reporting a 2x improvement in performance for dense model training. However, Nvidia faces challenges due to U.S. government restrictions on exports to China. The company is writing off $5.5 billion in GPUs as the U.S. government chokes off supply of H20s to China, highlighting geopolitical impacts on the tech industry. The U.S. government's concern stems from the potential use of these processors in Chinese supercomputers. In response, Nvidia is reportedly working to onshore the manufacturing of chips and other components, as well as the assembly of systems, to the United States. Recommended read:
References :
NVIDIA Newsroom@NVIDIA Blog
//
Nvidia has announced plans to manufacture its AI supercomputers entirely within the United States, marking the first time the company will conduct chip packaging and supercomputer assembly domestically. The move, driven by increasing global demand for AI chips and the potential impact of tariffs, aims to establish a resilient supply chain and bolster the American AI ecosystem. Nvidia is partnering with major manufacturing firms including TSMC, Foxconn, and Wistron to construct and operate these facilities.
Mass production of Blackwell chips has already commenced at TSMC's Phoenix, Arizona plant. Nvidia is constructing supercomputer manufacturing plants in Texas, partnering with Foxconn in Houston and Wistron in Dallas. These facilities are expected to ramp up production within the next 12-15 months. More than a million square feet of manufacturing space has been commissioned to build and test NVIDIA Blackwell chips in Arizona and AI supercomputers in Texas. The company anticipates producing up to $500 billion worth of AI infrastructure in the U.S. over the next four years through these partnerships. This includes designing and building "gigawatt AI factories" to produce NVIDIA AI supercomputers completely within the US. CEO Jensen Huang stated that American manufacturing will help meet the growing demand for AI chips and supercomputers, strengthen the supply chain and improve resiliency. The White House has lauded Nvidia's decision as "the Trump Effect in action". Recommended read:
References :