Sean Hollister@The Verge
//
Nvidia CEO Jensen Huang unveiled the company's next generation of AI chips at the GTC 2025 conference, including the Blackwell Ultra GB300 and Vera Rubin, its next AI superchips. During the keynote, Huang said AI is at an "inflection point," tracing its evolution from perception to generative AI and now to agentic AI, which can understand context and generate answers. Nvidia's new roadmap includes the Blackwell Ultra, slated for release in the second half of 2025, and the Vera Rubin AI chip, expected in late 2026.
The Blackwell Ultra isn't built on a completely new architecture, but it offers enhanced capabilities, including 20 petaflops of AI performance and 288GB of HBM3e memory. Beyond the chip announcements, Nvidia revealed that it is building the Nvidia Accelerated Quantum Research Center (NVAQC) in Boston, aimed at integrating quantum computing with AI supercomputers, despite Huang's recent claims that practical quantum systems are still decades away. The NVAQC will collaborate with institutions such as the Harvard Quantum Initiative and MIT to tackle challenging problems in quantum computing and enable accelerated quantum supercomputing.
Jaime Hampton@BigDATAwire
//
NVIDIA's GTC 2025 showcased significant advancements in AI, marked by the unveiling of the Blackwell Ultra GPU and the Vera Rubin roadmap extending through 2027. CEO Jensen Huang emphasized a 40x AI performance leap with the Blackwell platform compared to its predecessor, Hopper, highlighting its crucial role in inference workloads. The conference also introduced open-source ‘Dynamo’ software and advancements in humanoid robotics, demonstrating NVIDIA’s commitment to pushing AI boundaries.
The Blackwell platform is now in full production to meet strong customer demand, and the Vera Rubin roadmap details the next generation of superchips expected in 2026. Huang also touted new DGX systems and a push toward photonic switches to handle growing data demands more efficiently. Blackwell Ultra will offer 288GB of memory, and NVIDIA claims the GB300 chip delivers 1.5x the AI performance of the GB200. These advancements aim to bolster AI reasoning capabilities and energy efficiency, positioning NVIDIA to maintain its dominance in AI infrastructure.
Chris McKay@Maginative
//
NVIDIA's GTC 2025 event showcased significant advancements in AI infrastructure, highlighting the Blackwell Ultra and Rubin architectures, along with several related technologies and partnerships. Jensen Huang, Nvidia CEO, delivered a keynote address outlining the company’s vision for the AI-powered future, emphasizing improvements in processor performance, network design, and memory capabilities. The Blackwell Ultra GPUs are being integrated into DGX systems to meet the rising demands of AI workloads, especially in inference and reasoning.
NVIDIA is also expanding beyond chips with the introduction of desktop AI supercomputers for developers. The DGX Station, powered by the GB300 Blackwell Ultra Superchip, aims to bring data-center-level AI capabilities to a compact form factor. NVIDIA also introduced Dynamo, an open-source inference framework engineered to maximize token revenue for AI factories deploying reasoning models. The presentation emphasized a clear roadmap for data center computing, advancements in AI reasoning capabilities, and bold moves into robotics and autonomous vehicles.
@tomshardware.com
//
Nvidia unveiled its next-generation data center GPU, the Blackwell Ultra, at its GTC event in San Jose. Expanding on the Blackwell architecture, the Blackwell Ultra GPU will be integrated into the DGX GB300 and DGX B300 systems. The DGX GB300, designed with a rack-scale, liquid-cooled architecture, is powered by the Grace Blackwell Ultra Superchip, combining 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell Ultra GPUs. The Blackwell Ultra B300 data center GPU packs up to 288GB of HBM3e memory and offers 1.5x the compute potential of the existing B200.
The Blackwell Ultra GPU promises up to a 70x speedup in AI inference and reasoning compared to the previous Hopper-based generation, achieved through hardware and networking advancements in the DGX GB300 system. Blackwell Ultra is designed to meet the demand for test-time-scaling inference with a 1.5x increase in FP4 compute. Nvidia CEO Jensen Huang suggested that the new Blackwell chips render the previous generation obsolete, emphasizing the significant leap forward in AI infrastructure.
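Taking the figures quoted in this digest at face value (20 petaflops of AI performance for Blackwell Ultra, and 1.5x the compute of the B200), the implied B200 figure follows from simple division. This is a back-of-the-envelope sketch using only the numbers reported above, not official B200 specifications, and it assumes both claims refer to the same precision and metric:

```python
# Derive the implied B200 compute figure from two claims quoted above:
#   - Blackwell Ultra: 20 petaflops of AI performance
#   - Blackwell Ultra: 1.5x the compute of the B200
ultra_pflops = 20.0
speedup_vs_b200 = 1.5

# If Ultra is 1.5x the B200, the implied B200 figure is Ultra / 1.5.
implied_b200_pflops = ultra_pflops / speedup_vs_b200
print(f"Implied B200 AI performance: ~{implied_b200_pflops:.1f} petaflops")
# ~13.3 petaflops, if both claims use the same metric
```

Whether the two claims actually use the same metric (dense vs. sparse FP4, chip vs. system) is not stated in the coverage, so the result is only indicative.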
ChinaTechNews.com Staff@ChinaTechNews.com
//
Nvidia Corp. has signaled a strong trajectory for AI-driven growth into 2025, bolstered by a fourth-quarter earnings and revenue beat. Revenue jumped 78% year-over-year, surpassing investor expectations, with earnings of $0.89 per share, ahead of estimates. Nvidia's guidance for the current quarter forecasts sales of $43 billion, underscoring the company's confidence in sustained demand for its AI-related products.
Nvidia's success is attributed to high demand for its GPUs, particularly for AI applications. The company has begun producing its next-generation Blackwell GPUs, with CEO Jensen Huang noting strong demand. Data Center revenue rose a remarkable 93% year-over-year to $35.6 billion, underscoring Nvidia's leadership in AI hardware and its pivotal role in the ongoing AI buildout.
@techcrunch.com
//
References: community.openai.com, Bloomberg Technology
DeepSeek is rapidly becoming a major player in AI, drawing attention and concern from both US officials and established companies like OpenAI. DeepSeek is alleged to have circumvented US restrictions on advanced AI chip purchases: reports indicate the company obtained Nvidia chips through third-party transactions in Singapore, potentially violating export regulations. DeepSeek's growing influence is also evident in its AI model performance, which has become a benchmark against which other models are measured.
The competitive landscape is further complicated by new models such as the Allen Institute for AI's Tulu 3 405B, an open-source model that claims to surpass DeepSeek V3 and even OpenAI's GPT-4o on specific benchmarks. Amid the intensifying race for AI superiority, OpenAI is reportedly discussing ways to protect its IP from competitors like DeepSeek, including watermarking. Meanwhile, European contender Mistral AI is reportedly losing ground to its US counterparts and facing significant pressure from DeepSeek's rise, potentially ceding market share and ARR.