Cierra Choucair@thequantuminsider.com
//
Nvidia CEO Jensen Huang unveiled the company's latest advancements in AI and quantum computing at GTC 2025, laying out a roadmap spanning data center computing, AI reasoning, robotics, and autonomous vehicles. The centerpiece was the Blackwell platform, now in full production, which Nvidia says delivers a 40x performance leap over its predecessor, Hopper, on inference workloads. Nvidia is also countering the DeepSeek efficiency challenge, with a focus on the Rubin AI chips slated for late 2026.
Nvidia is also establishing the NVIDIA Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The center will collaborate with industry leaders and top universities to address quantum computing challenges, and is set to begin operations later this year, supporting the broader quantum ecosystem by accelerating the transition from experimental to practical quantum computing. NVAQC will employ NVIDIA GB200 NVL72 systems and the CUDA-Q platform to power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications.
Cierra Choucair@thequantuminsider.com
//
NVIDIA is establishing the NVIDIA Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The aim of the NVAQC is to enable accelerated quantum supercomputing, addressing quantum computing challenges such as qubit noise and error correction. NVIDIA will collaborate with commercial and academic partners, including industry leaders Quantinuum, Quantum Machines, and QuEra, as well as researchers from the Harvard Quantum Initiative (HQI) and MIT's Engineering Quantum Systems (EQuS) group.
NVIDIA's GB200 NVL72 systems and the CUDA-Q platform will power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications. The center will support the broader quantum ecosystem, accelerating the transition from experimental to practical quantum computing. Despite Huang's recent statement that practical quantum systems are likely still 20 years away, the investment signals confidence in the technology's long-term potential.
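The center's software stack centers on CUDA-Q, which lets researchers write quantum kernels in Python and dispatch them to GPU-accelerated simulators or attached QPUs. As a rough illustration only, here is a minimal sketch of that workflow using the publicly available cudaq package; the Bell-state kernel and the "nvidia" simulator target are generic examples chosen for brevity, not NVAQC research code.

# Minimal CUDA-Q sketch: define a quantum kernel once, then run it on
# NVIDIA's GPU-accelerated statevector simulator. Generic example, not NVAQC code.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)     # allocate two qubits
    h(qubits[0])                  # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])  # entangle with a controlled-X
    mz(qubits)                    # measure both qubits

cudaq.set_target("nvidia")        # GPU-backed simulator; requires a CUDA GPU
counts = cudaq.sample(bell, shots_count=1000)
print(counts)                     # expect roughly even counts of "00" and "11"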
Tris Warkentin@The Official Google Blog
//
Google AI has released Gemma 3, a new family of open models designed for efficient, on-device AI applications. Gemma 3 is built on the same technology that underpins Gemini 2.0 and is intended to run efficiently on a single GPU or TPU. The models come in four sizes: 1B, 4B, 12B, and 27B parameters, with both pre-trained and instruction-tuned variants, allowing users to select the model that best fits their hardware and application needs.
Gemma 3 offers practical advantages in efficiency and portability: the 27B version has shown strong performance in evaluations while still running on a single GPU. The 4B, 12B, and 27B models can process both text and images, support more than 140 languages, and offer a 128,000-token context window, making them well suited to tasks that require processing large amounts of information. Google has also built safety tooling into Gemma 3, including ShieldGemma 2, a safety checker for images.
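Because the weights are published openly, trying a checkpoint locally takes only a few lines of Python. The sketch below is a hedged example using the Hugging Face transformers text-generation pipeline; the model id "google/gemma-3-1b-it" (the smallest, text-only instruction-tuned variant) is an assumption to verify against the official Gemma 3 collection, and the checkpoints require accepting Google's license on Hugging Face.

# Assumed model id and recent transformers/torch versions; adjust as needed.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",  # assumed smallest, text-only instruction-tuned variant
    torch_dtype=torch.bfloat16,    # keeps memory low enough for a single GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a 128K-token context window useful for?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply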
Harry Goldstein@IEEE Spectrum
//
The quantum computing field is experiencing a surge in activity, with several significant developments reported recently. VTT Technical Research Centre of Finland and IQM Quantum Computers have unveiled Europe's first 50-qubit superconducting quantum computer, accessible to researchers and companies through the VTT QX quantum computing service. This milestone strengthens Finland's position as a global leader in quantum computing, following a phased development plan that began with a 5-qubit system in 2021.
Chinese researchers have also made headlines with Zuchongzhi 3.0, a 105-qubit superconducting quantum processor. They claim it completed a computational task in seconds that would take the world's most powerful supercomputer an estimated 6.4 billion years to replicate. While the task was a benchmark designed to favor quantum processors, it reinforces the potential for quantum computational advantage. Separately, Mitsubishi Electric and partners are collaborating to develop scalable quantum information processing by connecting multiple quantum devices in practical environments, addressing the limitations of single quantum computers.