News from the AI & ML world

DeeperML - #computing

Cierra Choucair@The Quantum Insider // 26d
Nvidia CEO Jensen Huang unveiled the company's latest advancements in AI and quantum computing at GTC 2025, emphasizing a clear roadmap for data center computing, AI reasoning, robotics, and autonomous vehicles. The centerpiece was the Blackwell platform, now in full production, which boasts a 40x performance leap over its predecessor, Hopper, a gain that is crucial for inference workloads. Nvidia is also countering the DeepSeek efficiency challenge, with a focus on the Rubin AI chips slated for late 2026.

Nvidia is establishing the NVIDIA Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The center will collaborate with industry leaders and top universities to address quantum computing challenges. NVAQC is set to begin operations later this year, supporting the broader quantum ecosystem by accelerating the transition from experimental to practical quantum computing. NVAQC will employ the NVIDIA GB200 NVL72 systems and CUDA-Q platform to power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications.
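
To give a concrete sense of what research code on the CUDA-Q platform looks like, here is a minimal sketch of a two-qubit Bell-state kernel in CUDA-Q's Python interface. The API usage is based on NVIDIA's public CUDA-Q documentation rather than on anything in the NVAQC announcement, so treat it as illustrative only.

    import cudaq

    # Kernel that prepares and measures a two-qubit Bell state.
    @cudaq.kernel
    def bell():
        qubits = cudaq.qvector(2)
        h(qubits[0])                   # put qubit 0 into superposition
        x.ctrl(qubits[0], qubits[1])   # entangle qubit 1 with qubit 0
        mz(qubits)                     # measure both qubits

    # Sample the kernel; on a GPU-backed simulator target this runs on NVIDIA hardware.
    counts = cudaq.sample(bell, shots_count=1000)
    print(counts)  # expect roughly even counts of '00' and '11'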

Recommended read:
References :
  • techxplore.com: Nvidia CEO Jensen Huang unveils new Rubin AI chips at GTC 2025
  • venturebeat.com: Nvidia’s GTC 2025 keynote: 40x AI performance leap, open-source ‘Dynamo’, and a walking Star Wars-inspired ‘Blue’ robot
  • The Quantum Insider: The NVIDIA Accelerated Quantum Research Center Will Bring Together Industry Partners and AI Supercomputers to Advance QPUs
  • BigDATAwire: Today marks the end of Nvidia’s GPU Technology Conference (GTC) 2025, a weeklong event in San Jose, California, that will be remembered for a long time, if not for the content
  • TheSequence: The announcements at GTC covered both AI chips and models.
  • The Quantum Insider: NVIDIA’s Quantum Strategy: Not Building the Computer, But the World That Enables It

Cierra Choucair@The Quantum Insider // 27d
NVIDIA is establishing the NVIDIA Accelerated Quantum Research Center (NVAQC) in Boston to integrate quantum hardware with AI supercomputers. The aim of the NVAQC is to enable accelerated quantum supercomputing, addressing quantum computing challenges such as qubit noise and error correction. Commercial and academic partners will work with NVIDIA, with collaborations involving industry leaders like Quantinuum, Quantum Machines, and QuEra, as well as researchers from Harvard's HQI and MIT's EQuS.

NVIDIA's GB200 NVL72 systems and the CUDA-Q platform will power research on quantum simulations, hybrid quantum algorithms, and AI-driven quantum applications. The center will support the broader quantum ecosystem, accelerating the transition from experimental to practical quantum computing. Despite the CEO's recent statement that practical quantum systems are likely still 20 years away, this investment shows confidence in the long-term potential of the technology.
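
To make the phrase "hybrid quantum algorithms" concrete, the sketch below shows the core step of a variational workflow in CUDA-Q's Python interface: a quantum device (or simulator) estimates the expectation value of a Hamiltonian for a parameterized circuit, and a classical optimizer would iterate on the parameter. The toy Hamiltonian and API calls follow NVIDIA's CUDA-Q documentation and are assumptions, not details from the center announcement.

    import cudaq
    from cudaq import spin

    # Parameterized ansatz circuit; a classical optimizer would tune `theta`.
    @cudaq.kernel
    def ansatz(theta: float):
        q = cudaq.qvector(2)
        x(q[0])
        ry(theta, q[1])
        x.ctrl(q[1], q[0])

    # Toy two-qubit Hamiltonian built from Pauli spin operators.
    hamiltonian = (5.907
                   - 2.1433 * spin.x(0) * spin.x(1)
                   - 2.1433 * spin.y(0) * spin.y(1)
                   + 0.21829 * spin.z(0)
                   - 6.125 * spin.z(1))

    # Quantum step of the hybrid loop: estimate <H> for theta = 0.59.
    energy = cudaq.observe(ansatz, hamiltonian, 0.59).expectation()
    print(energy)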

Recommended read:
References :
  • The Register - On-Prem: Nvidia invests in quantum computing weeks after CEO said it's decades from being useful
  • The Quantum Insider: NVIDIA Launches Boston-Based Quantum Research Center to Integrate AI Supercomputing with Quantum Computing
  • AI News | VentureBeat: Nvidia will build accelerated quantum computing research center
  • The Quantum Insider: NVIDIA’s Quantum Strategy: Not Building the Computer, But the World That Enables It
  • The Quantum Insider: Quantum Machines Announces NVIDIA DGX Quantum Early Access Program

Tris Warkentin@The Official Google Blog // 35d
Google AI has released Gemma 3, a new family of open-source AI models designed for efficient and on-device AI applications. Gemma 3 models are built with technology similar to Gemini 2.0, intended to run efficiently on a single GPU or TPU. The models are available in various sizes: 1B, 4B, 12B, and 27B parameters, with options for both pre-trained and instruction-tuned variants, allowing users to select the model that best fits their hardware and specific application needs.

Gemma 3 offers practical advantages including efficiency and portability. For example, the 27B version has demonstrated robust performance in evaluations while still being capable of running on a single GPU. The 4B, 12B, and 27B models can process both text and images and support more than 140 languages. The models have a context window of 128,000 tokens, making them well suited for tasks that require processing large amounts of information. Google has built safety protocols into Gemma 3, including a safety checker for images called ShieldGemma 2.
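
For readers who want to try one of the smaller checkpoints locally, here is a minimal sketch using the Hugging Face transformers library. The model identifier, pipeline task, and version requirement are assumptions based on Google's published model cards (the checkpoints are gated behind a license acceptance), not details taken from the articles below.

    # Assumes transformers >= 4.50 (Gemma 3 support), the accelerate package,
    # and access to the gated "google/gemma-3-1b-it" checkpoint -- all assumptions.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="google/gemma-3-1b-it",  # assumed instruction-tuned 1B, text-only variant
        device_map="auto",             # place the model on a single GPU if one is available
    )

    messages = [{"role": "user", "content": "In one sentence, what is a context window?"}]
    # Prints the chat transcript, including the model's generated reply.
    print(generator(messages, max_new_tokens=64)[0]["generated_text"])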

Recommended read:
References :
  • MarkTechPost: Google AI Releases Gemma 3: Lightweight Multimodal Open Models for Efficient and On‑Device AI
  • The Official Google Blog: Introducing Gemma 3: The most capable model you can run on a single GPU or TPU
  • AI News | VentureBeat: Google unveils open source Gemma 3 model with 128k context window
  • AI News: Details on the launch of Gemma 3 open AI models by Google.
  • The Verge: Google calls Gemma 3 the most powerful AI model you can run on one GPU
  • Maginative: Google DeepMind’s Gemma 3 Brings Multimodal AI, 128K Context Window, and More
  • TestingCatalog: Gemma 3 sets new benchmarks for open compact models with top score on LMarena
  • AI & Machine Learning: Announcing Gemma 3 on Vertex AI
  • Analytics Vidhya: Gemma 3 vs DeepSeek-R1: Is Google’s New 27B Model a Tough Competition to the 671B Giant?
  • AI & Machine Learning: How to deploy serverless AI with Gemma 3 on Cloud Run
  • The Tech Portal: Google rolls outs Gemma 3, its latest collection of lightweight AI models
  • eWEEK: Google’s Gemma 3: Does the ‘World’s Best Single-Accelerator Model’ Outperform DeepSeek-V3?
  • The Tech Basic: Gemma 3 by Google: Multilingual AI with Image and Video Analysis
  • Analytics Vidhya: Google’s Gemma 3: Features, Benchmarks, Performance and Implementation
  • www.infoworld.com: Google unveils Gemma 3 multi-modal AI models
  • www.zdnet.com: Google claims Gemma 3 reaches 98% of DeepSeek's accuracy - using only one GPU
  • AIwire: Google unveiled open-source Gemma 3, which is multimodal, comes in four sizes, and can handle more information and instructions thanks to a larger context window.
  • Ars OpenForum: Google’s new Gemma 3 AI model is optimized to run on a single GPU
  • THE DECODER: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.
  • Gradient Flow: Gemma 3: What You Need To Know
  • Interconnects: Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
  • OODAloop: Gemma 3, Google's newest lightweight, open-source AI model, is designed for multimodal tasks and efficient deployment on various devices.
  • NVIDIA Technical Blog: Google has released lightweight, multimodal, multilingual models called Gemma 3. The models are designed to run efficiently on phones and laptops.
  • LessWrong: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.

Harry Goldstein@IEEE Spectrum // 40d
The quantum computing field is experiencing a surge in activity, with several significant developments reported recently. VTT Technical Research Centre of Finland and IQM Quantum Computers have unveiled Europe's first 50-qubit superconducting quantum computer, accessible to researchers and companies through the VTT QX quantum computing service. This milestone strengthens Finland's position as a global leader in quantum computing, following a phased development plan that began with a 5-qubit system in 2021.

Chinese researchers have also made headlines with their Zuchongzhi 3.0, a 105-qubit superconducting quantum processor. They claim it completed a computational task in seconds that would take the world’s most powerful supercomputer an estimated 6.4 billion years to replicate. While the task was a benchmark designed to favor quantum processors, it still reinforces the potential for quantum computational advantage. Also, Mitsubishi Electric and partners are collaborating to develop scalable quantum information processing by connecting multiple quantum devices in practical environments, addressing limitations in single quantum computers.

Recommended read:
References :
  • The Quantum Insider: VTT Technical Research Centre of Finland and IQM Quantum Computers, one of the global leaders in superconducting quantum computers, have completed and launched Europe’s first 50-qubit superconducting quantum computer, now open to researchers and companies through the VTT QX quantum computing service.
  • The Quantum Insider: Mitsubishi Electric, Quantinuum K.K., and Partners Pursue Multi-Device Connectivity Research for Scalable Quantum Computing
  • The Quantum Insider: AIST, ORCA Computing Sign MoU to Strengthen Collaboration For The Industrialization of Scalable Photonic Quantum Computing

@medium.com // 89d
Recent advancements in quantum computing highlight the critical mathematical foundations that underpin this emerging technology. Researchers are exploring how quantum bits (qubits) represent information in a way that is fundamentally different from classical bits, often working through the ideas hands-on with packages like Qiskit. The mathematical framework describes a qubit as existing in a superposition of states, a concept visualized on the Bloch sphere, with complex amplitudes whose squared magnitudes give the probabilities of measuring each state. The study of multi-qubit systems also reveals phenomena such as entanglement, a critical resource for quantum computation and secure communication.
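
A minimal Qiskit sketch of the single-qubit picture described above: a Hadamard gate puts a qubit into an equal superposition, and the squared magnitudes of the complex amplitudes give the measurement probabilities. This example assumes a recent Qiskit release with the qiskit.quantum_info module available.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    # One qubit, one Hadamard gate: the state becomes (|0> + |1>)/sqrt(2).
    qc = QuantumCircuit(1)
    qc.h(0)

    state = Statevector.from_instruction(qc)
    print(state)                       # amplitudes of roughly 0.707 for |0> and |1>
    print(state.probabilities_dict())  # {'0': 0.5, '1': 0.5}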

Quantum cryptography is another area benefiting from quantum mechanics, using superposition and entanglement to achieve theoretically unbreakable security. Quantum random bit generation is also under development, with quantum systems producing truly random numbers that are critical for cryptography and simulations. Separately, a new protocol demonstrated on a 54-qubit system generates long-range entanglement, highlighting the ability to control and manipulate quantum states in large systems, which is essential for scalable, error-corrected quantum computing. These advancements are set against a backdrop of intensive research into the mathematical models that capture how quantum phenomena differ from classical physics.
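
As a small illustration of the entanglement such protocols generate at scale, here is a two-qubit Bell-state sketch in Qiskit (the same Qiskit assumptions as in the previous sketch apply): in the computational basis, only the perfectly correlated outcomes '00' and '11' carry any probability.

    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Statevector

    # Bell state (|00> + |11>)/sqrt(2): Hadamard on qubit 0, then CNOT onto qubit 1.
    bell = QuantumCircuit(2)
    bell.h(0)
    bell.cx(0, 1)

    state = Statevector.from_instruction(bell)
    print(state.probabilities_dict())  # {'00': 0.5, '11': 0.5}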

Recommended read:
References :
  • medium.com: Quantum Bits: An in-depth exploration [hands-on Qiskit code] (Article — 2)
  • medium.com: Realising Quantum Teleportation [Maths + Qiskit Code] (Article — 4)
  • quantumcomputingreport.com: Elmos and ID Quantique Collaborate to Develop World’s Smallest Quantum Random Number Generator
  • www.digitimes.com: Google and IBM push ambitious quantum roadmaps amid Jensen Huang's caution at CES