News from the AI & ML world

DeeperML - #researchers

Chris McKay@Maginative //
Google's AI research notebook, NotebookLM, has introduced a significant upgrade that enhances collaboration by allowing users to publicly share their AI-powered notebooks with a simple link. This new feature, called Public Notebooks, enables users to share their research summaries and audio overviews generated by AI with anyone, without requiring sign-in or permissions. This move aims to transform NotebookLM from a personal research tool into an interactive, AI-powered knowledge hub, facilitating easier distribution of study guides, project briefs, and more.

The public sharing feature provides viewers with the ability to interact with AI-generated content like FAQs and overviews, as well as ask questions in chat. However, they cannot edit the original sources, ensuring the preservation of ownership while enabling discovery. To share a notebook, users can click the "Share" button, switch the setting to "Anyone with the link," and copy the link. This streamlined process is similar to sharing Google Docs, making it intuitive and accessible for users.

This upgrade is particularly beneficial for educators, startups, and nonprofits. Teachers can share curated curriculum summaries, startups can distribute product manuals, and nonprofits can publish donor briefing documents without the need to build a dedicated website. By enabling easier sharing of AI-generated notes and audio overviews, Google is demonstrating how generative AI can be integrated into everyday productivity workflows, making NotebookLM a more grounded tool for sense-making of complex material.

Recommended read:
References :
  • Maginative: Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link
  • The Official Google Blog: NotebookLM is adding a new way to share your own notebooks publicly.
  • PCMag Middle East ai: Google Makes It Easier to Share Your NotebookLM Docs, AI Podcasts
  • AI & Machine Learning: How Alpian is redefining private banking for the digital age with gen AI
  • venturebeat.com: Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud
  • TestingCatalog: Google’s Kingfall model briefly goes live on AI Studio before lockdown
  • shellypalmer.com: NotebookLM, one of Google's most viral AI products, just got a really useful upgrade: users can now publicly share notebooks with a link.

@techhq.com //
Dell Technologies and NVIDIA are collaborating to construct the "Doudna" supercomputer for the U.S. Department of Energy (DOE). Named after Nobel laureate Jennifer Doudna, the system will be housed at the Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) and is slated for deployment in 2026. This supercomputer aims to revolutionize scientific research by merging artificial intelligence (AI) with simulation capabilities, empowering over 11,000 researchers across various disciplines, including fusion energy, astronomy, and life sciences. The project represents a significant federal investment in high-performance computing (HPC) infrastructure, designed to maintain U.S. leadership in AI and scientific discovery.

The Doudna supercomputer, also known as NERSC-10, promises a tenfold increase in scientific output compared to its predecessor, Perlmutter, while only consuming two to three times the power. This translates to a three-to-five-fold improvement in performance per watt, achieved through advanced chip design and system-level efficiencies. The system integrates high-performance CPUs with coherent GPUs, enabling direct data access and sharing across processors, addressing traditional bottlenecks in scientific computing workflows. Doudna will also be connected to DOE experimental and observational facilities through the Energy Sciences Network (ESnet), facilitating seamless data streaming and near real-time analysis.
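
The efficiency figure follows directly from the two reported numbers: ten times the scientific output at two to three times the power draw works out to roughly a three-to-five-fold gain per watt. A quick back-of-the-envelope check (the ratios are the article's, the arithmetic is ours):

```python
# Back-of-the-envelope check of the reported Doudna efficiency gain:
# ~10x the scientific output of Perlmutter at 2-3x the power draw.
output_gain = 10.0
power_low, power_high = 2.0, 3.0

perf_per_watt_high = output_gain / power_low   # best case: 5.0x
perf_per_watt_low = output_gain / power_high   # worst case: ~3.3x
print(f"{perf_per_watt_low:.1f}x to {perf_per_watt_high:.1f}x per watt")
```

This matches the stated three-to-five-fold improvement in performance per watt.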

According to NVIDIA CEO Jensen Huang, Doudna is designed to accelerate scientific workflows and act as a "time machine for science," compressing years of discovery into days. Energy Secretary Chris Wright sees it as essential infrastructure for maintaining American technological leadership in AI and quantum computing. The supercomputer emphasizes coherent memory access between CPUs and GPUs, enabling data sharing in heterogeneous processors, which is a requirement for modern AI-accelerated scientific workflows. The Nvidia Vera Rubin supercomputer architecture introduces hardware-level optimizations designed specifically for the convergence of simulation, machine learning, and quantum algorithm development.

Recommended read:
References :
  • TechHQ: Nvidia Vera Rubin supercomputer to serve researchers in fusion energy, astronomy, and life sciences. Dell’s system targets 10x performance, 3-5x better power efficiency, to be deployed in 2026.
  • futurumgroup.com: Can Dell and NVIDIA’s AI Factory 2.0 Solve Enterprise-Scale AI Infrastructure Gaps?

@blogs.nvidia.com //
NVIDIA is significantly expanding its presence in the AI ecosystem through strategic partnerships and the introduction of innovative technologies. At Computex 2025, CEO Jensen Huang unveiled NVLink Fusion, a groundbreaking program that opens NVIDIA's high-speed NVLink interconnect technology to non-NVIDIA CPUs and accelerators. This move is poised to solidify NVIDIA's role as a central component in AI infrastructure, even in systems utilizing silicon from other vendors, including MediaTek, Marvell, Fujitsu, and Qualcomm. This initiative represents a major shift from NVIDIA's previously exclusive use of NVLink and is intended to enable the creation of semi-custom AI infrastructures tailored to specific needs.

This strategy ensures that while customers may incorporate rival chips, the underlying AI ecosystem remains firmly rooted in NVIDIA's technologies, including its GPUs, interconnects, and software stack. NVIDIA is also teaming up with Foxconn to construct an AI supercomputer in Taiwan, further demonstrating its commitment to advancing AI capabilities in the region. The collaboration will see Foxconn subsidiary Big Innovation Company deliver the infrastructure for 10,000 NVIDIA Blackwell GPUs. This substantial investment aims to empower Taiwanese organizations by providing the necessary AI cloud computing resources to facilitate the adoption of AI technologies across both private and public sectors.

In addition to hardware advancements, NVIDIA is also investing in quantum computing research. Taiwan's National Center for High-Performance Computing (NCHC) is deploying a new NVIDIA-powered AI supercomputer designed to support climate science, quantum research, and the development of large language models. Built by ASUS, this supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, along with other advanced NVIDIA technologies. This initiative aligns with NVIDIA's broader strategy to drive breakthroughs in sovereign AI, quantum computing, and advanced scientific computation, positioning Taiwan as a key hub for AI development and technological autonomy.

Recommended read:
References :
  • NVIDIA Newsroom: NVIDIA-Powered Supercomputer to Enable Quantum Leap for Taiwan Research
  • Maginative: NVIDIA Opens Its NVLink Ecosystem to Rivals in Bid to Further Cement AI Dominance
  • www.tomshardware.com: Nvidia teams up with Foxconn to build an AI supercomputer in Taiwan
  • NVIDIA Newsroom: Quantum computing promises to shorten the path to solving some of the world’s biggest computational challenges, from scaling in-silico drug design to optimizing otherwise impossibly complex, large-scale logistics problems.
  • blogs.nvidia.com: NVIDIA Grows Quantum Computing Ecosystem With Taiwan Manufacturers and Supercomputing
  • quantumcomputingreport.com: NVIDIA and AIST Launch ABCI-Q Supercomputer for Hybrid Quantum-AI Research
  • AI News | VentureBeat: Nvidia powers world’s largest quantum research supercomputer
  • the-decoder.com: NVIDIA is opening up its chip ecosystem
  • techvro.com: NVLink Fusion: Nvidia To Sell Hybrid Systems Using AI Chips
  • AIwire: Nvidia’s Global Expansion: AI Factories, NVLink Fusion, AI Supercomputers, and More

Dr. Thad@The Official Google Blog //
Google has introduced DolphinGemma, a new AI model designed to decipher dolphin communication. Developed in collaboration with the Wild Dolphin Project (WDP) and researchers at Georgia Tech, DolphinGemma aims to analyze and generate dolphin vocalizations, potentially paving the way for interspecies communication. For decades, scientists have attempted to understand the complex whistles and clicks dolphins use. With DolphinGemma, researchers hope to decode these sounds and gain insights into the structure and patterns of dolphin communication. The ultimate goal is to determine if dolphins possess a language and eventually, potentially communicate with them.

The foundation for DolphinGemma's development lies in the WDP's extensive collection of dolphin sound recordings. The WDP has been studying a specific community of wild Atlantic spotted dolphins since 1985, using a non-invasive approach. Over decades, the WDP has created video and audio recordings of dolphins, along with correlating notes on their behaviors. DolphinGemma uses Google's SoundStream tokenizer to identify patterns and sequences. By analyzing this massive dataset, DolphinGemma can identify patterns and uncover potential meanings within the dolphins' natural communication, which previously required immense human effort.

Field testing of DolphinGemma is scheduled to begin this summer. During field research, sounds can be recorded on Pixel phones and analyzed with DolphinGemma. The model can also predict the subsequent sounds a dolphin may make, much like how large language models for human language predict the next word or token in a sentence. While understanding dolphin communication is the initial focus, the long-term vision includes establishing a shared vocabulary for interactive communication, utilizing synthetic sounds that dolphins could learn, akin to teaching them a new language. The WDP is also working with the Georgia Institute of Technology to teach dolphins a simple, shared vocabulary, using an underwater computer system called CHAT.
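
The next-sound prediction described above mirrors the next-token framing of language models. As a toy illustration only (the sound labels below are invented, and DolphinGemma itself is a large generative model, not a bigram counter), the framing can be shown with simple follower counts over tokenized vocalizations:

```python
from collections import Counter, defaultdict

# Toy illustration of the next-token framing: treat each vocalization as
# a discrete token (labels here are invented) and predict the most likely
# follower from observed bigram counts.
def train_bigrams(sequences):
    follows = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

recordings = [
    ["whistle_a", "click_burst", "whistle_b"],
    ["whistle_a", "click_burst", "whistle_b"],
    ["whistle_a", "squawk"],
]
model = train_bigrams(recordings)
print(predict_next(model, "whistle_a"))  # click_burst (2 of 3 observations)
```

A real model conditions on long contexts rather than a single preceding token, but the prediction target, the most plausible next sound, is the same.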

@www.analyticsvidhya.com //
Google Cloud Next '25 saw a major push into agentic AI with the unveiling of several key technologies and initiatives aimed at fostering the development and interoperability of AI agents. Google announced the Agent Development Kit (ADK), an open-source framework designed to simplify the creation and management of AI agents. The ADK, written in Python, allows developers to build both simple and complex multi-agent systems. Complementing the ADK is Agent Garden, a collection of pre-built agent patterns and components to accelerate development. Additionally, Google introduced Agent Engine, a fully managed runtime in Vertex AI, enabling secure and reliable deployment of custom agents at a global scale.

Google is also addressing the challenge of AI agent interoperability with the introduction of the Agent2Agent (A2A) protocol. A2A is an open standard intended to provide a common language for AI agents to communicate, regardless of the frameworks or vendors used to build them. This protocol allows agents to collaborate and share information securely, streamlining workflows and reducing integration costs. The A2A initiative has garnered support from over 50 industry leaders, including Atlassian, Box, Cohere, Intuit, and Salesforce, signaling a collaborative effort to advance multi-agent systems.
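
To make the "common language" idea concrete: A2A exchanges are message-based, so one agent can hand a task to another without knowing its framework. The sketch below is a rough illustration of the shape such an exchange could take; the envelope fields (method name, message parts) are assumptions for illustration, not quoted from the published spec:

```python
import json

# Hypothetical sketch of an A2A-style exchange (field names illustrative,
# not taken from the published spec): one agent sends a task to another
# inside a common JSON envelope, so neither side needs to know the other's
# framework or vendor.
def make_task_request(task_id: str, text: str) -> str:
    """Build a JSON-RPC-style request asking a remote agent to run a task."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": task_id,
        "method": "tasks/send",
        "params": {"message": {"role": "user", "parts": [{"text": text}]}},
    })

request = make_task_request("task-1", "Summarize Q2 support tickets")
payload = json.loads(request)
print(payload["method"])  # tasks/send
```

Because both sides agree only on the envelope, a Salesforce-built agent and an Atlassian-built agent could collaborate through the same wire format, which is the interoperability point the 50-plus industry backers are signing up for.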

These advancements are integrated within Vertex AI, Google's comprehensive platform for managing models, data, and agents. Enhancements to Vertex AI include supporting Model Context Protocol (MCP) to ensure secure data connections for agents. In addition to software advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, designed to optimize AI inferencing. Ironwood offers significantly increased compute capacity and high-bandwidth memory, further empowering AI applications within the Google Cloud ecosystem.

Recommended read:
References :
  • AI & Machine Learning: Vertex AI offers new ways to build and manage multi-agent systems
  • Thomas Roccia :verified:: Google just dropped A2A, a new protocol for agents to talk to each other.
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • techstrong.ai: Google Unfurls Raft of AI Agent Technologies at Google Cloud Next ’25
  • Analytics Vidhya: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • Data Analytics: Next 25 developer keynote: From prompt, to agent, to work, to fun
  • www.analyticsvidhya.com: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • PCMag Middle East ai: At Google Cloud Next, We're Off to See the AI Agents (And Huge Performance Gains)
  • Ken Yeung: Google’s Customer Engagement Suite Gets Human-Like AI Agents with Voice, Emotion, and Video Support
  • bdtechtalks.com: News article on how Google’s Agent2Agent can boost AI productivity.
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • www.infoq.com: News article on Google releasing the Agent2Agent protocol for agentic collaboration.

Ken Yeung@Ken Yeung //
Google has launched a new feature called "Discover Sources" for NotebookLM, its AI-powered tool designed to organize and analyze information. Rolling out to all users starting April 2, 2025, the new feature automatically curates relevant websites on a specified topic, recommending up to ten sources accompanied by AI-generated summaries. This enhancement streamlines research by allowing users to quickly surface relevant content from the internet.

NotebookLM, initially launched in 2023 as an AI-powered alternative to Evernote and Microsoft OneNote, previously relied on manual uploads of documents, articles, and notes. "Discover Sources" automates the process of pulling in information from the internet with a single click. The curated sources remain accessible within NotebookLM notebooks, allowing users to leverage them within Briefing Docs, FAQs, and Audio Overviews without repeatedly scouring the internet. This enhancement highlights the growing trend of AI-driven research tools shaping how we work and learn.

Recommended read:
References :
  • The Official Google Blog: New in NotebookLM: Discover sources from around the web
  • Ken Yeung: Google’s NotebookLM Now Helps You Find Web Sources with ‘Discover’
  • TestingCatalog: Discover sources feature rolls out in Google’s NotebookLM for all users
  • Google Workspace Updates: Introducing updates to sources for NotebookLM and NotebookLM Plus
  • Shelly Palmer: Google launched "Discover Sources," a new feature that lets NotebookLM search the web directly within your notebooks—and I'm already addicted. This feature helps you find relevant web content by simply describing your topic, after which NotebookLM searches and summarizes the most relevant sources, which you can add to your notebook with one click.
  • Google Workspace Updates: NotebookLM and the Gemini app are now Core Services with enterprise-grade data protection for all education customers
  • www.techradar.com: I tried the latest update to NotebookLM and it’s never been easier to make an AI podcast out of other people’s articles, for better or worse
  • TestingCatalog: Google plans new Gemini model launch ahead of Cloud Next event
  • www.tomsguide.com: I just tested NotebookLM's new features — and this AI tool blows me away
  • PCMag Middle East ai: NotebookLM lets you upload docs for Google's AI to organize into an easily digestible format, but now you can ask Gemini to find those docs for you.
  • The Official Google Blog: 3 ways to level up your studying with NotebookLM
  • THE DECODER: This article discusses Google's Gemini 2.5 Pro AI model, its performance, and the expansion of access.
  • the-decoder.com: The Decoder reports Google adds web search capabilities to NotebookLM.

Ken Yeung@Ken Yeung //
Google's NotebookLM has been enhanced with a new "Discover Sources" feature, designed to streamline research and note-taking. It allows users to search the web directly within their notebooks, simplifying the process of finding relevant content. Users describe the topic they are looking for; NotebookLM then searches the web, summarizes the most relevant sources, and lets them be added to a notebook with a single click, eliminating the need to switch between browser tabs or manually upload sources.

This new tool recommends up to ten relevant web sources, each accompanied by an annotated summary explaining its importance to the topic. This feature is now available to all NotebookLM users, although the full rollout may take up to a week. "Discover Sources" leverages AI to surface relevant websites and automate the process of pulling information from the internet. NotebookLM users can utilize Briefing Docs, FAQs, and Audio Overviews using these new sources.

Recommended read:
References :
  • Shelly Palmer: NotebookLM’s New “Discover Sources” is Making Me Smile
  • Ken Yeung: Google’s NotebookLM Now Helps You Find Web Sources with ‘Discover’
  • TestingCatalog: Discover sources feature rolls out in Google’s NotebookLM for all users
  • www.tomsguide.com: I just tested NotebookLM's new features — and this AI tool blows me away
  • the-decoder.com: Google adds web search capabilities to NotebookLM

Jonathan Kemper@THE DECODER //
Meta is developing MoCha (Movie Character Animator), an AI system designed to generate complete character animations. MoCha takes natural language prompts describing the character, scene, and actions, along with a speech audio clip, and outputs a cinematic-quality video. This end-to-end model synchronizes speech with facial movements, generates full-body gestures, and maintains character consistency, even managing turn-based dialogue between multiple speakers. The system introduces a "Speech-Video Window Attention" mechanism to solve challenges in AI video generation, improving lip sync accuracy by limiting each frame's access to a specific window of audio data and adding tokens to create smoother transitions.
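
The windowing idea can be sketched as an attention mask in which each video frame may only attend to audio tokens near its own position in time, keeping lip movements tied to local speech. The window size and frame-to-audio alignment below are invented for illustration; MoCha's actual mechanism also adds boundary tokens for smoother transitions:

```python
import numpy as np

# Rough sketch of a windowed speech-video attention mask (sizes and the
# windowing rule are illustrative, not from Meta's paper): each video
# frame may attend only to audio tokens near its aligned time position.
def speech_video_window_mask(n_frames: int, n_audio: int, window: int = 2):
    mask = np.zeros((n_frames, n_audio), dtype=bool)
    for f in range(n_frames):
        # map the frame index to its aligned position in the audio sequence
        center = int(round(f * (n_audio - 1) / max(n_frames - 1, 1)))
        lo, hi = max(0, center - window), min(n_audio, center + window + 1)
        mask[f, lo:hi] = True  # frame f may attend only to this window
    return mask

mask = speech_video_window_mask(n_frames=4, n_audio=8, window=1)
print(mask.astype(int))
```

Restricting each frame's view this way is what improves lip-sync accuracy: a frame cannot be influenced by speech far away in the clip.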

MoCha runs on a diffusion transformer model with 30 billion parameters and produces HD video clips around five seconds long at 24 frames per second. For scenes with multiple characters, the team developed a streamlined prompt system, allowing users to define characters once and reference them with simple tags throughout different scenes. Separately, Meta's AI research head, Joelle Pineau, announced her resignation, effective at the end of May, vacating a high-profile position amid intense competition in AI development.

Maria Deutscher@SiliconANGLE //
Isomorphic Labs, an Alphabet spinout focused on AI-driven drug design, has secured $600 million in its first external funding round. The investment, led by Thrive Capital with participation from Alphabet and GV, will fuel the advancement of Isomorphic Labs' AI drug design engine and therapeutic programs. The company aims to leverage artificial intelligence, including its AlphaFold technology, to revolutionize drug discovery across various therapeutic areas, including oncology and immunology. This funding is expected to accelerate research and development efforts, as well as facilitate the expansion of Isomorphic Labs' team with top-tier talent.

Isomorphic Labs, founded in 2021 by Sir Demis Hassabis, seeks to reimagine and accelerate drug discovery by applying AI. Its AI-powered engine streamlines the design of small molecules with therapeutic applications and can predict the effectiveness of a small molecule's attachment to a protein. The company's software also eases other aspects of the drug development workflow. Isomorphic Labs has already established collaborations with pharmaceutical companies like Eli Lilly and Novartis, and the new funding will support the progression of its own drug programs into clinical development.

Recommended read:
References :
  • www.genengnews.com: DeepMind Spinout Isomorphic Labs Raises $600M Toward AI Drug Design
  • Crunchbase News: Alphabet-Backed Isomorphic Labs Raises $600M For AI Drug Development
  • Maginative: Isomorphic Labs Secures $600M to Accelerate AI-Powered Drug Discovery
  • SiliconANGLE: Alphabet spinout Isomorphic Labs raises $600M for its AI drug design engine
  • Silicon Canals: London-based Isomorphic Labs, an AI-focused drug design and development company, has raised $600M (nearly €555.44M) in its first external funding round.
  • GZERO Media: Meet Isomorphic Labs, the Google spinoff that aims to cure you
  • Maginative: AI-first drug design company Isomorphic Labs has raised $600 million in its first external funding round, led by Thrive Capital, to advance its AI drug design engine and therapeutic programs.
  • eWEEK: AI drug design and development company Isomorphic Labs announced on Monday that it raised $600 million in its first external funding round. Thrive Capital led the financing round with participation from Google’s GV, and follow-on capital from Alphabet, its parent company and an existing investor. Isomorphic Labs described plans to use the funds to drive