Chris McKay@Maginative
//
Google's AI research notebook, NotebookLM, has introduced a significant upgrade that enhances collaboration by allowing users to publicly share their AI-powered notebooks with a simple link. This new feature, called Public Notebooks, enables users to share their research summaries and audio overviews generated by AI with anyone, without requiring sign-in or permissions. This move aims to transform NotebookLM from a personal research tool into an interactive, AI-powered knowledge hub, facilitating easier distribution of study guides, project briefs, and more.
The public sharing feature lets viewers interact with AI-generated content such as FAQs and overviews, and ask questions in chat. However, they cannot edit the original sources, preserving ownership while enabling discovery. To share a notebook, users click the "Share" button, switch the setting to "Anyone with the link," and copy the link. The process mirrors sharing a Google Doc, making it intuitive and accessible.

This upgrade is particularly beneficial for educators, startups, and nonprofits. Teachers can share curated curriculum summaries, startups can distribute product manuals, and nonprofits can publish donor briefing documents without building a dedicated website. By making AI-generated notes and audio overviews easier to share, Google is demonstrating how generative AI can be integrated into everyday productivity workflows, making NotebookLM a more grounded tool for making sense of complex material.
References: TechHQ (techhq.com), futurumgroup.com
//
Dell Technologies and NVIDIA are collaborating to construct the "Doudna" supercomputer for the U.S. Department of Energy (DOE). Named after Nobel laureate Jennifer Doudna, the system will be housed at the Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) and is slated for deployment in 2026. This supercomputer aims to revolutionize scientific research by merging artificial intelligence (AI) with simulation capabilities, empowering over 11,000 researchers across various disciplines, including fusion energy, astronomy, and life sciences. The project represents a significant federal investment in high-performance computing (HPC) infrastructure, designed to maintain U.S. leadership in AI and scientific discovery.
The Doudna supercomputer, also known as NERSC-10, promises a tenfold increase in scientific output compared to its predecessor, Perlmutter, while consuming only two to three times the power. This translates to a three-to-five-fold improvement in performance per watt, achieved through advanced chip design and system-level efficiencies. The system integrates high-performance CPUs with coherent GPUs, enabling direct data access and sharing across processors and addressing traditional bottlenecks in scientific computing workflows; such coherent memory access between heterogeneous processors is a requirement for modern AI-accelerated scientific workflows. Doudna will also be connected to DOE experimental and observational facilities through the Energy Sciences Network (ESnet), facilitating seamless data streaming and near real-time analysis.

According to NVIDIA CEO Jensen Huang, Doudna is designed to accelerate scientific workflows and act as a "time machine for science," compressing years of discovery into days. Energy Secretary Chris Wright sees it as essential infrastructure for maintaining American technological leadership in AI and quantum computing. The NVIDIA Vera Rubin architecture underlying the system introduces hardware-level optimizations designed specifically for the convergence of simulation, machine learning, and quantum algorithm development.
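The performance-per-watt claim follows directly from the stated figures. A quick sanity check, with everything normalized to Perlmutter = 1 (illustrative arithmetic only):

```python
# Sanity check of the stated efficiency gain:
# ~10x the scientific output of Perlmutter at 2-3x the power.
output_gain = 10.0                  # relative scientific output vs. Perlmutter
power_low, power_high = 2.0, 3.0    # relative power draw vs. Perlmutter

perf_per_watt_best = output_gain / power_low    # 5.0x
perf_per_watt_worst = output_gain / power_high  # ~3.3x

print(f"performance per watt: {perf_per_watt_worst:.1f}x to {perf_per_watt_best:.1f}x")
# → performance per watt: 3.3x to 5.0x
```

The result matches the article's three-to-five-fold figure.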
References: blogs.nvidia.com
//
NVIDIA is significantly expanding its presence in the AI ecosystem through strategic partnerships and the introduction of innovative technologies. At Computex 2025, CEO Jensen Huang unveiled NVLink Fusion, a groundbreaking program that opens NVIDIA's high-speed NVLink interconnect technology to non-NVIDIA CPUs and accelerators. This move is poised to solidify NVIDIA's role as a central component in AI infrastructure, even in systems utilizing silicon from other vendors, including MediaTek, Marvell, Fujitsu, and Qualcomm. This initiative represents a major shift from NVIDIA's previously exclusive use of NVLink and is intended to enable the creation of semi-custom AI infrastructures tailored to specific needs.
This strategy ensures that while customers may incorporate rival chips, the underlying AI ecosystem remains firmly rooted in NVIDIA's technologies, including its GPUs, interconnects, and software stack. NVIDIA is also teaming up with Foxconn to construct an AI supercomputer in Taiwan, further demonstrating its commitment to advancing AI capabilities in the region. The collaboration will see Foxconn subsidiary Big Innovation Company delivering the infrastructure for 10,000 NVIDIA Blackwell GPUs. This substantial investment aims to empower Taiwanese organizations by providing the AI cloud computing resources needed to drive the adoption of AI across both private and public sectors.

In addition to hardware advancements, NVIDIA is investing in quantum computing research. Taiwan's National Center for High-Performance Computing (NCHC) is deploying a new NVIDIA-powered AI supercomputer designed to support climate science, quantum research, and the development of large language models. Built by ASUS, this supercomputer will feature NVIDIA HGX H200 systems with over 1,700 GPUs, along with other advanced NVIDIA technologies. The initiative aligns with NVIDIA's broader strategy to drive breakthroughs in sovereign AI, quantum computing, and advanced scientific computation, positioning Taiwan as a key hub for AI development and technological autonomy.
References: Dr. Thad @ The Official Google Blog
//
Google has introduced DolphinGemma, a new AI model designed to decipher dolphin communication. Developed in collaboration with the Wild Dolphin Project (WDP) and researchers at Georgia Tech, DolphinGemma aims to analyze and generate dolphin vocalizations, potentially paving the way for interspecies communication. For decades, scientists have attempted to understand the complex whistles and clicks dolphins use. With DolphinGemma, researchers hope to decode these sounds and gain insights into the structure and patterns of dolphin communication. The ultimate goal is to determine if dolphins possess a language and eventually, potentially communicate with them.
The foundation for DolphinGemma lies in the WDP's extensive archive of recorded dolphin sounds. The WDP has studied a specific community of Atlantic spotted dolphins since 1985 using a non-invasive approach, creating video and audio recordings of the dolphins along with correlated notes on their behavior. DolphinGemma uses Google's SoundStream tokenizer to turn audio into discrete tokens, and by analyzing this massive dataset it can identify patterns and uncover potential meanings within the dolphins' natural communication, work that previously required immense human effort.

Field testing of DolphinGemma is scheduled to begin this summer. During field research, sounds can be recorded on Pixel phones and analyzed with DolphinGemma. The model can also predict the subsequent sounds a dolphin may make, much like large language models predict the next word or token in a sentence. While understanding dolphin communication is the initial focus, the long-term vision includes establishing a shared vocabulary for interactive communication, using synthetic sounds that dolphins could learn, akin to teaching them a new language. The WDP is also working with the Georgia Institute of Technology to teach dolphins a simple shared vocabulary using an underwater computer system called CHAT.
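The comparison to next-token prediction can be made concrete with a toy model. This is a deliberately tiny sketch of the idea, not DolphinGemma itself: the token IDs below are invented, and the real system uses Google's SoundStream tokenizer feeding a large learned model rather than bigram counts.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction over discretized audio.
# The token IDs are hypothetical stand-ins for tokenized dolphin sounds.
sequence = [3, 7, 7, 2, 3, 7, 2, 3, 7, 7, 2]

# Count which token tends to follow which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent successor of `token` in the training data."""
    return follows[token].most_common(1)[0][0]

print(predict_next(3))  # → 7 (token 3 is always followed by 7 here)
```

A real model replaces the counts with a neural network, but the interface is the same: given the sounds so far, score the likely next sound.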
References: analyticsvidhya.com
//
Google Cloud Next '25 saw a major push into agentic AI with the unveiling of several key technologies and initiatives aimed at fostering the development and interoperability of AI agents. Google announced the Agent Development Kit (ADK), an open-source framework designed to simplify the creation and management of AI agents. The ADK, written in Python, allows developers to build both simple and complex multi-agent systems. Complementing the ADK is Agent Garden, a collection of pre-built agent patterns and components to accelerate development. Additionally, Google introduced Agent Engine, a fully managed runtime in Vertex AI, enabling secure and reliable deployment of custom agents at a global scale.
Google is also addressing the challenge of AI agent interoperability with the Agent2Agent (A2A) protocol, an open standard intended to give AI agents a common language for communicating regardless of the frameworks or vendors used to build them. The protocol allows agents to collaborate and share information securely, streamlining workflows and reducing integration costs. The A2A initiative has garnered support from over 50 industry leaders, including Atlassian, Box, Cohere, Intuit, and Salesforce, signaling a collaborative effort to advance multi-agent systems.

These advancements are integrated within Vertex AI, Google's comprehensive platform for managing models, data, and agents. Enhancements to Vertex AI include support for the Model Context Protocol (MCP) to ensure secure data connections for agents. On the hardware side, Google unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, designed to optimize AI inferencing; it offers significantly increased compute capacity and high-bandwidth memory, further empowering AI applications within the Google Cloud ecosystem.
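The core idea behind A2A, one agent sending a structured, framework-neutral task request to another, can be sketched in plain Python. Note that every field name below is invented for illustration; the published A2A specification defines its own message schema, which this sketch does not attempt to reproduce.

```python
import json

# Illustrative sketch of framework-neutral agent-to-agent messaging.
# Field names are hypothetical, not the published A2A schema.
def make_task_request(sender, recipient, instruction, task_id):
    """Build a structured task request one agent could send to another."""
    return {
        "protocol": "a2a-sketch/0.1",   # hypothetical version tag
        "task_id": task_id,
        "from_agent": sender,
        "to_agent": recipient,
        "instruction": instruction,
    }

request = make_task_request(
    sender="research-agent",
    recipient="summarizer-agent",
    instruction="Summarize the attached findings in three bullets.",
    task_id="task-001",
)
print(json.dumps(request, indent=2))
```

Because the payload is plain JSON rather than a framework-specific object, a receiving agent built on a different stack can parse and act on it, which is the interoperability point the protocol is after.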
References: Ken Yeung
//
Google has launched a new feature called "Discover Sources" for NotebookLM, its AI-powered tool designed to organize and analyze information. Rolling out to all users starting April 2, 2025, the new feature automatically curates relevant websites on a specified topic, recommending up to ten sources accompanied by AI-generated summaries. This enhancement streamlines research by allowing users to quickly surface relevant content from the internet.
NotebookLM, initially launched in 2023 as an AI-powered alternative to Evernote and Microsoft OneNote, previously relied on manual uploads of documents, articles, and notes. "Discover Sources" automates the process of pulling in information from the internet with a single click. The curated sources remain accessible within NotebookLM notebooks, allowing users to leverage them within Briefing Docs, FAQs, and Audio Overviews without repeatedly scouring the internet. This enhancement highlights the growing trend of AI-driven research tools shaping how we work and learn.
References: Ken Yeung
//
Google's NotebookLM has been enhanced with a new "Discover Sources" feature designed to streamline research and note-taking by letting users search the web directly within their notebooks. Describe the topic you are looking for, and NotebookLM will search for and summarize the most relevant sources; you can then add them to your notebook with a single click, eliminating the need to switch between browser tabs or manually upload sources.
This new tool recommends up to ten relevant web sources, each accompanied by an annotated summary explaining its importance to the topic. The feature is now available to all NotebookLM users, although the full rollout may take up to a week. "Discover Sources" leverages AI to surface relevant websites and automate the process of pulling information from the internet, and users can generate Briefing Docs, FAQs, and Audio Overviews from these new sources.
References: Jonathan Kemper @ THE DECODER
//
References: Analytics India Magazine, THE DECODER
Meta is developing MoCha (Movie Character Animator), an AI system designed to generate complete character animations. MoCha takes natural language prompts describing the character, scene, and actions, along with a speech audio clip, and outputs a cinematic-quality video. This end-to-end model synchronizes speech with facial movements, generates full-body gestures, and maintains character consistency, even managing turn-based dialogue between multiple speakers. The system introduces a "Speech-Video Window Attention" mechanism to solve challenges in AI video generation, improving lip sync accuracy by limiting each frame's access to a specific window of audio data and adding tokens to create smoother transitions.
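The windowing idea can be illustrated with a simple attention mask: each video frame is allowed to attend only to audio tokens near its own position on the audio timeline. This is a simplified sketch of the general technique; MoCha's actual window sizes, token layout, and transition tokens are not reproduced here.

```python
def window_attention_mask(n_frames, n_audio, window=2):
    """Row i is True only for audio tokens within +/- `window` of
    frame i's (scaled) position on the audio-token timeline."""
    mask = []
    for i in range(n_frames):
        # Map the frame index onto the audio-token timeline.
        center = round(i * (n_audio - 1) / max(n_frames - 1, 1))
        row = [center - window <= j <= center + window for j in range(n_audio)]
        mask.append(row)
    return mask

mask = window_attention_mask(n_frames=4, n_audio=8, window=1)
for row in mask:
    print("".join("#" if allowed else "." for allowed in row))
# → ##......
#   .###....
#   ....###.
#   ......##
```

Zeroing out attention outside the window forces the model to align lip movements with the audio actually being spoken at that moment, rather than smearing information across the whole clip.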
MoCha runs on a diffusion transformer model with 30 billion parameters and produces HD video clips around five seconds long at 24 frames per second. For scenes with multiple characters, the team developed a streamlined prompt system that lets users define characters once and reference them with simple tags throughout different scenes.

Separately, Meta's AI research head, Joelle Pineau, announced her resignation, effective at the end of May, vacating a high-profile position amid intense competition in AI development.
References: Maria Deutscher @ SiliconANGLE
//
Isomorphic Labs, an Alphabet spinout focused on AI-driven drug design, has secured $600 million in its first external funding round. The investment, led by Thrive Capital with participation from Alphabet and GV, will fuel the advancement of Isomorphic Labs' AI drug design engine and therapeutic programs. The company aims to leverage artificial intelligence, including its AlphaFold technology, to revolutionize drug discovery across various therapeutic areas, including oncology and immunology. This funding is expected to accelerate research and development efforts, as well as facilitate the expansion of Isomorphic Labs' team with top-tier talent.
Isomorphic Labs, founded in 2021 by Sir Demis Hassabis, seeks to reimagine and accelerate drug discovery by applying AI. Its AI-powered engine streamlines the design of small molecules with therapeutic applications and can predict the effectiveness of a small molecule's attachment to a protein. The company's software also eases other aspects of the drug development workflow. Isomorphic Labs has already established collaborations with pharmaceutical companies like Eli Lilly and Novartis, and the new funding will support the progression of its own drug programs into clinical development.