News from the AI & ML world

DeeperML - #googleai

@www.marktechpost.com //
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.

The AI model was trained using two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. Internal testing demonstrated the algorithm's ability to accurately predict the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios, extending forecast capabilities up to 15 days.

This breakthrough has already seen adoption by the U.S. National Hurricane Center, which is now using these experimental AI predictions alongside traditional forecasting models in its operational workflow. The model's ability to forecast up to 15 days in advance marks a significant improvement over current models, which typically provide 3-5 day forecasts. The Weather Lab site also hosts two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency services better anticipate a cyclone’s path and intensity.

Recommended read:
References :
  • siliconangle.com: Google LLC today detailed an artificial intelligence model that can forecast the path and intensity of tropical cyclones days in advance.
  • AI News | VentureBeat: Google DeepMind just changed hurricane forecasting forever with new AI model
  • MarkTechPost: Google AI Unveils a Hybrid AI-Physics Model for Accurate Regional Climate Risk Forecasts with Better Uncertainty Assessment
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It
  • SiliconANGLE: Google develops AI model for forecasting tropical cyclones. According to the company, the algorithm was developed through a collaboration between its Google Research and DeepMind units. It’s available through a newly launched website called Weather Lab.
  • The Official Google Blog: Weather Lab is an interactive website for sharing Google’s AI weather models.
  • www.engadget.com: Google DeepMind is sharing its AI forecasts with the National Weather Service
  • www.producthunt.com: Predicting cyclone paths & intensity 15 days ahead
  • the-decoder.com: Google Deepmind launches Weather Lab to test AI models for tropical cyclone forecasting
  • AIwire: Google DeepMind Launches Interactive AI That Lets You Explore Storm Forecasts
  • www.aiwire.net: Google DeepMind and Google Research are launching Weather Lab - a new AI-driven platform designed specifically to improve forecasts for tropical cyclone formation, intensity, and trajectory.

Michael Nuñez@AI News | VentureBeat //
Google has recently rolled out its latest Gemini 2.5 Flash and Pro models on Vertex AI, bringing advanced AI capabilities to enterprises. The release includes the general availability of Gemini 2.5 Flash and Pro, along with a new Flash-Lite model available for testing. These updates aim to provide organizations with the tools needed to build sophisticated and efficient AI solutions.

The Gemini 2.5 Flash model is designed for speed and efficiency, making it suitable for tasks such as large-scale summarization, responsive chat applications, and data extraction. Gemini 2.5 Pro handles complex reasoning, advanced code generation, and multimodal understanding. Additionally, the new Flash-Lite model offers cost-efficient performance for high-volume tasks. These models are now production-ready within Vertex AI, offering the stability and scalability needed for mission-critical applications.
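As a rough illustration, the tiering above maps to a simple routing rule. The model identifiers below follow Google's public naming; the task categories and routing logic are this article's framing, not an official API:

```python
# Hypothetical routing helper: pick a Gemini 2.5 tier by task profile.
# Model IDs follow Google's public naming; the routing itself is illustrative.

MODEL_BY_TASK = {
    "summarization": "gemini-2.5-flash",        # speed/efficiency tier
    "chat": "gemini-2.5-flash",
    "data_extraction": "gemini-2.5-flash",
    "complex_reasoning": "gemini-2.5-pro",      # reasoning/code/multimodal tier
    "code_generation": "gemini-2.5-pro",
    "high_volume_classification": "gemini-2.5-flash-lite",  # cost-efficient tier
}

def pick_model(task: str) -> str:
    """Return the Gemini 2.5 model ID suited to a task, defaulting to Flash."""
    return MODEL_BY_TASK.get(task, "gemini-2.5-flash")

print(pick_model("code_generation"))  # gemini-2.5-pro
```

In practice the routing decision would hinge on latency and cost budgets as much as on task type, but the three-tier split gives a useful default.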

Google CEO Sundar Pichai has highlighted the improved performance of the Gemini 2.5 Pro update, particularly in coding, reasoning, science, and math. The update also incorporates feedback to improve the style and structure of responses. The company is also offering Supervised Fine-Tuning (SFT) for Gemini 2.5 Flash, enabling enterprises to tailor the model to their unique data and needs. A new updated Live API with native audio is also in public preview, designed to streamline the development of complex, real-time audio AI systems.

Recommended read:
References :
  • AI & Machine Learning: Gemini 2.5 model hardening.
  • deepmind.google: Advancing Gemini's security safeguards.
  • AI GPT Journal: How to Use Gemini Live for Work, Life, and Everything in Between
  • LearnAI: Gemini 2.5: Updates to our family of thinking models. Today we are excited to share updates across the board to our Gemini 2.5 model family.
  • AI News | VentureBeat: Google launches production-ready Gemini 2.5 Pro and Flash AI models for enterprises while introducing cost-efficient Flash-Lite to challenge OpenAI's market dominance.
  • www.laptopmag.com: Google brings Gemini's latest 2.5 Flash and Pro models to audiences, and makes Flash-Lite available for testing.
  • thezvi.substack.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506, because I mean at this point it has to be the companies intentionally fucking with us, right?
  • thezvi.wordpress.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506.
  • AI & Machine Learning: The momentum of the Gemini 2.5 era continues to build.
  • learn.aisingapore.org: This post summarizes the updates to Google Gemini 2.5 models, including the release of Pro and Flash models, and the availability of the Flash-Lite model.
  • TestingCatalog: Google Gemini 2.5 out of preview across AI Studio, Vertex AI, and Gemini app
  • www.techradar.com: Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
  • www.tomsguide.com: Google Gemini 2.5 models are now generally available for developers and researchers.

Unknown (noreply@blogger.com)@Google Workspace Updates //
Google is significantly expanding its AI capabilities across its platforms, impacting software development, advertising, and search functionalities. A major upgrade to Gemini Code Assist now features the Gemini 2.5 Pro model. This enhances the code assistance tool with a 1 million-token context window and custom commands, improving its capacity for more comprehensive code generation, deeper refactoring, and more thorough pull-request reviews. This upgrade allows developers to tailor the assistant's behavior to adhere to internal conventions and reuse prompts, significantly boosting coding task completion rates. Internal studies indicate that teams using the tool are 2.5 times more likely to finish typical coding tasks, with early community benchmarks showing higher accuracy than Copilot on context-heavy queries.

Google is also innovating in the realm of search and content delivery, testing a feature that transforms some search results into AI-generated podcasts. This experimental feature, part of Google Labs, aims to provide users with "quick, conversational audio overviews" of search queries. The feature leverages Google’s Gemini model to research search queries, analyze various third-party websites, and generate a transcript that is then read aloud by two AI-generated hosts. While this new feature offers a convenient way for users to absorb information while multitasking, it raises concerns about potentially diverting traffic away from the original sources of information.

In a groundbreaking move, an AI-generated advertisement created with Google's Veo3 aired during the NBA Finals. This marks a significant milestone in AI-driven content creation, demonstrating the potential for drastic cost reductions in advertising production. The advertisement for the event-betting platform Kalshi was created by AI Filmmaker PJ Accetturo in just three days, resulting in an estimated 95% cost reduction compared to traditional commercial production methods. This showcases a shift towards smaller, more agile creative teams leveraging AI to produce high-volume, brand-relevant content quickly and affordably. At the same time, it highlights the enduring importance of human skills such as comedy writing and directorial experience in the age of AI.

Recommended read:
References :
  • PCMag Middle East ai: Just Press Play: Google Is Turning Some Search Results Into AI Podcasts
  • www.marktechpost.com: AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%
  • TestingCatalog: Gemini Code Assist gets latest Gemini 2.5 Pro with context management and rules
  • MarkTechPost: AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%
  • www.techradar.com: Discusses the forthcoming Audio Overviews in Google Search, which leverage AI to generate podcast-style explainers of web results.
  • Google DeepMind Blog: Explore the latest Gemini 2.5 model updates with enhanced performance and accuracy: Gemini 2.5 Pro now stable, Flash generally available, and the new Flash-Lite in preview.
  • TestingCatalog: Discover the latest in AI with Google Gemini 2.5's stable release, featuring Pro and Flash models.
  • AI & Machine Learning: Gemini momentum continues with launch of 2.5 Flash-Lite and general availability of 2.5 Flash and Pro on Vertex AI
  • LearnAI: Today we are excited to share updates across the board to our Gemini 2.5 model family: Gemini 2.5 Pro is generally available and stable (no changes from the 06-05 preview) Gemini 2.5 Flash is generally available and stable (no changes from the 05-20 preview, see pricing updates below) Gemini 2.5 Flash-Lite is now available in...
  • www.laptopmag.com: Google brings Gemini's latest 2.5 Flash and Pro models to audiences, and makes Flash-Lite available for testing.
  • thezvi.wordpress.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506, because I mean at this point it has to be the companies intentionally fucking with us, right?
  • thezvi.substack.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506.
  • learn.aisingapore.org: Gemini 2.5 model family expands
  • www.techradar.com: Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
  • learn.aisingapore.org: Today we are excited to share updates across the board to our Gemini 2.5 model family: Gemini 2.5 Pro is generally available and stable (no changes from the 06-05 preview) Gemini 2.5 Flash is generally available and stable (no changes from the 05-20 preview, see pricing updates below) Gemini 2.5 Flash-Lite is now available in...
  • global.techradar.com: Google lancia Gemini Flash-Lite 2.5: il modello IA più veloce di sempre
  • Simon Willison: Blogged too much today and had to send it all out in a newsletter - it's a pretty fun one, covering Gemini 2.5 and Mistral Small 3.2 and the fact that most LLMs will absolutely try and murder you given the chance (and a suitably contrived scenario)

Sana Hassan@MarkTechPost //
Google has recently unveiled significant advancements in artificial intelligence, showcasing its continued leadership in the tech sector. One notable development is an AI model designed for forecasting tropical cyclones. This model, developed through a collaboration between Google Research and DeepMind, is available via the newly launched Weather Lab website. It can predict the path and intensity of hurricanes up to 15 days in advance. The AI system learns from decades of historical storm data, reconstructing past weather conditions from millions of observations and utilizing a specialized database containing key information about storm tracks and intensity.

The tech giant's Weather Lab marks the first time the National Hurricane Center will use experimental AI predictions in its official forecasting workflow. The announcement comes at an opportune time, coinciding with forecasters predicting an above-average Atlantic hurricane season in 2025. The AI model can generate 50 different hurricane scenarios, offering a more comprehensive range of possible outcomes than current models, whose forecasts typically extend only 3-5 days. The AI has achieved a 1.5-day improvement in prediction accuracy, equivalent to about a decade's worth of traditional forecasting progress.

Furthermore, Google is experiencing exponential growth in AI usage. Google DeepMind noted that Google's AI usage grew 50 times in one year, reaching 500 trillion tokens per month. Logan Kilpatrick from Google DeepMind discussed Google's transformation from a "sleeping giant" to an AI powerhouse, citing superior compute infrastructure, advanced models like Gemini 2.5 Pro, and a deep talent pool in AI research.

Recommended read:
References :
  • siliconangle.com: Google develops AI model for forecasting tropical cyclones
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It

@www.analyticsvidhya.com //
Google has made significant strides in the realm of artificial intelligence, introducing several key advancements. One notable development is the launch of Edge Gallery, an application designed to enable the execution of Large Language Models (LLMs) directly on smartphones. This groundbreaking app eliminates the need for cloud dependency, offering users free access to powerful AI processing capabilities on their personal devices. By shifting processing to the edge, Edge Gallery empowers users with greater control over their data and ensures enhanced privacy and security.

The company has also quietly upgraded Gemini 2.5 Pro, Google's flagship LLM, to boost its performance across coding, reasoning, and response quality. This upgrade addresses prior feedback regarding the model's tone, resulting in more structured and creative outputs. In addition to enhancing its core AI models, Google is expanding access to Project Mariner, an AI-driven browser assistant, to more Ultra subscribers. Project Mariner is designed to interact with the user’s active Chrome tabs via a dedicated extension, enabling it to query and manipulate information from any open webpage.

Furthermore, Google has introduced an open-source full-stack AI agent stack powered by Gemini 2.5 and LangGraph, designed for multi-step web search, reflection, and synthesis. This research agent stack allows AI agents to perform autonomous web searches, validate results, and refine responses, effectively mimicking a human research assistant. This initiative underscores Google's commitment to fostering innovation and collaboration within the AI community by making advanced tools and resources freely available.
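The search–reflection–synthesis loop described above can be sketched schematically. The snippet below is plain Python mimicking the control flow, not the actual LangGraph or Gemini APIs; the node names, state shape, and stopping rule are illustrative assumptions:

```python
# Schematic of a search -> reflect -> synthesize agent loop (plain Python;
# not the real LangGraph/Gemini APIs — names and logic are illustrative).

def search(state):
    # In the real stack this would issue a web search via the Gemini tool layer.
    state["results"].append(f"result for: {state['query']}")
    return state

def reflect(state):
    # Decide whether the gathered evidence suffices or a follow-up is needed.
    state["done"] = len(state["results"]) >= state["max_searches"]
    if not state["done"]:
        state["query"] = f"follow-up on {state['query']}"
    return state

def synthesize(state):
    # Combine the validated results into a final answer.
    state["answer"] = " | ".join(state["results"])
    return state

def run_agent(query, max_searches=2):
    state = {"query": query, "results": [], "max_searches": max_searches, "done": False}
    while not state["done"]:  # LangGraph would express this as a conditional edge
        state = search(state)
        state = reflect(state)
    return synthesize(state)

final = run_agent("tropical cyclone AI forecasting")
print(len(final["results"]))  # 2
```

In the actual stack, LangGraph models each step as a graph node and the reflect-or-finish decision as a conditional edge, which is what lets the agent loop back for follow-up searches autonomously.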

Recommended read:
References :
  • Analytics Vidhya: Run LLMs Locally for Free Using Google’s Latest App!
  • Maginative: Google Just Quietly Upgraded Gemini 2.5 Pro
  • www.marktechpost.com: Google Introduces Open-Source Full-Stack AI Agent Stack Using Gemini 2.5 and LangGraph for Multi-Step Web Search, Reflection, and Synthesis
  • TestingCatalog: Google expands Project Mariner access to more Ultra subscribers

Alexey Shabanov@TestingCatalog //
Google is aggressively enhancing its Gemini platform with a suite of new features, including the integration of Imagen 4 for improved image generation, expanded Canvas capabilities, and a dedicated Enterprise mode. The Enterprise mode introduces a toggle to separate professional and personal workflows, providing business users with clearer boundaries and better data governance. Gemini is also gaining the ability to generate content from uploaded images, indicating a more creator-focused approach to multimodal generation. These additions aim to make Gemini a more comprehensive and versatile workspace for generative AI tasks.

Gemini's Canvas, a workspace for organizing and presenting ideas, is also receiving a significant upgrade. Users will soon be able to auto-generate infographics, timelines, mindmaps, full presentations, and even web pages directly within the platform. One particularly notable feature in development is the ability for users to describe their applications, prompting Gemini to automatically build UI visualizations for the underlying data. These updates demonstrate Google's strategy of bundling a broad set of creative tools for both individuals and organizations, continuously iterating on functionality to stay competitive.

The new Gemini 2.5 Pro model is out; the company claims it is superior in coding and math, and the model is accessible via Google AI Studio and Vertex AI. Google claims the Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance, with the new version improving by 24 points in LMArena and by 35 points in WebDevArena, where it currently tops the leaderboard. The model is priced at $1.25 per million input tokens (without caching) and $10 per million output tokens, and it shows improved performance across key benchmarks in coding, reasoning, science, and math.
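At those rates, per-request cost is straightforward to estimate. The token counts in this sketch are made-up examples, not figures from the announcement:

```python
# Cost estimate at the quoted Gemini 2.5 Pro rates (no caching):
# $1.25 per million input tokens, $10 per million output tokens.

INPUT_PRICE_PER_M = 1.25
OUTPUT_PRICE_PER_M = 10.0

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the quoted per-million-token rates."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Example: a 20,000-token prompt producing a 2,000-token answer
# costs 0.025 + 0.020 = $0.045.
print(round(request_cost_usd(20_000, 2_000), 3))  # 0.045
```

The 8x gap between output and input pricing is why long prompts with short answers cost far less than the reverse.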

Recommended read:
References :
  • TestingCatalog: Google to bring Canvas upgrades, image-to-video and Enterprise mode to Gemini
  • siliconangle.com: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • the-decoder.com: Google rolls out new features for AI Mode and Gemini app

Emilia David@AI News | VentureBeat //
Google's Gemini 2.5 Pro is making waves in the AI landscape, with claims of superior coding performance compared to leading models like DeepSeek R1 and Grok 3 Beta. The updated Gemini 2.5 Pro, currently in preview, is touted to deliver faster and more creative responses, particularly in coding and reasoning tasks. Google highlighted improvements across key benchmarks such as AIDER Polyglot, GPQA, and HLE, noting a significant Elo score jump since the previous version. This newest iteration, referred to as Gemini 2.5 Pro Preview 06-05, builds upon the I/O edition released earlier in May, promising even better performance and enterprise-scale capabilities.

Google is also planning several enhancements to the Gemini platform. These include upgrades to Canvas, Gemini’s workspace for organizing and presenting ideas, adding the ability to auto-generate infographics, timelines, mindmaps, full presentations, and web pages. There are also plans to integrate Imagen 4, which enhances image generation capabilities, image-to-video functionality, and an Enterprise mode, which offers a dedicated toggle to separate professional and personal workflows. This Enterprise mode aims to provide business users with clearer boundaries and improved data governance within the platform.

In addition to its coding prowess, Gemini 2.5 Pro boasts native audio capabilities, enabling developers to build richer and more interactive applications. Google emphasizes its proactive approach to safety and responsibility, embedding SynthID watermarking technology in all audio outputs to ensure transparency and identifiability of AI-generated audio. Developers can explore these native audio features through the Gemini API in Google AI Studio or Vertex AI, experimenting with audio dialog and controllable speech generation. Google DeepMind is also exploring ways for AI to take over mundane email chores, with CEO Demis Hassabis envisioning an AI assistant capable of sorting, organizing, and responding to emails in a user's own voice and style.

Recommended read:
References :
  • AI News | VentureBeat: Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance
  • learn.aisingapore.org: Gemini 2.5’s native audio capabilities
  • Kyle Wiggers: Google says its updated Gemini 2.5 Pro AI model is better at coding
  • www.techradar.com: Google upgrades Gemini 2.5 Pro's already formidable coding abilities
  • SiliconANGLE: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • siliconangle.com: SiliconAngle reports on Google's release of an updated Gemini 2.5 Pro model, highlighting its claimed superiority in coding and math.

Tulsee Doshi@The Official Google Blog //
Google has launched an upgraded preview of Gemini 2.5 Pro, touting it as their most intelligent model yet. Building upon the version revealed in May, this updated AI demonstrates significant improvements in coding capabilities. One striking example of its advanced functionality is its ability to generate intricate images, such as a "pretty solid pelican riding a bicycle."

The model's enhanced coding proficiency is further highlighted by its ethical safeguards. When prompted to run SnitchBench, a tool designed to test the ethical boundaries of AI models, Gemini 2.5 Pro notably "tipped off both the feds and the WSJ and NYTimes." This self-awareness and alert system underscore the advancements in AI safety protocols integrated into the new model.

The rapid development and release of Gemini 2.5 Pro reflect Google's increasing confidence in its AI technology. The company emphasizes that this iteration offers substantial improvements over its predecessors, solidifying its position as a leading AI model. Developers and enthusiasts alike are encouraged to try the latest Gemini 2.5 Pro before its general release to experience its improved capabilities firsthand.

Recommended read:
References :
  • Kyle Wiggers: Google says its updated Gemini 2.5 Pro AI model is better at coding
  • The Official Google Blog: We’re introducing an upgraded preview of Gemini 2.5 Pro, our most intelligent model yet. Building on the version we released in May and showed at I/O, this model will be…
  • THE DECODER: Google has rolled out another update to its flagship AI model, Gemini 2.5 Pro. The latest version brings modest improvements across a range of benchmarks and maintains top positions on tests like LMArena and WebDevArena The article appeared first on .
  • www.zdnet.com: The flagship model's rapid evolution reflects Google's growing confidence in its AI offerings.
  • bsky.app: New Gemini 2.5 Pro is out - gemini-2.5-pro-preview-06-05 It made me a pretty solid pelican riding a bicycle, AND it tipped off both the feds and the WSJ and NYTimes when I tried running SnitchBench against it https://simonwillison.net/2025/Jun/5/gemini-25-pro-preview-06-05/
  • Simon Willison: New Gemini 2.5 Pro is out - gemini-2.5-pro-preview-06-05 It made me a pretty solid pelican riding a bicycle, AND it tipped off both the feds and the WSJ and NYTimes when I tried running SnitchBench against it
  • AI News | VentureBeat: Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance
  • www.techradar.com: Google upgrades Gemini 2.5 Pro's already formidable coding abilities
  • SiliconANGLE: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • siliconangle.com: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • the-decoder.com: Google Rolls Out Modest Improvements to Gemini 2.5 Pro
  • www.marktechpost.com: Google Introduces Open-Source Full-Stack AI Agent Stack Using Gemini 2.5 and LangGraph for Multi-Step Web Search, Reflection, and Synthesis
  • Maginative: Maginative article about how Google quietly upgraded Gemini 2.5 Pro.
  • Stack Overflow Blog: Ryan and Ben welcome Tulsee Doshi and Logan Kilpatrick from Google's DeepMind to discuss the advanced capabilities of the new Gemini 2.5, the importance of feedback loops for model improvement and reducing hallucinations, the necessity of great data for advancements, and enhancing developer experience through tool integration.

@workspaceupdates.googleblog.com //
Google is significantly expanding the integration of its Gemini AI model across the Google Workspace suite. A key focus is enhancing Google Chat with AI-powered features designed to improve user efficiency and productivity. One notable addition is the ability for Gemini to provide summaries of unread conversations directly within the Chat home view. This feature, which initially launched last year, has been expanded to support four additional languages: French, Italian, Japanese, and Korean, making it more accessible to a global user base. Users can activate the "Summarize" button upon navigating to a conversation to receive a quick, bulleted synopsis of the message content, allowing for rapid review of recent activity and prioritization of important conversations.

The new "summaries in home" feature in Google Chat is aimed at streamlining the user experience and helping users find what they need faster. It leverages Gemini's ability to quickly process and condense information, giving users a concise overview of their active conversations. To access these summaries, users need to ensure that smart features and personalization are turned on in their Google Workspace settings; administrators can manage this in the Admin console, and individual users can do so through their personal settings. The rollout is gradual, with both Rapid Release and Scheduled Release domains seeing the feature within a 15-day window starting May 30, 2025.

Google is also exploring the potential of AI to revolutionize email management. Demis Hassabis, head of Google DeepMind, has expressed a desire to develop a "next-generation email" system that can intelligently sort through inboxes, respond to routine emails in a user's personal style, and automate simpler decisions. This initiative aims to alleviate the "tyranny of the email inbox" and free up users' time for more important tasks. Hassabis envisions an AI assistant that not only manages emails but also protects users' attention from other algorithms competing for their focus, ultimately serving the individual and enriching their life.

Recommended read:
References :
  • Google Workspace Updates: Preview summaries in the Google Chat home view with the help of Gemini in four additional languages
  • www.tomsguide.com: This article discusses the integration of Google's Gemini AI with Gmail and highlights a key discovery that surprises the author.
  • www.theguardian.com: Google working on AI email tool that can ‘answer in your style’

@cloud.google.com //
Google is pushing forward with AI integration across its cloud services and applications, introducing new capabilities to empower developers and enhance user experiences. A key development is the announcement of new MCP (Model Context Protocol) integrations to Google Cloud Databases. This aims to enable AI-assisted development by making it easier to connect databases to AI assistants within Integrated Development Environments (IDEs).

This new MCP integration leverages the MCP Toolbox for Databases, an open-source server that facilitates connections between generative AI agents and enterprise data. Supporting various databases including BigQuery, AlloyDB, Cloud SQL, Spanner, and even self-managed options like PostgreSQL and MySQL, the Toolbox allows developers to use MCP-compatible AI assistants to write code, design schemas, refactor code, generate testing data, and explore databases. Pre-built tools within the Toolbox streamline integration with Google Cloud databases directly from within preferred IDEs like VSCode, Claude Code, and Cursor.
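A minimal Toolbox configuration might look like the sketch below. The field names follow the Toolbox's documented `tools.yaml` shape as best understood here; all connection details, the tool name, and the SQL statement are placeholders:

```yaml
# Illustrative tools.yaml for the MCP Toolbox for Databases.
# Connection details and the SQL statement are placeholders.
sources:
  my-cloudsql-pg:
    kind: cloud-sql-postgres
    project: my-project        # placeholder
    region: us-central1
    instance: my-instance      # placeholder
    database: appdb
    user: toolbox-user
    password: ${DB_PASSWORD}   # supplied via environment

tools:
  list-recent-orders:
    kind: postgres-sql
    source: my-cloudsql-pg
    description: List the most recent orders for a customer.
    parameters:
      - name: customer_id
        type: string
        description: Customer to look up.
    statement: |
      SELECT id, total, created_at FROM orders
      WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 10;
```

Once the Toolbox server loads a file like this, an MCP-compatible assistant in the IDE can discover `list-recent-orders` as a callable tool without the developer wiring the database connection into the assistant itself.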

Furthering its AI integration efforts, Google has made NotebookLM shareable via a simple link. NotebookLM, an AI-powered research notebook, allows users to compile research, summarize PDFs, and generate FAQs from uploaded source material. This update enables users to share their AI-generated content, such as summaries and briefing documents, with others, fostering collaboration and knowledge sharing. Viewers can interact with the AI-generated content by asking questions, though they cannot modify the source material. This new feature enhances NotebookLM's utility for creating study guides, product documentation, and strategic frameworks in a semi-interactive format.

Recommended read:
References :
  • AI & Machine Learning: Announcing new MCP integrations to Google Cloud Databases to enable AI-assisted development
  • Maginative: Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link

Chris McKay@Maginative //
Google's AI research notebook, NotebookLM, has introduced a significant upgrade that enhances collaboration by allowing users to publicly share their AI-powered notebooks with a simple link. This new feature, called Public Notebooks, enables users to share their research summaries and audio overviews generated by AI with anyone, without requiring sign-in or permissions. This move aims to transform NotebookLM from a personal research tool into an interactive, AI-powered knowledge hub, facilitating easier distribution of study guides, project briefs, and more.

The public sharing feature provides viewers with the ability to interact with AI-generated content like FAQs and overviews, as well as ask questions in chat. However, they cannot edit the original sources, ensuring the preservation of ownership while enabling discovery. To share a notebook, users can click the "Share" button, switch the setting to "Anyone with the link," and copy the link. This streamlined process is similar to sharing Google Docs, making it intuitive and accessible for users.

This upgrade is particularly beneficial for educators, startups, and nonprofits. Teachers can share curated curriculum summaries, startups can distribute product manuals, and nonprofits can publish donor briefing documents without the need to build a dedicated website. By enabling easier sharing of AI-generated notes and audio overviews, Google is demonstrating how generative AI can be integrated into everyday productivity workflows, making NotebookLM a more grounded tool for sense-making of complex material.

Recommended read:
References :
  • Maginative: Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link
  • The Official Google Blog: NotebookLM is adding a new way to share your own notebooks publicly.
  • PCMag Middle East ai: Google Makes It Easier to Share Your NotebookLM Docs, AI Podcasts
  • AI & Machine Learning: How Alpian is redefining private banking for the digital age with gen AI
  • venturebeat.com: Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud
  • TestingCatalog: Google’s Kingfall model briefly goes live on AI Studio before lockdown
  • shellypalmer.com: NotebookLM, one of Google's most viral AI products, just got a really useful upgrade: users can now publicly share notebooks with a link.

Ashutosh Singh@The Tech Portal //
Google has launched AI Edge Gallery, an open-source platform aimed at developers who want to deploy AI models directly on Android devices. This new platform allows for on-device AI execution using tools like LiteRT and MediaPipe, supporting models from Hugging Face. With future support for iOS planned, AI Edge Gallery emphasizes data privacy and low latency by eliminating the need for cloud connectivity, making it ideal for industries that require local processing of sensitive data.

The AI Edge Gallery app, released under the Apache 2.0 license and hosted on GitHub, is currently in an experimental Alpha release. The app integrates Gemma 3 1B, a compact 529MB language model that can process up to 2,585 tokens per second on mobile GPUs, enabling tasks like text generation and image analysis to complete in under a second. By using Google’s AI Edge platform, developers can leverage tools like MediaPipe and TensorFlow Lite to optimize model performance on mobile devices. The company is actively seeking feedback from developers and users.

AI Edge Gallery contains categories like ‘AI Chat’ and ‘Ask Image’ to guide users to relevant tools, as well as a ‘Prompt Lab’ for testing and refining prompts. On-device AI processing ensures that complex AI tasks can be performed without transmitting data to external servers, reducing potential security risks and improving response times. While newer devices with high-performance chips can run models smoothly, older phones may experience lag. Google is also planning to launch the app on iOS soon.

Recommended read:
References :
  • The Tech Portal: Google rolls out ‘AI Edge Gallery’ app for Android that lets you run AI models locally on device
  • www.infoworld.com: Google’s AI Edge Gallery will let developers deploy offline AI models — here’s how it works
  • www.zdnet.com: This new Google app lets you use AI on your phone without the internet - here's how
  • developers.googleblog.com: The 529MB Gemma 3 1B model delivers up to 2,585 tokens per second on mobile GPUs, enabling sub-second tasks like text generation and image analysis.
  • venturebeat.com: Google quietly launched AI Edge Gallery, an experimental Android app that runs AI models offline without internet, bringing Hugging Face models directly to smartphones with enhanced privacy.

Ashutosh Singh@The Tech Portal //
Google has launched the 'AI Edge Gallery' app for Android, with plans to extend it to iOS soon. This innovative app enables users to run a variety of AI models locally on their devices, eliminating the need for an internet connection. The AI Edge Gallery integrates models from Hugging Face, a popular AI repository, allowing for on-device execution. This approach not only enhances privacy by keeping data on the device but also offers faster processing speeds and offline functionality, which is particularly useful in areas with limited connectivity.

The app uses Google’s AI Edge platform, which includes tools like MediaPipe and TensorFlow Lite, to optimize model performance on mobile devices. A key model utilized is Gemma 3 1B, a compact language model designed for mobile platforms that can process data rapidly. The AI Edge Gallery features an interface with categories like ‘AI Chat’ and ‘Ask Image’ to help users find the right tools. Additionally, a ‘Prompt Lab’ is available for users to experiment with and refine prompts.

Google is emphasizing that the AI Edge Gallery is currently an experimental Alpha release and is encouraging user feedback. The app is open-source under the Apache 2.0 license, allowing for free use, including for commercial purposes. However, the performance of the app may vary based on the device's hardware capabilities. While newer phones with advanced processors can run models smoothly, older devices might experience lag, particularly with larger models.

In related news, Google Cloud has introduced advancements to BigLake, its storage engine for building open data lakehouses on Google Cloud that are compatible with Apache Iceberg. These enhancements aim to eliminate the need to sacrifice open-format flexibility for high-performance, enterprise-grade storage management. The updates include:

  • Open interoperability across analytical and transactional systems: The BigLake metastore provides the foundation for interoperability, allowing access to all Cloud Storage and BigQuery storage data across multiple runtimes, including BigQuery, AlloyDB (preview), and open-source, Iceberg-compatible engines such as Spark and Flink.
  • New, high-performance, Iceberg-native Cloud Storage: Google is simplifying lakehouse management with automatic table maintenance (including compaction and garbage collection) and integration with Google Cloud Storage management tools, including auto-class tiering and encryption.
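As a sketch of what that interoperability can look like from an open-source engine, the fragment below registers a BigLake-backed Iceberg catalog in Spark. The `BigLakeCatalog` class and property keys follow the Apache Iceberg GCP module's conventions, but the project, location, catalog, and bucket names are placeholders, and the exact keys should be verified against current BigLake documentation.

```shell
# Hypothetical spark-sql invocation attaching a BigLake-backed Iceberg
# catalog (all names are placeholders; verify property keys against
# the Iceberg GCP module and BigLake metastore docs).
spark-sql \
  --conf spark.sql.catalog.blms=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.blms.catalog-impl=org.apache.iceberg.gcp.biglake.BigLakeCatalog \
  --conf spark.sql.catalog.blms.gcp_project=my-project \
  --conf spark.sql.catalog.blms.gcp_location=us-central1 \
  --conf spark.sql.catalog.blms.blms_catalog=iceberg_catalog \
  --conf spark.sql.catalog.blms.warehouse=gs://my-bucket/warehouse
```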

Recommended read:
References :
  • Data Analytics: BigLake evolved: Build open, high-performance, enterprise Iceberg-native lakehouses
  • The Tech Portal: Google rolls out ‘AI Edge Gallery’ app for Android that lets you run AI models locally on device
  • www.infoworld.com: Google Cloud’s BigLake-driven lakehouse updates aim to optimize performance, costs
  • TechCrunch: Last week, Google quietly released an app that lets users run a range of openly available AI models from the AI dev platform Hugging Face on their phones.
  • Neowin: Google's new AI Edge Gallery app brings offline AI to your Android (and soon iOS) device
  • www.infoworld.com: Google has launched AI Edge Gallery, an open-source platform that enables developers to run advanced AI models directly on Android devices, with iOS support planned for a future release.
  • www.zdnet.com: This new Google app lets you use AI on your phone without the internet - here's how
  • Techzine Global: New Google app runs AI offline on smartphones
  • venturebeat.com: Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud
  • Dataconomy: Google released the Google AI Edge Gallery app last week, enabling users to download and run AI models from Hugging Face on their phones.

Ellie Ramirez-Camara@Data Phoenix //
Google has recently unveiled a suite of advancements in its AI media generation models at Google I/O 2025, signaling a major leap forward in the field. The highlights include the launch of Veo 3, the first video generation model from Google with integrated audio capabilities, alongside Imagen 4, and Flow, an AI filmmaking tool. These new tools and upgrades to Veo 2 are designed to provide creators with enhanced realism, emotional nuance, and coherence in AI-generated content. These upgrades are designed to target professional markets and are available to Ultra subscribers via the Gemini app and Flow platform.

The most notable announcement was Veo 3, which allows users to generate videos with synchronized audio, including ambient sounds, dialogue, and environmental noise. This model understands complex prompts, enabling users to create short stories brought to life with realistic physics and accurate lip-syncing. Veo 2 also received significant updates, including the ability to use images as references for character and scene consistency, precise camera controls, outpainting capabilities, and object manipulation tools. These enhanced features for Veo 2 are aimed at providing filmmakers with greater creative control.

Also introduced was Flow, an AI-powered video creation tool that integrates the Veo, Imagen, and Gemini models into a comprehensive platform. Flow allows creators to manage story elements such as cast, locations, objects, and styles in one interface, enabling them to combine reference media with natural language narratives to generate scenes. Google also introduced "AI Mode" in Google Search and Jules, a powerful new asynchronous coding agent. These advancements are part of Google's broader effort to lead in AI innovation, targeting professional markets with sophisticated tools that simplify the creation of high-quality media content.

Recommended read:
References :
  • Data Phoenix: Google announced several updates across its media generation models
  • pub.towardsai.net: TAI #154: Gemini Deep Think, Veo 3’s Audio Breakthrough, & Claude 4’s Blackmail Drama
  • Ars OpenForum: Google's Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.
  • hothardware.com: Google I/O was about a week ago, and if you haven't heard, one of Google's biggest announcements was the company's Veo 3 generative AI model for video.
  • The Tech Basic: Google Veo 3 is a new tool that makes eight-second video clips at 720p resolution with matching sound effects and spoken words. It takes a text description or a still image and turns it into moving pictures.
  • THE DECODER: Google says Veo 3 users have generated millions of AI videos in just a few days

Tripty@techvro.com //
Google has begun rolling out automatic email summarization powered by its Gemini AI model within the Gmail mobile app. This new feature aims to streamline the process of reviewing lengthy email threads by providing a concise summary at the top of the message content, without requiring manual activation. The Gemini-generated summaries are designed to help users quickly grasp the main points of an email thread, especially when dealing with complex or multi-reply conversations. This initiative reflects Google's broader strategy to integrate AI more seamlessly across its Workspace applications to enhance user productivity and efficiency.

The automatic summarization feature is currently available for English-language emails on Android and iOS devices, specifically for Google Workspace Business and Enterprise users, as well as Google One AI Premium subscribers. As new replies are added to an email thread, the summaries are dynamically updated to reflect the latest information. Users who prefer manual control can collapse the summary cards if they find them unhelpful, and they can still use the "Summarize this email" button for messages where the automatic feature isn't triggered. This rollout follows Google's push to embed Gemini across its products.

Google emphasizes its commitment to user data protection and privacy with this AI integration. To receive summaries, users need smart features and personalization turned on in Gmail, Chat, and Meet, as well as smart features in Google Workspace. The capability has been generally available since May 29, 2025. While it is currently limited to mobile devices, Google may expand the feature to desktop users in the future, and has indicated that it plans to add support for more languages at a later date.

Recommended read:
References :
  • Google Workspace Updates: New Gemini summary cards now available in the Gmail app on Android and iOS devices
  • thetechbasic.com: No More Reading Long Emails? Google’s New Gemini Feature
  • Maginative: Gmail's Gemini Summaries Now Appear Automatically on Mobile
  • The Tech Basic: Gmail users on their phones may see a different experience now. From now on, Google’s AI assistant called Gemini will create summaries for long emails.
  • techvro.com: Google is rolling out automatic AI summaries in Gmail for mobile users, helping summarize long email threads and save time on Android and iOS.
  • www.zdnet.com: You no longer have to manually start Gemini summaries for long email chains, but you can also opt out if you don’t want them.
  • PCMag Middle East ai: AI summary cards generated by Google Gemini are now automatically appearing in some emails. The feature is currently limited to the mobile app for Workspace accounts.
  • AlternativeTo: Google has introduced Gemini-powered summary cards on the Gmail app for Android and iOS devices, offering automatic synopses for email with several replies or lengthy discussions.
  • www.tomsguide.com: A personal experience discussing the surprising aspects of using Google's Gemini for Gmail summarization.
  • workspaceupdates.googleblog.com: Overview of Gemini's features and capabilities, including its email summarization function.
  • arstechnica.com: Gmail app will now create AI summaries
  • lifehacker.com: Gmail Will Automatically Summarize Your Emails Using Gemini AI
  • arstechnica.com: The Gmail app will now create AI summaries whether you want them or not. Workspace users will be seeing a lot more of Google's AI summaries soon.
  • Ars OpenForum: Workspace users will be seeing a lot more of Google's AI summaries soon.

S.Dyema Zandria@The Tech Basic //
Google is pushing the boundaries of AI video generation with the introduction of Veo 3, a model that now features native audio capabilities. Unveiled at Google I/O 2025, Veo 3 stands out as the first of its kind, capable of producing fully synchronized audio directly within the video output. This includes realistic dialogue, environmental background noise, and even music, making the generated videos more immersive than ever before. Google has also launched Flow, an AI filmmaking interface.

Veo 3 has been tested and can produce videos of realistic people with sound and music. Veo 3 can produce eight-second video clips at 720p resolution with matching sound effects and spoken words. To create a video, users can provide a text description or a still image, which Veo 3 then transforms into moving pictures. The model uses a diffusion method, learning from a vast dataset of real videos to generate scenes. A language model then ensures that the video accurately reflects the provided prompt, while an audio model adds sound effects and dialogue.

Google is making Veo 3 available to its Ultra subscribers through the Gemini app and Flow platform. Enterprise users can also access Veo 3 on Vertex AI. Veo 3 initially launched for US subscribers to AI Ultra, which costs $250 per month and includes 12,500 credits, but Google quickly expanded availability to 71 more countries outside the EU. This move underscores Google's commitment to pushing the limits of AI-generated content.

Recommended read:
References :
  • pub.towardsai.net: TAI #154: Gemini Deep Think, Veo 3’s Audio Breakthrough, & Claude 4’s Blackmail Drama
  • Ars OpenForum: Google's Veo 3 delivers AI videos of realistic people with sound and music. We put it to the test.
  • hothardware.com: Google I/O was about a week ago, and if you haven't heard, one of Google's biggest announcements was the company's Veo 3 generative AI model for video. Gone are the days of creepy, low-quality clips that vaguely look like Will Smith eating spaghetti and don't traverse the uncanny valley very well. Veo 3 is more than capable of generating that
  • The Tech Basic: Google Veo 3 is a new tool that makes eight-second video clips at 720p resolution with matching sound effects and spoken words. It takes a text description or a still image and turns it into moving pictures. It uses a method called diffusion to learn from real videos that it saw during training.

Aminu Abdullahi@eWEEK //
Google has unveiled significant advancements in its AI-driven media generation capabilities at Google I/O 2025, showcasing updates to Veo, Imagen, and Flow. The updates highlight Google's commitment to pushing the boundaries of AI in video and image creation, providing creators with new and powerful tools. A key highlight is the introduction of Veo 3, the first video generation model with integrated audio capabilities, addressing a significant challenge in AI-generated media by enabling synchronized audio creation for videos.

Veo 3 allows users to generate high-quality visuals with synchronized audio, including ambient sounds, dialogue, and environmental noise. According to Google, the model excels at understanding complex prompts, bringing short stories to life in video format with realistic physics and accurate lip-syncing. Veo 3 is currently available to Ultra subscribers in the US through the Gemini app and Flow platform, as well as to enterprise users via Vertex AI, demonstrating Google’s intent to democratize AI-driven content creation across different user segments.

In addition to Veo 3, Google has launched Imagen 4 and Flow, an AI filmmaking tool, alongside major updates to Veo 2. Veo 2 is receiving enhancements with filmmaker-focused features, including the use of images as references for character and scene consistency, precise camera controls, outpainting capabilities, and object manipulation tools. Flow integrates the Veo, Imagen, and Gemini models into a comprehensive platform allowing creators to manage story elements and create content with natural language narratives, making it easier than ever to bring creative visions to life.

Recommended read:
References :
  • Data Phoenix: Google updated its model lineup and introduced a 'Deep Think' reasoning mode for Gemini 2.5 Pro
  • Maginative: Google’s revamped Canvas, powered by the Gemini 2.5 Pro model, lets you turn ideas into apps, quizzes, podcasts, and visuals in seconds—no code required.
  • Replicate's blog: Generate incredible images with Google's Imagen-4
  • AI News | VentureBeat: At Google I/O, Sergey Brin makes surprise appearance — and declares Google will build the first AGI
  • www.tomsguide.com: I just tried Google’s smart glasses built on Android XR — and Gemini is the killer feature
  • Data Phoenix: Google has launched major Gemini updates, including free visual assistance via Gemini Live, new subscription tiers starting at $19.99/month, advanced creative tools like Veo 3 for video generation with native audio, and an upcoming autonomous Agent Mode for complex task management.
  • sites.libsyn.com: Google's VEO 3 Is Next Gen AI Video, Gemini Crushes at Google I/O & OpenAI's Big Bet on Jony Ive
  • eWEEK: Google’s Co-Founder in Office ‘Pretty Much Every Day’ to Work on AI
  • learn.aisingapore.org: Advancing Gemini’s security safeguards – Google DeepMind
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent models are getting even better
  • TestingCatalog: Opus 4 outperforms GPT-4.1 and Gemini 2.5 Pro in coding benchmarks
  • LearnAI: Updates to Gemini 2.5 from Google DeepMind
  • pub.towardsai.net: This week, Google’s flagship I/O 2025 conference and Anthropic’s Claude 4 release delivered further advancements in AI reasoning, multimodal and coding capabilities, and somewhat alarming safety testing results.
  • learn.aisingapore.org: Updates to Gemini 2.5 from Google DeepMind
  • Data Phoenix: Google announced several updates across its media generation models
  • thezvi.wordpress.com: Fun With Veo 3 and Media Generation
  • Maginative: Google Gemini Can Now Watch Your Videos on Google Drive
  • www.marktechpost.com: A Coding Guide for Building a Self-Improving AI Agent Using Google’s Gemini API with Intelligent Adaptation Features

@learn.aisingapore.org //
Google DeepMind has unveiled Gemma 3n, a groundbreaking compact and highly efficient multimodal AI model designed for real-time, on-device use. This innovation addresses the rising demand for faster, smarter, and more private AI experiences directly on mobile devices such as phones, tablets, and laptops. By embedding intelligence locally, developers can unlock near-instant responsiveness, reduce memory demands, and enhance user privacy. Gemma 3n is optimized for Android and Chrome platforms, targeting performance across these mobile environments and serving as the foundation for the next version of Gemini Nano.

Gemma 3n leverages a novel Google DeepMind innovation called Per-Layer Embeddings (PLE) that significantly reduces RAM usage. This technology allows the 5B- and 8B-parameter variants to operate with dynamic memory footprints of just 2GB and 3GB, respectively. This makes it possible to run larger models on mobile devices, or live-stream them from the cloud, with memory overhead comparable to smaller models. The model also incorporates advanced activation quantization and KV-cache sharing to improve on-device performance and efficiency, responding approximately 1.5x faster on mobile, with significantly better quality, than previous models.
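A rough sanity check on those figures (the bit widths below are illustrative assumptions; Google has not published the exact breakdown): at 16 bits per weight, 5B parameters alone would occupy about 10GB, so a roughly 2GB dynamic footprint implies aggressive quantization combined with PLE keeping only part of the parameters resident at once.

```python
# Naive resident-memory estimate: parameters x bytes-per-parameter.
# Bit widths are illustrative; the actual breakdown behind the
# 2GB/3GB dynamic footprints is not public.

def param_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Decimal GB needed to hold all parameters at the given precision."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for params in (5, 8):
    for bits in (16, 4):
        print(f"{params}B params @ {bits}-bit ~ {param_memory_gb(params, bits):.1f} GB")
```

Note that even at 4-bit, 5B parameters come to about 2.5GB, slightly above the reported 2GB footprint, which is consistent with PLE streaming per-layer embeddings in on demand rather than holding everything in RAM.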

In addition to Gemma 3n, Google is also experimenting with Gemini Diffusion, an innovative system that generates text using diffusion techniques rather than traditional word-by-word prediction. This approach, inspired by image generation, refines noise in multiple passes to create full sections of text. DeepMind says this leads to more consistent and logically connected output, making it particularly effective for tasks requiring precision, coherence, and iteration, such as code generation and text editing. Gemini Diffusion achieves speeds of up to 2,000 tokens per second on programming tasks, demonstrating its potential as a fast and competitive alternative to autoregressive models.
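To make the contrast concrete, here is a toy illustration (an illustration only; Gemini Diffusion's actual algorithm is not public): an autoregressive decoder commits one token per sequential step, while a diffusion-style decoder starts from placeholder "noise" across all positions and refines the whole sequence in parallel over a few passes.

```python
import random

TARGET = ["the", "cat", "sat", "on", "the", "mat"]

def autoregressive(target):
    """One token per step: len(target) sequential steps."""
    out, steps = [], 0
    for tok in target:
        out.append(tok)   # each token waits on all previous ones
        steps += 1
    return out, steps

def diffusion_style(target, passes=3, seed=0):
    """All positions refined in parallel over a few passes."""
    rng = random.Random(seed)
    out = ["?"] * len(target)          # start from pure "noise"
    for step in range(passes):
        for i, tok in enumerate(target):
            # each pass resolves a growing fraction of positions;
            # the final pass (threshold 1.0) fixes everything
            if rng.random() < (step + 1) / passes:
                out[i] = tok
    return out, passes

print(autoregressive(TARGET))
print(diffusion_style(TARGET))
```

The point of the sketch is step count: the parallel refiner reaches the same output in a fixed number of passes instead of one step per token, which is what lets diffusion-style decoding reach throughput like the reported 2,000 tokens per second.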

Recommended read:
References :
  • Google DeepMind Blog: Announcing Gemma 3n preview: Powerful, efficient, mobile-first AI
  • THE DECODER: Gemini Diffusion could be Google's most important I/O news that slipped under the radar
  • www.marktechpost.com: Google DeepMind Releases Gemma 3n: A Compact, High-Efficiency Multimodal AI Model for Real-Time On-Device Use
  • LearnAI: Following the exciting launches of Gemma 3 and Gemma 3 QAT, our family of state-of-the-art open models capable of running on a single cloud or desktop accelerator, we’re pushing our vision for accessible AI even further.

@zdnet.com //
Google has officially launched Flow, an AI-powered filmmaking tool designed to simplify the creation of cinematic videos. Unveiled at Google I/O 2025, Flow leverages Google's advanced AI models, including Veo for video generation, Imagen for image production, and Gemini for orchestration through natural language. This new platform is an evolution of the earlier experimental VideoFX project and aims to make it easier for storytellers to conceptualize, draft, and refine video sequences using AI. Flow provides a creative toolkit for video makers, positioning itself as a storytelling platform rather than just a simple video generator.

Flow acts as a hybrid tool that combines the strengths of Veo, Imagen, and Gemini. Veo 3, the improved video model underneath Flow, adds motion and realism meant to mimic physics, marking a step forward in dynamic content creation, even allowing for the generation of sound effects, background sounds, and character dialogue directly within videos. With Imagen, users can create visual assets from scratch and bring them into their Flow projects. Gemini helps fine-tune output, adjusting timing, mood, or even narrative arcs through conversational inputs. The platform focuses on continuity and filmmaking, allowing users to reuse characters or scenes across multiple clips while maintaining consistency.

One of Flow's major appeals is its ability to handle visual consistency, enabling scenes to blend into one another with more continuity than earlier AI systems. Filmmakers can not only edit transitions but also set camera positions, plan pans, and tweak angles. For creators frustrated by scattered generations and unstructured assets, Flow introduces a management system that organizes files, clips, and even the text used to create them. Currently, Flow is accessible to users in the U.S. subscribed to either the AI Pro or AI Ultra tier. The Pro plan includes 100 video generations per month, while the Ultra plan, at $249.99 per month, offers unlimited generations and earlier access to Veo 3 with built-in audio.

Recommended read:
References :
  • Analytics Vidhya: Google I/O 2025: AI Mode on Google Search, Veo 3, Imagen 4, Flow, Gemini Live, and More
  • TestingCatalog: Google prepares to launch Flow, a new video editing tool, at I/O 2025
  • AI & Machine Learning: Expanding Vertex AI with the next wave of generative AI media models
  • AI News | VentureBeat: Google just leapfrogged every competitor with mind-blowing AI that can think deeper, shop smarter, and create videos with dialogue
  • www.techradar.com: Google's Veo 3 marks the end of AI video's 'silent era'
  • www.zdnet.com: Google Flow is a new AI video generator meant for filmmakers - how to try it today
  • www.techradar.com: Want to be the next Spielberg? Google’s AI-powered Flow could bring your movie ideas to life
  • the-decoder.com: Google showed off a range of new features for creators, developers, and everyday users at I/O 2025, beyond its headline announcements about search and AI models.
  • Digital Information World: At its annual I/O event, Google introduced a new AI-based application called , positioned as a creative toolkit for video makers.
  • Maginative: Google has launched Flow, a new AI-powered filmmaking tool designed to simplify cinematic clip creation and scene extension using its advanced Veo, Imagen, and Gemini models.
  • www.tomsguide.com: Google Veo 3 and Flow: The future of AI filmmaking is here — here’s how it works
  • THE DECODER: Google shows AI filmmaking tool, XR glasses and launches $250 Gemini subscription

@zdnet.com //
Google is expanding access to its AI-powered research assistant, NotebookLM, with the launch of a standalone mobile app for Android and iOS devices. This marks a significant step for NotebookLM, transitioning it from a web-based beta tool to a more accessible platform for mobile users. The app retains core functionalities like source-grounded summaries and interactive Q&A, while also introducing new audio-first features designed for on-the-go content consumption. This release aligns with Google's broader strategy to integrate AI into its products, offering users a flexible way to absorb and interact with structured knowledge.

The NotebookLM mobile app places a strong emphasis on audio interaction, featuring AI-generated podcast-style summaries that can be played directly from the project list. Users can generate these audio overviews with a quick action button, creating an experience akin to a media player. The app also supports interactive mode during audio sessions, allowing users to ask questions mid-playback and participate in live dialogue. This focus on audio content consumption and interaction differentiates the mobile app and suggests that passive listening and educational use are key components of the intended user experience.

The mobile app mirrors the web-based layout, offering functionalities across Sources, Chat, and Interactive Assets, including Notes, Audio Overviews, and Mind Maps. Users can now add sources directly from their mobile devices by using the "Share" button in any app. The new NotebookLM app aims to be a research assistant that is accessible to students, researchers, and content creators, providing a mobile solution for absorbing structured knowledge.

Recommended read:
References :
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • www.tomsguide.com: Google just added NotebookLM to Android — here's how it can level up your note-taking
  • www.zdnet.com: Google's popular AI tool gets its own Android app - how to use NotebookLM on your phone
  • THE DECODER: Google launches NotebookLM mobile app for Android and iOS
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • blog.google: Google's official announcement of the standalone NotebookLM app for Android and iOS.
  • www.laptopmag.com: On Monday Google launched the NotebookLM stand-alone app for Android and iOS, ahead of Google I/O 2025.

Alexey Shabanov@TestingCatalog //
Google has launched the NotebookLM mobile app for Android and iOS, bringing its AI-powered research assistant to mobile devices. This release marks a significant step in expanding access to NotebookLM, which was initially launched as a web-based tool in 2023 under the codename "Project Tailwind." The mobile app aims to offer personalized learning and efficient content synthesis, allowing users to interact with and process information on the go. Now officially available to everyone after months of waiting, the app offers the core features of NotebookLM, with more functionality promised over time.

The NotebookLM mobile app focuses on audio-first experiences, with features like audio overviews that generate podcast-style summaries. These summaries can be played directly from the list view without opening a project, making it feel like a media player for casual content consumption. Users can also download audio overviews for offline playback and listen in the background, supporting learning during commutes or other activities. Moreover, the app supports interactive mode in audio sessions, where users can ask questions mid-playback, creating a live dialogue experience.

The mobile app retains the functionality of the web version, including the ability to create new notebooks and upload sources like PDFs, Google Docs, and YouTube videos. Users can add sources directly from their mobile devices by using the "Share" button in any app, making it easier to build and maintain research libraries. NotebookLM relies only on user-uploaded sources, ensuring reliable and verifiable information. The rollout underscores Google’s evolving strategy for NotebookLM, transitioning from a productivity assistant to a multimodal content platform, appealing to students, researchers, and content creators seeking flexible ways to absorb structured knowledge.

Recommended read:
References :
  • AI News | VentureBeat: Google finally launches NotebookLM mobile app at I/O: hands-on, first impressions
  • www.laptopmag.com: An exclusive look at Google's NotebookLM app on Android and iOS
  • TestingCatalog: Google launches NotebookLM mobile app with audio-first features on mobile
  • www.tomsguide.com: NotebookLM just arrived on Android — and it can turn your notes into podcasts
  • THE DECODER: Google launches NotebookLM mobile app for Android and iOS
  • MarkTechPost: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • the-decoder.com: Google launches NotebookLM mobile app for Android and iOS
  • www.marktechpost.com: Google AI Releases Standalone NotebookLM Mobile App with Offline Audio and Seamless Source Integration
  • www.techradar.com: Google's free NotebookLM AI app is out now for Android and iOS – here's why it's a day-one download for me