News from the AI & ML world

DeeperML - #google

@blog.google //
Google is expanding access to its Gemini AI app to all Google Workspace for Education users, marking a significant step in integrating AI into educational settings. This rollout, announced on June 20, 2025, provides educators and students with a range of AI-powered tools, including real-time support for learning, assistance in creating lesson plans, and capabilities for providing feedback on student work, all designed to enhance the learning experience and promote AI literacy. The Gemini app is covered under the Google Workspace for Education Terms of Service, ensuring enterprise-grade data protection and compliance with requirements like FERPA, COPPA, FedRAMP, and HIPAA.

A key aspect of this expansion is the implementation of stricter content policies for users under 18. These policies are designed to prevent potentially inappropriate or harmful responses, creating a safer online environment for younger learners. Additionally, Google is introducing a youth onboarding experience with AI literacy resources, endorsed by ConnectSafely and the Family Online Safety Institute, to guide students in using AI responsibly. The first time a user asks a fact-based question, a "double-check response" feature, powered by Google Search, will automatically run to validate the answer.

Gemini incorporates LearnLM, Google’s family of models fine-tuned for learning and built with experts in education, making it a leading model for educational purposes. To ensure responsible use, Google provides resources for educators, including a Google teacher center offering guidance on incorporating Gemini into lesson plans and teaching responsible AI practices. Administrators can manage user access to the Gemini app through the Google Workspace Admin Help Center, allowing them to set up groups or organizational units to control access within their domain and tailor the AI experience to specific educational needs.

Recommended read:
References :
  • edu.google.com: The Gemini app is covered under the Google Workspace for Education Terms of Service for all Workspace for Education users.
  • Google Workspace Updates: The Gemini app is now available to all education users.
  • chromeunboxed.com: Covers the Gemini app, now available for all Education users, with extra safeguards for younger students.

@colab.research.google.com //
Google's Magenta project has unveiled Magenta RealTime (Magenta RT), an open-weights live music model designed for interactive music creation, control, and performance. This innovative model builds upon Google DeepMind's research in real-time generative music, opening opportunities for unprecedented live music exploration. Magenta RT is a significant advancement in AI-driven music technology, making live music creation accessible to newcomers while enhancing the practice of experienced musicians. As an open-weights model, Magenta RT is targeted towards eventually running locally on consumer hardware, showcasing Google's commitment to democratizing AI music creation tools.

Magenta RT, an 800 million parameter autoregressive transformer model, was trained on approximately 190,000 hours of instrumental stock music. It leverages SpectroStream for high-fidelity audio (48kHz stereo) and a newly developed MusicCoCa embedding model, inspired by MuLan and CoCa. This combination allows users to dynamically shape and morph music styles in real time by manipulating style embeddings, effectively blending various musical styles, instruments, and attributes. The model code is available on GitHub, and the weights are available on Google Cloud Storage and Hugging Face under permissive licenses with some additional bespoke terms.

Magenta RT operates by generating music in sequential chunks, conditioned on both previous audio output and style embeddings. This approach enables the creation of interactive soundscapes for performances and virtual spaces. Impressively, the model achieves a real-time factor of 1.6 on a free-tier Colab TPU (v2-8), generating two seconds of audio in just 1.25 seconds. This technology unlocks the potential to explore entirely new musical landscapes, experiment with never-before-heard instrument combinations, and craft unique sonic textures, ultimately fostering innovative forms of musical expression and performance.
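As a rough illustration of the numbers above, the sketch below computes the reported real-time factor and mimics the chunk-by-chunk conditioning loop. The function and parameter names are hypothetical stand-ins, not the actual Magenta RT API:

```python
CHUNK_SECONDS = 2.0   # each generated chunk covers 2 s of audio
GEN_SECONDS = 1.25    # reported wall-clock time to generate one chunk

def real_time_factor(chunk_s: float, gen_s: float) -> float:
    """Audio seconds produced per wall-clock second (>1 means faster than real time)."""
    return chunk_s / gen_s

def generate_stream(n_chunks: int, style_embedding: list[float]) -> list[str]:
    """Simulate sequential chunk generation conditioned on prior output + style."""
    history: list[str] = []
    for i in range(n_chunks):
        # A real model would condition on `history` (the audio context so far)
        # and on `style_embedding` here; we just record the conditioning.
        chunk = f"chunk{i}|ctx={len(history)}|style_dim={len(style_embedding)}"
        history.append(chunk)
    return history

print(real_time_factor(CHUNK_SECONDS, GEN_SECONDS))  # 1.6
```

Because each chunk is conditioned on everything generated before it, style changes applied mid-stream morph the music rather than restarting it.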

Recommended read:
References :
  • Magenta: Today, we’re happy to share a research preview of Magenta RealTime (Magenta RT), an open-weights live music model that allows you to interactively create, control and perform music in the moment.
  • THE DECODER: Google has released Magenta RealTime (Magenta RT), an open-source AI model for live music creation and control.
  • the-decoder.com: Google has released Magenta RealTime (Magenta RT), an open-source AI model for live music creation and control.
  • github.com: Magenta RealTime: An Open-Weights Live Music Model
  • aistudio.google.com: Magenta RealTime: An Open-Weights Live Music Model
  • huggingface.co: Sharing a research preview of Magenta RealTime (Magenta RT), an open-weights live music model that allows you to interactively create, control and perform music in the moment
  • Magenta: Magenta RealTime: An Open-Weights Live Music Model
  • Magenta: Magenta RT is the latest in a series of models and applications developed as part of the Magenta Project.
  • www.marktechpost.com: Google Researchers Release Magenta RealTime: An Open-Weight Model for Real-Time AI Music Generation
  • Simon Willison's Weblog: Fun new "live music model" release from Google DeepMind: Today, we’re happy to share a research preview of Magenta RealTime (Magenta RT), an open-weights live music model that allows you to interactively create, control and perform music in the moment.
  • MarkTechPost: Google’s Magenta team has introduced Magenta RealTime (Magenta RT), an open-weight, real-time music generation model that brings unprecedented interactivity to generative audio.

Ellie Ramirez-Camara@Data Phoenix //
Google has recently launched an experimental feature that leverages its Gemini models to create short audio overviews for certain search queries. This new feature aims to provide users with an audio format option for grasping the basics of unfamiliar topics, particularly beneficial for multitasking or those who prefer auditory learning. Users who opt into the experiment will see an option on the search results page to generate an audio overview for queries that Google determines would benefit from this format.

When an audio overview is ready, it will be presented to the user with an audio player that offers basic controls such as volume, playback speed, and play/pause buttons. Significantly, the audio player also displays relevant web pages, allowing users to easily access more in-depth information on the topic being discussed in the overview. This feature builds upon Google's earlier work with audio overviews in NotebookLM and Gemini, where it allowed for the creation of podcast-style discussions and audio summaries from provided sources.

Google is also experimenting with a new feature called Search Live, which enables users to have real-time verbal conversations with Google’s Search tools, providing interactive responses. This Gemini-powered AI simulates a friendly and knowledgeable human, inviting users to literally talk to their search bar. The AI doesn't stop listening after just one question but rather engages in a full dialogue, functioning in the background even when the user leaves the app. Google refers to this system as “query fan-out,” which means that instead of just answering your question, it also quietly considers related queries, drawing in more diverse sources and perspectives.
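The fan-out idea can be sketched in a few lines. The helper names and the stub search backend below are hypothetical illustrations of the concept, not Google's implementation:

```python
def fan_out(query: str) -> list[str]:
    """Derive related sub-queries from the original question."""
    return [
        query,
        f"{query} overview",
        f"{query} examples",
        f"{query} pros and cons",
    ]

def answer(query: str, search) -> dict:
    """Answer a query by also issuing related queries to widen the source pool."""
    sources: list[str] = []
    for q in fan_out(query):
        sources.extend(search(q))  # each sub-query contributes its own sources
    return {"query": query, "sources": sorted(set(sources))}

# Usage with a stub search backend that returns one document per query:
result = answer("solar panels", lambda q: [f"doc for {q}"])
```

The payoff is breadth: the final answer can cite sources that the literal query alone would never have surfaced.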

Additionally, Gemini on Android can now identify songs, similar to the functionality previously offered by Google Assistant. Users can ask Gemini, “What song is this?” and the chatbot will trigger Google’s Song Search interface, which can recognize music playing nearby, a track from a playlist, or even a tune the user hums. However, unlike the seamless integration of Google Assistant’s Now Playing feature, this song identification process is not fully native to Gemini. When initiated, it launches a full-screen listening interface from the Google app, which feels a bit clunky and doesn't stay within Gemini Live’s conversational experience.

Recommended read:
References :
  • Data Phoenix: Google's newest experiment brings short audio overviews to some Search queries
  • the-decoder.com: Google is rolling out a new feature called Audio Overviews in its Search Labs.
  • thetechbasic.com: Google has begun rolling out Search Live in AI Mode for its Android and iOS apps in the United States. This new feature invites users to speak naturally and receive real‑time, spoken answers powered by a custom version of Google’s Gemini model. Search Live combines the conversational strengths of Gemini with the full breadth of […]
  • chromeunboxed.com: The transition from Google Assistant to Gemini, while exciting in many ways, has come with a few frustrating growing pains. As Gemini gets smarter with complex tasks, we’ve sometimes lost the simple, everyday features we relied on with Assistant.

@www.marktechpost.com //
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.

The AI model was trained using two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. Internal testing demonstrated the algorithm's ability to accurately predict the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios, extending forecast capabilities up to 15 days.
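To picture what a 50-member ensemble looks like, here is a purely illustrative random-walk sketch that samples 50 candidate tracks over a 15-day horizon. It is a toy for intuition only, not Google's model:

```python
import random

def sample_scenarios(lat: float, lon: float, n: int = 50,
                     days: int = 15, seed: int = 0) -> list[list[tuple[float, float]]]:
    """Sample `n` toy cyclone tracks, one (lat, lon) point per day."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n):
        track = [(lat, lon)]  # day 0: current observed position
        for _ in range(days):
            # A real model predicts the next position from learned dynamics;
            # this toy just perturbs the previous point.
            dlat = rng.uniform(-0.5, 0.5)
            dlon = rng.uniform(-0.5, 0.5)
            track.append((track[-1][0] + dlat, track[-1][1] + dlon))
        scenarios.append(track)
    return scenarios

tracks = sample_scenarios(25.0, -70.0)  # 50 tracks, 16 points each (day 0..15)
```

Forecasters read the spread of such an ensemble as uncertainty: tightly clustered tracks mean high confidence, widely fanned tracks mean low confidence.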

This breakthrough has already seen adoption by the U.S. National Hurricane Center, which is now using these experimental AI predictions alongside traditional forecasting models in its operational workflow. The ability to forecast up to 15 days in advance marks a significant improvement over current models, which typically provide 3-5 day forecasts. On Weather Lab, the model is presented alongside two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency service experts better anticipate a cyclone’s path and intensity.

Recommended read:
References :
  • siliconangle.com: Google LLC today detailed an artificial intelligence model that can forecast the path and intensity of tropical cyclones days in advance.
  • AI News | VentureBeat: Google DeepMind just changed hurricane forecasting forever with new AI model
  • MarkTechPost: Google AI Unveils a Hybrid AI-Physics Model for Accurate Regional Climate Risk Forecasts with Better Uncertainty Assessment
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It
  • SiliconANGLE: Google develops AI model for forecasting tropical cyclones. According to the company, the algorithm was developed through a collaboration between its Google Research and DeepMind units. It’s available through a newly launched website called Weather Lab.
  • The Official Google Blog: Weather Lab is an interactive website for sharing Google’s AI weather models.
  • www.engadget.com: Google DeepMind is sharing its AI forecasts with the National Weather Service
  • www.producthunt.com: Predicting cyclone paths & intensity 15 days ahead |
  • the-decoder.com: Google Deepmind launches Weather Lab to test AI models for tropical cyclone forecasting
  • AIwire: Google DeepMind Launches Interactive AI That Lets You Explore Storm Forecasts
  • www.aiwire.net: Google DeepMind and Google Research are launching Weather Lab - a new AI-driven platform designed specifically to improve forecasts for tropical cyclone formation, intensity, and trajectory.

@cloud.google.com //
Google Cloud is offering Financial Services Institutions (FSIs) a powerful solution to streamline and enhance their Know Your Customer (KYC) processes by combining the Agent Development Kit (ADK) with Gemini models and Search Grounding. KYC processes are critical for regulatory compliance and risk mitigation, involving the verification of customer identities and the assessment of associated risks. Traditional KYC methods are often manual, time-consuming, and prone to errors, which is a serious limitation when customers expect instant approvals. The ADK is a flexible, modular framework for developing and deploying AI agents. While optimized for Gemini and the Google ecosystem, it is model-agnostic, deployment-agnostic, and built for compatibility with other frameworks. ADK was designed to make agent development feel more like software development, making it easier for developers to create, deploy, and orchestrate agentic architectures that range from simple tasks to complex workflows.

The ADK simplifies the creation and orchestration of agents, handling agent definition, tool integration, state management, and inter-agent communication. These agents are powered by Gemini models hosted on Vertex AI, providing core reasoning, instruction-following, and language understanding capabilities. Gemini's multimodal analysis, including image processing from IDs and documents, and multilingual support further enhances the KYC process for diverse customer bases. By incorporating Search Grounding, the system connects Gemini responses to real-time information from Google Search, reducing hallucinations and increasing the reliability of the information provided. Furthermore, integration with BigQuery allows secure interaction with internal datasets, ensuring comprehensive data access while maintaining data security.

The multi-agent architecture offers several key benefits for FSIs including improved efficiency through the automation of large portions of the KYC workflow, reducing manual effort and turnaround times. AI is leveraged for consistent document analysis and comprehensive external checks, leading to enhanced accuracy. The solution also strengthens compliance by improving auditability through clear reporting and source attribution via grounding. Google Cloud provides resources to get started, including $300 in free credit for new customers to build and test proof of concepts, along with free monthly usage of over 20 AI-related products and APIs. The combination of ADK, Gemini models, Search Grounding, and BigQuery integration represents a significant advancement in KYC processes, offering FSIs a robust and efficient solution to meet regulatory requirements and improve customer experience.
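A minimal sketch of that orchestration pattern is shown below, with illustrative stand-in agent classes and shared case state. These names are hypothetical and do not reflect the real ADK API:

```python
from dataclasses import dataclass, field

@dataclass
class KycCase:
    """Shared state passed between agents in the pipeline."""
    customer_id: str
    findings: dict = field(default_factory=dict)

class DocumentAgent:
    def run(self, case: KycCase) -> None:
        # Stand-in for Gemini's multimodal analysis of IDs and documents.
        case.findings["documents"] = "id_verified"

class ScreeningAgent:
    def run(self, case: KycCase) -> None:
        # Stand-in for grounded external checks (e.g. adverse-media search).
        case.findings["screening"] = "no_adverse_media"

class ReportAgent:
    def run(self, case: KycCase) -> None:
        # Summarize prior findings for auditability.
        summary = ", ".join(f"{k}={v}" for k, v in sorted(case.findings.items()))
        case.findings["report"] = f"KYC report for {case.customer_id}: {summary}"

def run_kyc(customer_id: str) -> KycCase:
    case = KycCase(customer_id)
    for agent in (DocumentAgent(), ScreeningAgent(), ReportAgent()):
        agent.run(case)  # the orchestrator sequences agents over shared state
    return case
```

The audit trail FSIs need falls out naturally: every agent writes its findings into the case, and the final report attributes each conclusion to its step.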

Recommended read:
References :
  • AI & Machine Learning: Discusses how Google's Agent Development Kit (ADK) and Gemini can be used to build multi-agent KYC workflows.
  • google.github.io: Simplifies the creation and orchestration of agents. ADK handles agent definition, tool integration, state management, and inter-agent communication. It’s a platform and model-agnostic agentic framework which provides the scaffolding upon which complex agentic workflows can be built.
  • Lyzr AI: AI Agents for KYC Verification: Automating Compliance with Intelligent Workflows

Michael Nuñez@AI News | VentureBeat //
Google has recently rolled out its latest Gemini 2.5 Flash and Pro models on Vertex AI, bringing advanced AI capabilities to enterprises. The release includes the general availability of Gemini 2.5 Flash and Pro, along with a new Flash-Lite model available for testing. These updates aim to provide organizations with the tools needed to build sophisticated and efficient AI solutions.

The Gemini 2.5 Flash model is designed for speed and efficiency, making it suitable for tasks such as large-scale summarization, responsive chat applications, and data extraction. Gemini 2.5 Pro handles complex reasoning, advanced code generation, and multimodal understanding. Additionally, the new Flash-Lite model offers cost-efficient performance for high-volume tasks. These models are now production-ready within Vertex AI, offering the stability and scalability needed for mission-critical applications.

Google CEO Sundar Pichai has highlighted the improved performance of the Gemini 2.5 Pro update, particularly in coding, reasoning, science, and math. The update also incorporates feedback to improve the style and structure of responses. The company is also offering Supervised Fine-Tuning (SFT) for Gemini 2.5 Flash, enabling enterprises to tailor the model to their unique data and needs. A new updated Live API with native audio is also in public preview, designed to streamline the development of complex, real-time audio AI systems.
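One way to think about the tiering described above is as a routing table that matches each workload to the cheapest model that can handle it. The task-to-model mapping below is an illustrative assumption for demonstration, not official guidance; the model IDs follow the announced 2.5 family naming:

```python
# Illustrative routing: Pro for complex reasoning and code, Flash for
# responsive workloads, Flash-Lite for high-volume cost-sensitive jobs.
MODEL_FOR_TASK = {
    "complex_reasoning": "gemini-2.5-pro",
    "code_generation": "gemini-2.5-pro",
    "chat": "gemini-2.5-flash",
    "summarization": "gemini-2.5-flash",
    "bulk_classification": "gemini-2.5-flash-lite",
}

def pick_model(task: str) -> str:
    """Pick a model tier for a task, defaulting to the fast mid-tier."""
    return MODEL_FOR_TASK.get(task, "gemini-2.5-flash")
```

In production the chosen model ID would be passed to a Vertex AI request; the routing logic itself stays a few lines of plain code.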

Recommended read:
References :
  • AI & Machine Learning: Gemini 2.5 model hardening.
  • deepmind.google: Advancing Gemini's security safeguards.
  • AI GPT Journal: How to Use Gemini Live for Work, Life, and Everything in Between
  • LearnAI: Gemini 2.5: Updates to our family of thinking models. Today we are excited to share updates across the board to our Gemini 2.5 model family.
  • AI News | VentureBeat: Google launches production-ready Gemini 2.5 Pro and Flash AI models for enterprises while introducing cost-efficient Flash-Lite to challenge OpenAI's market dominance.
  • www.laptopmag.com: Google brings Gemini's latest 2.5 Flash and Pro models to audiences, and makes Flash-Lite available for testing.
  • thezvi.substack.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506, because I mean at this point it has to be the companies intentionally fucking with us, right?
  • thezvi.wordpress.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506.
  • AI & Machine Learning: The momentum of the Gemini 2.5 era continues to build.
  • learn.aisingapore.org: This post summarizes the updates to Google Gemini 2.5 models, including the release of Pro and Flash models, and the availability of the Flash-Lite model.
  • TestingCatalog: Google Gemini 2.5 out of preview across AI Studio, Vertex AI, and Gemini app
  • www.techradar.com: Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
  • www.tomsguide.com: Google Gemini 2.5 models are now generally available for developers and researchers.

Unknown (noreply@blogger.com)@Google Workspace Updates //
Google is significantly expanding its AI capabilities across its platforms, impacting software development, advertising, and search functionalities. A major upgrade to Gemini Code Assist now features the Gemini 2.5 Pro model. This enhances the code assistance tool with a 1 million-token context window and custom commands, improving its capacity for more comprehensive code generation, deeper refactoring, and more thorough pull-request reviews. This upgrade allows developers to tailor the assistant's behavior to adhere to internal conventions and reuse prompts, significantly boosting coding task completion rates. Internal studies indicate that teams using the tool are 2.5 times more likely to finish typical coding tasks, with early community benchmarks showing higher accuracy than Copilot on context-heavy queries.

Google is also innovating in the realm of search and content delivery, testing a feature that transforms some search results into AI-generated podcasts. This experimental feature, part of Google Labs, aims to provide users with "quick, conversational audio overviews" of search queries. The feature leverages Google’s Gemini model to research search queries, analyze various third-party websites, and generate a transcript that is then read aloud by two AI-generated hosts. While this new feature offers a convenient way for users to absorb information while multitasking, it raises concerns about potentially diverting traffic away from the original sources of information.

In a groundbreaking move, an AI-generated advertisement created with Google's Veo3 aired during the NBA Finals. This marks a significant milestone in AI-driven content creation, demonstrating the potential for drastic cost reductions in advertising production. The advertisement for the event-betting platform Kalshi was created by AI Filmmaker PJ Accetturo in just three days, resulting in an estimated 95% cost reduction compared to traditional commercial production methods. This showcases a shift towards smaller, more agile creative teams leveraging AI to produce high-volume, brand-relevant content quickly and affordably while highlighting the importance of human skills such as comedy writing and directorial experience in the age of AI.

Recommended read:
References :
  • PCMag Middle East ai: Just Press Play: Google Is Turning Some Search Results Into AI Podcasts
  • www.marktechpost.com: AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%
  • TestingCatalog: Gemini Code Assist gets latest Gemini 2.5 Pro with context management and rules
  • MarkTechPost: AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%
  • www.techradar.com: Discusses the forthcoming Audio Overviews in Google Search, which leverage AI to generate podcast-style explainers of web results.
  • Google DeepMind Blog: Explore the latest Gemini 2.5 model updates with enhanced performance and accuracy: Gemini 2.5 Pro now stable, Flash generally available, and the new Flash-Lite in preview.
  • TestingCatalog: Discover the latest in AI with Google Gemini 2.5's stable release, featuring Pro and Flash models.
  • AI & Machine Learning: Gemini momentum continues with launch of 2.5 Flash-Lite and general availability of 2.5 Flash and Pro on Vertex AI
  • LearnAI: Today we are excited to share updates across the board to our Gemini 2.5 model family: Gemini 2.5 Pro is generally available and stable (no changes from the 06-05 preview) Gemini 2.5 Flash is generally available and stable (no changes from the 05-20 preview, see pricing updates below) Gemini 2.5 Flash-Lite is now available in...
  • www.laptopmag.com: Google brings Gemini's latest 2.5 Flash and Pro models to audiences, and makes Flash-Lite available for testing.
  • thezvi.wordpress.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506, because I mean at this point it has to be the companies intentionally fucking with us, right? Google: 🔔Our updated Gemini 2.5 Pro Preview continues to excel at coding, helping you …
  • thezvi.substack.com: Google recently came out with Gemini-2.5-0605, to replace Gemini-2.5-0506.
  • learn.aisingapore.org: Gemini 2.5 model family expands
  • www.techradar.com: Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
  • learn.aisingapore.org: Today we are excited to share updates across the board to our Gemini 2.5 model family: Gemini 2.5 Pro is generally available and stable (no changes from the 06-05 preview) Gemini 2.5 Flash is generally available and stable (no changes from the 05-20 preview, see pricing updates below) Gemini 2.5 Flash-Lite is now available in...
  • global.techradar.com: Google launches Gemini Flash-Lite 2.5: the fastest AI model ever
  • Simon Willison: Blogged too much today and had to send it all out in a newsletter - it's a pretty fun one, covering Gemini 2.5 and Mistral Small 3.2 and the fact that most LLMs will absolutely try and murder you given the chance (and a suitably contrived scenario)

Michael Kan@PCMag Middle East ai //
Google is pushing forward with advancements in artificial intelligence across a range of its services. Google DeepMind has developed an AI model that can forecast tropical cyclones with state-of-the-art accuracy, predicting their path and intensity up to 15 days in advance. This model is now being used by the U.S. National Hurricane Center in its official forecasting workflow, marking a significant shift in how these storms are predicted. The AI system learns from decades of historical storm data and can generate 50 different hurricane scenarios, offering a 1.5-day improvement in prediction accuracy compared to traditional models. Google has launched a Weather Lab website to make this AI accessible to researchers, providing historical forecasts and data for comparison.

Google is also experimenting with AI-generated search results in audio format, launching "Audio Overviews" in its Search Labs. Powered by the Gemini language model, this feature delivers quick, conversational summaries for certain search queries. Users can opt into the test and, when available, a play button will appear in Google Search, providing an audio summary alongside relevant websites. The AI researches the query and generates a transcript, read out loud by AI-generated voices, citing its sources. This feature aims to provide a hands-free way to absorb information, particularly for users who are multitasking or prefer audio content.

The introduction of AI-powered features comes amid ongoing debate about the impact on traffic to third-party websites. There are concerns that Google’s AI-driven search results may prioritize its own content over linking to external sources. Some users have also noted instances of Google's AI search summaries spreading incorrect information. Google says it has seen a more than 10% increase in usage of Google for the types of queries that show AI Overviews.

Recommended read:
References :
  • SiliconANGLE: Google develops AI model for forecasting tropical cyclones
  • THE DECODER: Google launches Audio Overviews in search results
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It

Matt G.@Search Engine Journal //
Google has launched Audio Overviews in Search Labs, introducing a new way for users to consume information hands-free and on-the-go. This experimental feature utilizes Google's Gemini AI models to generate spoken summaries of search results. US users can opt in via Search Labs and, when available, will see an option to create a short audio overview directly on the search results page. The technology aims to provide a convenient method for understanding new topics or multitasking, turning search results into conversational AI podcasts.

Once a user clicks the button to generate the summary, the AI processes information from the Search Engine Results Page (SERP) to create an audio snippet. According to Google, this feature is designed to help users "get a lay of the land" when researching unfamiliar topics. The audio player includes standard controls like play/pause, volume adjustment, and playback speed options. Critically, the audio player also displays links to the websites used in generating the overview, allowing users to delve deeper into specific sources if desired.

While Google emphasizes that Audio Overviews provide links to original sources, concerns remain about the potential impact on website traffic. Some publishers fear that AI-generated summaries might satisfy user intent without them needing to visit the original articles. Google acknowledges the experimental nature of the AI, warning of potential inaccuracies and audio glitches. Users can provide feedback via thumbs-up or thumbs-down ratings, which Google intends to use to refine the feature before broader release. The feature currently works only in English and only for users in the United States.

Recommended read:
References :
  • PCMag Middle East ai: Google pitches Audio Overviews as a 'convenient...way to absorb information,' but it's also another way to kill traffic to the sources of information Google uses to generate these AI podcasts.
  • Search Engine Journal: Google begins experimenting with Audio Overviews in search results. US searchers can opt in to the experiment via Search Labs.
  • The Official Google Blog: A phone screen showing Google search results with a section titled "Search Labs | Audio Overviews" and an audio player.
  • TechCrunch: Google tests Audio Overviews for Search queries
  • blog.google: Get an audio overview of Search results in Labs, then click through to learn more.
  • the-decoder.com: Google launches Audio Overviews in search results
  • www.tomsguide.com: Gemini can turn text into audio overviews — here's how to do it
  • THE DECODER: Google launches Audio Overviews in search results
  • Ars OpenForum: Google begins testing AI-powered Audio Overviews in search results
  • Digital Information World: Google Tests Spoken Summaries in Search Results, But You’ll Have to Ask First
  • www.laptopmag.com: My favorite AI tool just hit Google Search, and it's actually useful — try it yourself
  • www.techradar.com: Here’s why you should be excited about Audio Overviews coming to Google Search

Sana Hassan@MarkTechPost //
Google has recently unveiled significant advancements in artificial intelligence, showcasing its continued leadership in the tech sector. One notable development is an AI model designed for forecasting tropical cyclones. This model, developed through a collaboration between Google Research and DeepMind, is available via the newly launched Weather Lab website. It can predict the path and intensity of hurricanes up to 15 days in advance. The AI system learns from decades of historical storm data, reconstructing past weather conditions from millions of observations and utilizing a specialized database containing key information about storm tracks and intensity.

The tech giant's Weather Lab marks the first time the National Hurricane Center will use experimental AI predictions in its official forecasting workflow. The announcement comes at an opportune time, coinciding with forecasters predicting an above-average Atlantic hurricane season in 2025. This AI model can generate 50 different hurricane scenarios, offering a more comprehensive prediction range than current models, which typically provide forecasts for only 3-5 days. The AI has achieved a 1.5-day improvement in prediction accuracy, equivalent to about a decade's worth of traditional forecasting progress.

Furthermore, Google is experiencing exponential growth in AI usage. Google DeepMind noted that Google's AI usage grew 50 times in one year, reaching 500 trillion tokens per month. Logan Kilpatrick from Google DeepMind discussed Google's transformation from a "sleeping giant" to an AI powerhouse, citing superior compute infrastructure, advanced models like Gemini 2.5 Pro, and a deep talent pool in AI research.

Recommended read:
References :
  • siliconangle.com: Google develops AI model for forecasting tropical cyclones
  • Maginative: Google's AI Can Now Predict Hurricane Paths 15 Days Out — and the Hurricane Center Is Using It

@www.analyticsvidhya.com //
Google has made significant strides in the realm of artificial intelligence, introducing several key advancements. One notable development is the launch of Edge Gallery, an application designed to enable the execution of Large Language Models (LLMs) directly on smartphones. This groundbreaking app eliminates the need for cloud dependency, offering users free access to powerful AI processing capabilities on their personal devices. By shifting processing to the edge, Edge Gallery empowers users with greater control over their data and ensures enhanced privacy and security.

The company has also quietly upgraded Gemini 2.5 Pro, Google's flagship LLM, to boost its performance across coding, reasoning, and response quality. This upgrade addresses prior feedback regarding the model's tone, resulting in more structured and creative outputs. In addition to enhancing its core AI models, Google is expanding access to Project Mariner, an AI-driven browser assistant, to more Ultra subscribers. Project Mariner is designed to interact with the user’s active Chrome tabs via a dedicated extension, enabling it to query and manipulate information from any open webpage.

Furthermore, Google has introduced an open-source full-stack AI agent stack powered by Gemini 2.5 and LangGraph, designed for multi-step web search, reflection, and synthesis. This research agent stack allows AI agents to perform autonomous web searches, validate results, and refine responses, effectively mimicking a human research assistant. This initiative underscores Google's commitment to fostering innovation and collaboration within the AI community by making advanced tools and resources freely available.

Recommended read:
References :
  • Analytics Vidhya: Run LLMs Locally for Free Using Google’s Latest App!
  • Maginative: Google Just Quietly Upgraded Gemini 2.5 Pro
  • www.marktechpost.com: Google Introduces Open-Source Full-Stack AI Agent Stack Using Gemini 2.5 and LangGraph for Multi-Step Web Search, Reflection, and Synthesis
  • TestingCatalog: Google expands Project Mariner access to more Ultra subscribers

Ruben Circelli@PCMag Middle East ai //
References: PCMag Middle East ai
Google is making significant strides in the realm of artificial intelligence with advancements in both video generation and browser assistance. The company's new Veo 3 AI video generator is capable of creating realistic videos from simple text prompts, marking a potentially revolutionary step in generative AI technology. Furthermore, Google is expanding access to Project Mariner, its AI-driven browser assistant, to a wider audience of Ultra plan subscribers, bringing more advanced features to users seeking enhanced web navigation and automation. These developments highlight Google's continued investment in and exploration of AI-powered tools designed to improve productivity and user experience.

The introduction of Veo 3 has sparked both excitement and concern. While the technology is undeniably impressive, with the ability to render finely detailed objects and create realistic audio, it also raises serious questions about the future of authenticity online. The potential for misuse, including the creation of deepfakes, online harassment, and the spread of misinformation, is significant. Experts worry that combining Veo 3's capabilities with weak content restrictions could lead to a catastrophic erosion of truth on the internet, especially once the ability to upload images for video generation is added. The implications of easily creating lifelike videos of individuals saying or doing things they never would are profound and potentially damaging.

In other AI developments, Google is rolling out Project Mariner to more Ultra plan subscribers, positioning it as a browser agent that interacts with open Chrome tabs via a dedicated extension. This allows Mariner to query and manipulate information from webpages, similar to other agent browsers. Users can instruct Mariner through a prompt bar, enabling tasks such as web navigation, hotel booking, and automated searches. However, the tool's frequent permission requests have led to feedback that it can be slow and requires significant manual oversight, limiting its autonomous value. While Google sees Project Mariner as a long-term bet within its AI-powered productivity suite, the immediate benefits may be overshadowed by its limitations.

Recommended read:
References :
  • PCMag Middle East ai: Combining instant photorealistic videos with Google's weak content restrictions is more than a recipe for disaster. It could mean the end of authenticity online forever.

Mark Tyson@tomshardware.com //
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.

OpenAI has also slashed the price of o3 by 80% and o3-pro by 87%, positioning the model as a more accessible option for developers seeking advanced reasoning capabilities. This price adjustment comes at a time when AI providers are competing more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially in science, programming, and business tasks.

O3-pro utilizes the same underlying architecture as o3, but it’s tuned to be more reliable, especially on complex tasks, with better long-range reasoning. The model supports tools like web browsing, code execution, vision analysis, and memory. While the increased complexity can lead to slower response times, OpenAI suggests that the tradeoff is worthwhile for the most challenging questions "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff.”

Recommended read:
References :
  • Maginative: OpenAI’s new o3-pro model is now available in ChatGPT and the API, offering top-tier performance in math, science, and coding—at a dramatically lower price.
  • AI News | VentureBeat: OpenAI's most powerful reasoning model, o3, is now 80% cheaper, making it more affordable for businesses, researchers, and individual developers.
  • Latent.Space: OpenAI just dropped the price of their o3 model by 80% today and launched o3-pro.
  • THE DECODER: OpenAI has lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • Simon Willison's Weblog: OpenAI's Adam Groth explained that the engineers have optimized inference, allowing a significant price reduction for the o3 model.
  • the-decoder.com: OpenAI lowered the price of its o3 language model by 80 percent, CEO Sam Altman said.
  • AI News | VentureBeat: OpenAI released the latest in its o-series of reasoning model that promises more reliable and accurate responses for enterprises.
  • bsky.app: The OpenAI API is back to running at 100% again, plus we dropped o3 prices by 80% and launched o3-pro - enjoy!
  • Sam Altman: We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
  • siliconangle.com: OpenAI’s newest reasoning model o3-pro surpasses rivals on multiple benchmarks, but it’s not very fast
  • bsky.app: OpenAI has launched o3-pro. The new model is available to ChatGPT Pro and Team subscribers and in OpenAI’s API now, while Enterprise and Edu subscribers will get access next week. If you use reasoning models like o1 or o3, try o3-pro, which is much smarter and better at using external tools.
  • The Algorithmic Bridge: OpenAI o3-Pro Is So Good That I Can’t Tell How Good It Is

@www.cnbc.com //
OpenAI has reached a significant milestone, achieving $10 billion in annual recurring revenue (ARR). This surge in revenue is primarily driven by the popularity and adoption of its ChatGPT chatbot, along with its suite of business products and API services. The ARR figure excludes licensing revenue from Microsoft and other large one-time deals. This achievement comes roughly two and a half years after the initial launch of ChatGPT, demonstrating the rapid growth and commercial success of OpenAI's AI technologies.

Despite the financial success, OpenAI is also grappling with the complexities of AI safety and responsible use. Concerns have been raised about the potential for AI models to generate malicious content and be exploited for cyberattacks. The company is actively working to address these issues, including clamping down on ChatGPT accounts linked to state-sponsored cyberattacks. Furthermore, the company will now retain deleted ChatGPT conversations to comply with a court order.

In related news, a security vulnerability was discovered in Google Accounts, potentially exposing users to phishing and SIM-swapping attacks. The vulnerability allowed researchers to brute-force any Google account's recovery phone number by knowing the profile name and an easily retrieved partial phone number. Google has since patched the bug. Separately, OpenAI is facing a court order to retain deleted ChatGPT conversations in connection with a copyright lawsuit filed by The New York Times, which alleges that OpenAI used its content without permission. The company plans to appeal the ruling; in the meantime, the retained data will be stored separately in a secure system and accessed only to meet legal obligations.

Recommended read:
References :
  • SiliconANGLE: OpenAI reaches $10B in annual recurring revenue as ChatGPT adoption accelerates
  • Simon Willison's Weblog: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth
  • TechCrunch: OpenAI claims to have hit $10B in annual revenue
  • www.cnbc.com: OpenAI hits $10 billion in annual recurring revenue fueled by ChatGPT growth.
  • www.digitimes.com: OpenAI annualized revenue doubles to US$10B, eyes profitability by 2029
  • www.bleepingcomputer.com: A vulnerability allowed researchers to brute-force any Google account's recovery phone number simply by knowing their profile name and an easily retrieved partial phone number, creating a massive risk for phishing and SIM-swapping attacks.
  • Analytics India Magazine: AI news updates—OpenAI’s Annual Revenue Touches $10 Billion, Up 81.8% From Last Year
  • Dataconomy: OpenAI confirms $10B annual revenue milestone
  • The Tech Portal: OpenAI doubles annual revenue to $10Bn from $5.5Bn in December 2024
  • Quartz: OpenAI says it's making $10 billion in annual recurring revenue as ChatGPT grows

@github.com //
Google is enhancing its AI Hypercomputer with optimized recipes designed to streamline the deployment of large AI models like Meta's Llama4 and DeepSeek. This move aims to alleviate the resource-intensive challenges faced by developers and ML engineers when working with these advanced models. The new recipes will facilitate the use of Llama4 Scout and Maverick models, as well as DeepSeek models, on Google Cloud Trillium TPUs and A3 Mega/Ultra GPUs, making these powerful AI tools more accessible and efficient to deploy.

JetStream, Google’s high-throughput inference engine for LLMs on XLA devices, now supports Llama-4-Scout-17B-16E and Llama-4-Maverick-17B-128E inference on Trillium TPUs. New recipes provide steps to deploy these models using JetStream and MaxText on a Trillium TPU GKE cluster. Pathways on Google Cloud simplifies large-scale machine learning computations by enabling a single JAX client to orchestrate workloads across multiple large TPU slices. MaxText now features reference implementations for Llama4 and DeepSeek, offering detailed guidance on checkpoint conversion, training, and decoding processes.

Developers can find these new recipes and resources on the AI Hypercomputer GitHub repository. These optimized recipes promise to simplify the deployment and resource management of Llama4 and DeepSeek models, enabling users to harness the full potential of these advanced AI technologies on Google Cloud's AI Hypercomputer platform. This initiative underscores Google's commitment to providing a robust AI infrastructure and fostering innovation in the open-source AI community.

Recommended read:
References :
  • AI & Machine Learning: Accelerate your gen AI: Deploy Llama4 & DeepSeek on AI Hypercomputer with new recipes
  • github.com: GitHub repository containing TPU recipes for deploying Llama-4-Scout-17B-16E.
  • github.com: GitHub - AI-Hypercomputer/maxtext: High throughput and scalable foundation model training.

Alexey Shabanov@TestingCatalog //
Google is aggressively enhancing its Gemini platform with a suite of new features, including the integration of Imagen 4 for improved image generation, expanded Canvas capabilities, and a dedicated Enterprise mode. The Enterprise mode introduces a toggle to separate professional and personal workflows, providing business users with clearer boundaries and better data governance. Gemini is also gaining the ability to generate content from uploaded images, indicating a more creator-focused approach to multimodal generation. These additions aim to make Gemini a more comprehensive and versatile workspace for generative AI tasks.

Gemini's Canvas, a workspace for organizing and presenting ideas, is also receiving a significant upgrade. Users will soon be able to auto-generate infographics, timelines, mindmaps, full presentations, and even web pages directly within the platform. One particularly notable feature in development is the ability for users to describe their applications, prompting Gemini to automatically build UI visualizations for the underlying data. These updates demonstrate Google's strategy of bundling a broad set of creative tools for both individuals and organizations, continuously iterating on functionality to stay competitive.

The new Gemini 2.5 Pro model is now available via Google AI Studio and Vertex AI, and the company claims it is superior in coding and math. Google says the Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance, with the new version improving by 24 points in LMArena and by 35 points in WebDevArena, where it currently tops the leaderboard. The model is priced at $1.25 per million input tokens (without caching) and $10 per million output tokens, and shows improved performance in coding, reasoning, and science and math across key benchmarks.
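At those stated rates, per-request cost is simple arithmetic; a quick sketch (the token counts in the example are arbitrary):

```python
# Quoted Gemini 2.5 Pro rates, USD per million tokens.
INPUT_PRICE = 1.25    # input tokens, without caching
OUTPUT_PRICE = 10.00  # output tokens

def request_cost(input_tokens, output_tokens):
    """Cost in USD for a single request at the quoted rates."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# e.g. a 100k-token prompt with a 10k-token response:
cost = request_cost(100_000, 10_000)
print(f"${cost:.3f}")  # → $0.225
```

The 8:1 output-to-input price ratio means long responses, not long prompts, dominate the bill.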

Recommended read:
References :
  • TestingCatalog: Google to bring Canvas upgrades, image-to-video and Enterprise mode to Gemini
  • siliconangle.com: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • the-decoder.com: Google rolls out new features for AI Mode and Gemini app

Amanda Caswell@Latest from Tom's Guide //
Google has introduced "Scheduled Actions" to its Gemini app, a feature aimed at enhancing user productivity by automating tasks. This new capability, announced during Google I/O and now rolling out to select Android and iOS users, allows Gemini to handle recurring or time-specific tasks without repeated prompts. Users can instruct Gemini to perform actions such as generating weekly blog brainstorms, delivering daily news digests, or setting one-time event reminders. With Scheduled Actions, Gemini is evolving to become a more proactive AI assistant, providing users with a hands-off experience.

The Scheduled Actions feature enables users to automate prompts within the Gemini app. Examples include setting up a daily calendar and email summary, receiving blog post ideas on a recurring schedule, or getting reminders for specific appointments. Once a task is scheduled, it can be easily managed from the Scheduled Actions page within the Gemini settings. This functionality positions Gemini as a more competitive alternative to AI technologies with similar features, such as ChatGPT, by offering a personalized experience to help users "stay informed, inspired, and on track."

Google is also expanding its AI capabilities in other areas. AI Mode in Google Search now displays charts and tables, particularly for finance-related queries drawing on data from Google Finance. Additionally, users with Google AI Pro, Ultra, or some Workspace plans can use voice commands to set "scheduled actions" within the Gemini app, and these scheduled tasks are automatically integrated into Google Calendar or Gmail. Together, these additions make for a more comprehensive AI experience.

Recommended read:
References :
  • The Official Google Blog: Plan ahead with scheduled actions in the Gemini app.
  • THE DECODER: Google rolls out new features for AI Mode and Gemini app
  • www.tomsguide.com: Google has just rolled out a new Gemini feature to select users that allows you to schedule actions, which could be a game-changer for this AI tech. Here's how it works.
  • PCMag Middle East ai: Need Help Getting Organized? You Can Now Schedule Actions in Google Gemini
  • Gadgets 360: Gemini App Is Getting a New Scheduled Actions Feature on iOS and Android
  • Mashable India tech: Google Gemini’s New Tool Brings It Closer to ChatGPT’s Assistant Capabilities
  • www.zdnet.com: Google Gemini will let you schedule recurring tasks now, like ChatGPT - here's how
  • Maginative: Google Just Quietly Upgraded Gemini 2.5 Pro
  • www.techradar.com: Gemini's new Scheduled Actions feature puts catching up with ChatGPT on its dayplanner

@cloud.google.com //
References: AI & Machine Learning
Alpian, a pioneering Swiss private bank, is revolutionizing the financial services industry by integrating Google's generative AI into its core operations. As the first fully cloud-native private bank in Switzerland, Alpian is embracing digital innovation to offer a seamless and high-value banking experience, balancing personal wealth management with digital convenience. This strategic move positions Alpian at the forefront of the digital age, setting a new benchmark for agility, scalability, and compliance capabilities within the tightly regulated Swiss financial landscape. Alpian's partnership with Google, leveraging tools like Gemini, enables developers to interact with infrastructure through simple conversational commands, significantly reducing deployment times.

Alpian faced the challenge of innovating within the strict regulatory environment of the Swiss banking system, overseen by FINMA. The integration of generative AI required meticulous attention to compliance and security. By implementing a platform that utilizes generative AI, Alpian has created a defined scope where engineers can autonomously interact with IT elements using a simplified conversational interface. This approach allows teams to focus on innovation rather than repetitive tasks, accelerating deployment times from days to hours and empowering them to develop cutting-edge services while adhering to stringent compliance standards.

The benefits of this generative AI integration extend beyond internal workflows, directly enhancing the client experience. Faster deployment times translate into quicker access to new features, such as tailored wealth management tools and enhanced security measures. Furthermore, Google’s NotebookLM, which now allows users to publicly share notebooks with a link, can be used to provide clients with AI-generated research summaries or briefing documents. This initiative not only optimizes internal operations but also establishes a new benchmark for operational excellence in the banking sector, showcasing the transformative potential of AI in redefining private banking for the 21st century.

Recommended read:
References :

Emilia David@AI News | VentureBeat //
Google's Gemini 2.5 Pro is making waves in the AI landscape, with claims of superior coding performance compared to leading models like DeepSeek R1 and Grok 3 Beta. The updated Gemini 2.5 Pro, currently in preview, is touted to deliver faster and more creative responses, particularly in coding and reasoning tasks. Google highlighted improvements across key benchmarks such as AIDER Polyglot, GPQA, and HLE, noting a significant Elo score jump since the previous version. This newest iteration, referred to as Gemini 2.5 Pro Preview 06-05, builds upon the I/O edition released earlier in May, promising even better performance and enterprise-scale capabilities.

Google is also planning several enhancements to the Gemini platform. These include upgrades to Canvas, Gemini’s workspace for organizing and presenting ideas, adding the ability to auto-generate infographics, timelines, mindmaps, full presentations, and web pages. There are also plans to integrate Imagen 4, which enhances image generation capabilities, image-to-video functionality, and an Enterprise mode, which offers a dedicated toggle to separate professional and personal workflows. This Enterprise mode aims to provide business users with clearer boundaries and improved data governance within the platform.

In addition to its coding prowess, Gemini 2.5 Pro boasts native audio capabilities, enabling developers to build richer and more interactive applications. Google emphasizes its proactive approach to safety and responsibility, embedding SynthID watermarking technology in all audio outputs to ensure transparency and identifiability of AI-generated audio. Developers can explore these native audio features through the Gemini API in Google AI Studio or Vertex AI, experimenting with audio dialog and controllable speech generation. Google DeepMind is also exploring ways for AI to take over mundane email chores, with CEO Demis Hassabis envisioning an AI assistant capable of sorting, organizing, and responding to emails in a user's own voice and style.

Recommended read:
References :
  • AI News | VentureBeat: Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance
  • learn.aisingapore.org: Gemini 2.5’s native audio capabilities
  • Kyle Wiggers: Google says its updated Gemini 2.5 Pro AI model is better at coding
  • www.techradar.com: Google upgrades Gemini 2.5 Pro's already formidable coding abilities
  • SiliconANGLE: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • siliconangle.com: SiliconAngle reports on Google's release of an updated Gemini 2.5 Pro model, highlighting its claimed superiority in coding and math.

Tulsee Doshi@The Official Google Blog //
Google has launched an upgraded preview of Gemini 2.5 Pro, touting it as their most intelligent model yet. Building upon the version revealed in May, this updated AI demonstrates significant improvements in coding capabilities. One striking example of its advanced functionality is its ability to generate intricate images, such as a "pretty solid pelican riding a bicycle."

The model's behavior under ethical stress-testing has also drawn attention. When run against SnitchBench, a tool designed to test the ethical boundaries of AI models, Gemini 2.5 Pro notably "tipped off both the feds and the WSJ and NYTimes." This behavior underscores the safety protocols integrated into the new model.

The rapid development and release of Gemini 2.5 Pro reflect Google's increasing confidence in its AI technology. The company emphasizes that this iteration offers substantial improvements over its predecessors, solidifying its position as a leading AI model. Developers and enthusiasts alike are encouraged to try the latest Gemini 2.5 Pro before its general release to experience its improved capabilities firsthand.

Recommended read:
References :
  • Kyle Wiggers: Google says its updated Gemini 2.5 Pro AI model is better at coding
  • The Official Google Blog: We’re introducing an upgraded preview of Gemini 2.5 Pro, our most intelligent model yet. Building on the version we released in May and showed at I/O, this model will be…
  • THE DECODER: Google has rolled out another update to its flagship AI model, Gemini 2.5 Pro. The latest version brings modest improvements across a range of benchmarks and maintains top positions on tests like LMArena and WebDevArena.
  • www.zdnet.com: The flagship model's rapid evolution reflects Google's growing confidence in its AI offerings.
  • bsky.app: New Gemini 2.5 Pro is out - gemini-2.5-pro-preview-06-05 It made me a pretty solid pelican riding a bicycle, AND it tipped off both the feds and the WSJ and NYTimes when I tried running SnitchBench against it https://simonwillison.net/2025/Jun/5/gemini-25-pro-preview-06-05/
  • Simon Willison: New Gemini 2.5 Pro is out - gemini-2.5-pro-preview-06-05 It made me a pretty solid pelican riding a bicycle, AND it tipped off both the feds and the WSJ and NYTimes when I tried running SnitchBench against it
  • AI News | VentureBeat: Google claims Gemini 2.5 Pro preview beats DeepSeek R1 and Grok 3 Beta in coding performance
  • www.techradar.com: Google upgrades Gemini 2.5 Pro's already formidable coding abilities
  • SiliconANGLE: Google revamps Gemini 2.5 Pro again, claiming superiority in coding and math
  • the-decoder.com: Google Rolls Out Modest Improvements to Gemini 2.5 Pro
  • www.marktechpost.com: Google Introduces Open-Source Full-Stack AI Agent Stack Using Gemini 2.5 and LangGraph for Multi-Step Web Search, Reflection, and Synthesis
  • Maginative: Maginative article about how Google quietly upgraded Gemini 2.5 Pro.
  • Stack Overflow Blog: Ryan and Ben welcome Tulsee Doshi and Logan Kilpatrick from Google's DeepMind to discuss the advanced capabilities of the new Gemini 2.5, the importance of feedback loops for model improvement and reducing hallucinations, the necessity of great data for advancements, and enhancing developer experience through tool integration.

@workspaceupdates.googleblog.com //
Google is significantly expanding the integration of its Gemini AI model across the Google Workspace suite. A key focus is enhancing Google Chat with AI-powered features designed to improve user efficiency and productivity. One notable addition is the ability for Gemini to provide summaries of unread conversations directly within the Chat home view. This feature, which initially launched last year, has been expanded to support four additional languages: French, Italian, Japanese, and Korean, making it more accessible to a global user base. Users can activate the "Summarize" button upon navigating to a conversation to receive a quick, bulleted synopsis of the message content, allowing for rapid review of recent activity and prioritization of important conversations.

The new home-view summaries feature in Google Chat aims to streamline the user experience and help users find what they need faster. It leverages Gemini's ability to quickly process and condense information, giving users a concise overview of their active conversations. To access these summaries, users need to ensure that smart features and personalization are turned on in their Google Workspace settings; this can be managed by administrators in the Admin console or by individual users through their personal settings. The rollout is gradual, with the feature becoming visible to both Rapid Release and Scheduled Release domains over a 15-day period starting May 30, 2025.

Google is also exploring the potential of AI to revolutionize email management. Demis Hassabis, head of Google DeepMind, has expressed a desire to develop a "next-generation email" system that can intelligently sort through inboxes, respond to routine emails in a user's personal style, and automate simpler decisions. This initiative aims to alleviate the "tyranny of the email inbox" and free up users' time for more important tasks. Hassabis envisions an AI assistant that not only manages emails but also protects users' attention from other algorithms competing for their focus, ultimately serving the individual and enriching their life.

Recommended read:
References :
  • Google Workspace Updates: Preview summaries in the Google Chat home view with the help of Gemini in four additional languages
  • www.tomsguide.com: This article discusses the integration of Google's Gemini AI with Gmail and highlights a key discovery that surprises the author.
  • www.theguardian.com: Google working on AI email tool that can ‘answer in your style’

@cloud.google.com //
Google is pushing forward with AI integration across its cloud services and applications, introducing new capabilities to empower developers and enhance user experiences. A key development is the announcement of new MCP (Model Context Protocol) integrations to Google Cloud Databases. This aims to enable AI-assisted development by making it easier to connect databases to AI assistants within Integrated Development Environments (IDEs).

This new MCP integration leverages the MCP Toolbox for Databases, an open-source server that facilitates connections between generative AI agents and enterprise data. Supporting various databases including BigQuery, AlloyDB, Cloud SQL, Spanner, and even self-managed options like PostgreSQL and MySQL, the Toolbox allows developers to use MCP-compatible AI assistants to write code, design schemas, refactor code, generate testing data, and explore databases. Pre-built tools within the Toolbox streamline integration with Google Cloud databases directly from within preferred IDEs like VSCode, Claude Code, and Cursor.
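As a rough sketch of how such a setup looks, the open-source MCP Toolbox is driven by a YAML file that declares data sources and the SQL-backed tools exposed to the assistant. The project, instance, database, credentials, and tool definition below are hypothetical placeholders, and the exact schema and kind names may vary between Toolbox versions; consult the Toolbox documentation for the authoritative format.

```yaml
# Hypothetical tools.yaml for the MCP Toolbox for Databases.
# All names and values below are illustrative placeholders.
sources:
  my-cloudsql-pg:
    kind: cloud-sql-postgres
    project: my-gcp-project
    region: us-central1
    instance: my-instance
    database: appdb
    user: toolbox-user
    password: ${DB_PASSWORD}
tools:
  list-recent-orders:
    kind: postgres-sql
    source: my-cloudsql-pg
    description: Return the ten most recent orders.
    statement: SELECT id, customer, total FROM orders ORDER BY created_at DESC LIMIT 10;
```

An MCP-compatible assistant in the IDE then sees `list-recent-orders` as a callable tool, without the model ever holding raw database credentials.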

Furthering its AI integration efforts, Google has made NotebookLM shareable via a simple link. NotebookLM, an AI-powered research notebook, allows users to compile research, summarize PDFs, and generate FAQs from uploaded source material. This update enables users to share their AI-generated content, such as summaries and briefing documents, with others, fostering collaboration and knowledge sharing. Viewers can interact with the AI-generated content by asking questions, though they cannot modify the source material. This new feature enhances NotebookLM's utility for creating study guides, product documentation, and strategic frameworks in a semi-interactive format.

Recommended read:
References :
  • AI & Machine Learning: Announcing new MCP integrations to Google Cloud Databases to enable AI-assisted development
  • Maginative: Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link

Chris McKay@Maginative //
Google's AI research notebook, NotebookLM, has introduced a significant upgrade that enhances collaboration by allowing users to publicly share their AI-powered notebooks with a simple link. This new feature, called Public Notebooks, enables users to share their research summaries and audio overviews generated by AI with anyone, without requiring sign-in or permissions. This move aims to transform NotebookLM from a personal research tool into an interactive, AI-powered knowledge hub, facilitating easier distribution of study guides, project briefs, and more.

The public sharing feature provides viewers with the ability to interact with AI-generated content like FAQs and overviews, as well as ask questions in chat. However, they cannot edit the original sources, ensuring the preservation of ownership while enabling discovery. To share a notebook, users can click the "Share" button, switch the setting to "Anyone with the link," and copy the link. This streamlined process is similar to sharing Google Docs, making it intuitive and accessible for users.

This upgrade is particularly beneficial for educators, startups, and nonprofits. Teachers can share curated curriculum summaries, startups can distribute product manuals, and nonprofits can publish donor briefing documents without the need to build a dedicated website. By enabling easier sharing of AI-generated notes and audio overviews, Google is demonstrating how generative AI can be integrated into everyday productivity workflows, making NotebookLM a more grounded tool for sense-making of complex material.

Recommended read:
References :
  • Maginative: Google’s NotebookLM Now Lets You Share AI-Powered Notebooks With a Link
  • The Official Google Blog: NotebookLM is adding a new way to share your own notebooks publicly.
  • PCMag Middle East ai: Google Makes It Easier to Share Your NotebookLM Docs, AI Podcasts
  • AI & Machine Learning: How Alpian is redefining private banking for the digital age with gen AI
  • venturebeat.com: Google quietly launches AI Edge Gallery, letting Android phones run AI without the cloud
  • TestingCatalog: Google’s Kingfall model briefly goes live on AI Studio before lockdown
  • shellypalmer.com: NotebookLM, one of Google's most viral AI products, just got a really useful upgrade: users can now publicly share notebooks with a link.