News from the AI & ML world

DeeperML - #search

Ellie Ramirez-Camara@Data Phoenix //
Google has recently launched an experimental feature that leverages its Gemini models to create short audio overviews for certain search queries. The feature gives users an audio option for grasping the basics of an unfamiliar topic, which is particularly useful when multitasking or for those who prefer auditory learning. Users who join the experiment will see the option to generate an audio overview directly on the search results page for queries that Google determines would benefit from this format.

When an audio overview is ready, it is presented in an audio player with basic controls such as volume, playback speed, and play/pause. Significantly, the player also displays links to relevant web pages, allowing users to easily access more in-depth information on the topic discussed in the overview. The feature builds on Google's earlier work with audio overviews in NotebookLM and Gemini, which let users create podcast-style discussions and audio summaries from provided sources.

Google is also experimenting with a new feature called Search Live, which lets users hold real-time spoken conversations with Google’s Search tools and receive interactive responses. The Gemini-powered AI simulates a friendly, knowledgeable human, effectively inviting users to talk to their search bar. It doesn't stop listening after a single question but carries on a full dialogue, and it keeps working in the background even when the user leaves the app. Google calls the underlying technique “query fan-out”: instead of answering only the literal question, the system also quietly issues related queries, drawing in more diverse sources and perspectives.
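
To make the fan-out idea concrete, here is a minimal illustrative sketch only, not Google's actual implementation: expand_query and search_web below are hypothetical stand-ins for a query-expansion model and a search backend, since the real system's internals are not public.

    # Illustrative sketch of the "query fan-out" idea described above.
    # Not Google's implementation: expand_query() and search_web() are
    # hypothetical placeholders for a query-expansion model and a search backend.

    def expand_query(question: str) -> list[str]:
        # A real system would ask an LLM for related angles; this toy version hard-codes a few.
        return [question, f"{question} pros and cons", f"{question} for beginners"]

    def search_web(query: str) -> list[dict]:
        # Placeholder search backend returning one fake ranked result per query.
        return [{"title": f"Result for '{query}'",
                 "url": "https://example.com/search?q=" + query.replace(" ", "+")}]

    def fan_out(question: str) -> list[dict]:
        # Run the original question plus related sub-queries, then merge and de-duplicate.
        merged, seen = [], set()
        for sub_query in expand_query(question):
            for hit in search_web(sub_query):
                if hit["url"] not in seen:
                    seen.add(hit["url"])
                    merged.append(hit)
        return merged

    if __name__ == "__main__":
        for hit in fan_out("how do heat pumps work"):
            print(hit["title"])

The point of the sketch is breadth: several related queries are searched and merged, so the final answer can draw on a wider range of sources than the single literal query would surface.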

Additionally, Gemini on Android can now identify songs, mirroring functionality previously offered by Google Assistant. Users can ask Gemini, “What song is this?” and the chatbot triggers Google’s Song Search interface, which can recognize music playing in the environment, in a playlist, or even a tune the user hums. However, unlike the seamless integration of Google Assistant’s Now Playing feature, the song identification process is not fully native to Gemini: it launches a full-screen listening interface from the Google app, which feels a bit clunky and doesn't stay within Gemini Live’s conversational experience.



References :
  • Data Phoenix: Google's newest experiment brings short audio overviews to some Search queries
  • the-decoder.com: Google is rolling out a new feature called Audio Overviews in its Search Labs.
  • thetechbasic.com: Google has begun rolling out Search Live in AI Mode for its Android and iOS apps in the United States. This new feature invites users to speak naturally and receive real‑time, spoken answers powered by a custom version of Google’s Gemini model. Search Live combines the conversational strengths of Gemini with the full breadth of […]
  • chromeunboxed.com: The transition from Google Assistant to Gemini, while exciting in many ways, has come with a few frustrating growing pains. As Gemini gets smarter with complex tasks, we’ve sometimes lost the simple, everyday features we relied on with Assistant.
  • Latest news: Your Android phone just got a major Gemini upgrade for music fans - and it's free
Classification:
Matt G.@Search Engine Journal //
Google has launched Audio Overviews in Search Labs, introducing a new way for users to consume information hands-free and on-the-go. This experimental feature utilizes Google's Gemini AI models to generate spoken summaries of search results. US users can opt in via Search Labs and, when available, will see an option to create a short audio overview directly on the search results page. The technology aims to provide a convenient method for understanding new topics or multitasking, turning search results into conversational AI podcasts.

Once a user clicks the button to generate the summary, the AI processes information from the Search Engine Results Page (SERP) to create an audio snippet. According to Google, this feature is designed to help users "get a lay of the land" when researching unfamiliar topics. The audio player includes standard controls like play/pause, volume adjustment, and playback speed options. Critically, the audio player also displays links to the websites used in generating the overview, allowing users to delve deeper into specific sources if desired.

While Google emphasizes that Audio Overviews provide links to original sources, concerns remain about the potential impact on website traffic. Some publishers fear that AI-generated summaries might satisfy user intent without them needing to visit the original articles. Google acknowledges the experimental nature of the AI, warning of potential inaccuracies and audio glitches. Users can provide feedback via thumbs-up or thumbs-down ratings, which Google intends to use to refine the feature before broader release. The feature currently works only in English and only for users in the United States.



References :
  • PCMag Middle East ai: Google pitches Audio Overviews as a 'convenient...way to absorb information,' but it's also another way to kill traffic to the sources of information Google uses to generate these AI podcasts.
  • Search Engine Journal: Google begins experimenting with Audio Overviews in search results. US searchers can opt in to the experiment via Search Labs.
  • TechCrunch: Google tests Audio Overviews for Search queries
  • blog.google: Get an audio overview of Search results in Labs, then click through to learn more.
  • the-decoder.com: Google launches Audio Overviews in search results
  • www.tomsguide.com: Gemini can turn text into audio overviews — here's how to do it
  • Ars OpenForum: Google begins testing AI-powered Audio Overviews in search results
  • Digital Information World: Google Tests Spoken Summaries in Search Results, But You’ll Have to Ask First
  • www.laptopmag.com: My favorite AI tool just hit Google Search, and it's actually useful — try it yourself
  • www.techradar.com: Here’s why you should be excited about Audio Overviews coming to Google Search
Classification:
  • HashTags: #AI #Google #AudioOverviews
  • Company: Google
  • Target: Search Users
  • Product: Search
  • Feature: Audio AI Overviews
  • Type: AI
  • Severity: Informative
@www.analyticsvidhya.com //
Google is rapidly evolving its search capabilities with the introduction of AI Mode, powered by the Gemini 2.5 model. This new mode aims to transform how users interact with the web, moving beyond traditional search results to provide more comprehensive and AI-driven experiences. AI Mode, which launched for all Google users recently, includes AI Overview, Deep Search, Search Live, and agentic capabilities through Project Mariner, signifying a fundamental shift in Google's approach to search.

The core distinction is between AI Overview and AI Mode. AI Overview, introduced in May 2024, uses AI to summarize information from the top search results into concise paragraphs. AI Mode builds on this by integrating advanced features and the more powerful Gemini 2.5 model, which improves the accuracy and depth of the summaries and opens the door to more interactive and dynamic search experiences.

Beyond search, Google is also revolutionizing content creation with its upgraded Canvas, powered by Gemini 2.5 Pro. Canvas allows users to transform ideas into apps, quizzes, podcasts, and visuals without needing any code. This "vibe-coding" capability enables users to create functional applications through natural conversation with the AI, significantly lowering the barrier to software development. The new Canvas is accessible to all Gemini users, with Pro and Ultra subscribers gaining access to the Gemini 2.5 Pro model and a larger context window, making it easier than ever to prototype and share interactive content.



References :
  • Analytics Vidhya: Google Search’s Two New AI Features: AI Overview and AI Mode
  • John Werner: Google’s New AI Mode Threatens The Traditional Internet
  • www.laptopmag.com: What is Google's AI Mode, and how will it change search forever?
Classification: