News from the AI & ML world
Ellie Ramirez-Camara @ Data Phoenix
Google has recently launched an experimental feature that leverages its Gemini models to create short audio overviews for certain search queries. This new feature aims to give users an audio option for grasping the basics of unfamiliar topics, which is particularly useful when multitasking or for those who prefer auditory learning. Users who join the experiment will see the option to generate an audio overview on search results pages that Google determines would benefit from this format.
When an audio overview is ready, it will be presented to the user with an audio player that offers basic controls such as volume, playback speed, and play/pause buttons. Significantly, the audio player also displays relevant web pages, allowing users to easily access more in-depth information on the topic being discussed in the overview. This feature builds upon Google's earlier work with audio overviews in NotebookLM and Gemini, where it allowed for the creation of podcast-style discussions and audio summaries from provided sources.
Google is also experimenting with a new feature called Search Live, which enables users to have real-time verbal conversations with Google's Search tools and receive interactive responses. This Gemini-powered AI simulates a friendly and knowledgeable human, inviting users to literally talk to their search bar. The AI doesn't stop listening after a single question but engages in a full dialogue, continuing to work in the background even when the user leaves the app. Google refers to the underlying technique as "query fan-out": instead of answering only the question asked, the system also quietly issues related queries, drawing in more diverse sources and perspectives.
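Google has not published implementation details for query fan-out, but the idea described above can be sketched in a few lines: expand a query into related queries, run them concurrently against a search backend, and merge the results. Everything below is illustrative; `related_queries` and `search` are hypothetical stand-ins (a real system would use a language model for expansion and an actual retrieval service).

```python
from concurrent.futures import ThreadPoolExecutor

def related_queries(query: str) -> list[str]:
    # Hypothetical expansion step: a production system would generate
    # follow-up queries with a language model; fixed templates stand in here.
    return [query, f"{query} explained", f"{query} examples", f"how does {query} work"]

def search(query: str) -> list[str]:
    # Stand-in for a search backend; returns fake result identifiers.
    return [f"result:{query}:{i}" for i in range(2)]

def fan_out(query: str) -> list[str]:
    # Issue the original query plus its expansions in parallel, then merge
    # the result batches while preserving order and removing duplicates.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(search, related_queries(query))
    seen: set[str] = set()
    merged: list[str] = []
    for batch in batches:
        for result in batch:
            if result not in seen:
                seen.add(result)
                merged.append(result)
    return merged

print(len(fan_out("query fan-out")))  # 4 sub-queries x 2 results = 8
```

The key design point is the merge step: because fan-out multiplies the number of retrieval calls, a real system must deduplicate and rank the combined results before presenting them as a single answer.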
Additionally, Gemini on Android can now identify songs, matching functionality previously offered by Google Assistant. Users can ask Gemini, "What song is this?" and the chatbot will trigger Google's Song Search interface, which can recognize music from ambient audio, a playlist, or even a hummed tune. However, unlike the seamless integration of Google Assistant's Now Playing feature, this song identification process is not fully native to Gemini. When initiated, it launches a full-screen listening interface from the Google app, which feels a bit clunky and breaks out of Gemini Live's conversational experience.
References:
- Data Phoenix: Google's newest experiment brings short audio overviews to some Search queries
- the-decoder.com: Google is rolling out a new feature called Audio Overviews in its Search Labs.
- thetechbasic.com: Google has begun rolling out Search Live in AI Mode for its Android and iOS apps in the United States. This new feature invites users to speak naturally and receive real-time, spoken answers powered by a custom version of Google's Gemini model.
- chromeunboxed.com: The transition from Google Assistant to Gemini, while exciting in many ways, has come with a few frustrating growing pains. As Gemini gets smarter with complex tasks, we’ve sometimes lost the simple, everyday features we relied on with Assistant.
- www.zdnet.com: Your Android phone just got a major Gemini upgrade for music fans - and it's free