News from the AI & ML world

DeeperML - #googlesearch

@learn.aisingapore.org //
Google is significantly enhancing its search capabilities through deeper integration of artificial intelligence. Google Search Console will now display data related to AI Mode performance, offering insights into how AI impacts search visibility, although detailed breakdowns will not be available. These changes reflect Google's ongoing efforts to incorporate AI into various aspects of its platform, aiming to provide users with more advanced and intuitive search experiences.

Google is also tackling the challenge of content authenticity in the age of AI with the introduction of SynthID Detector, a verification portal designed to identify content created using Google's AI tools. This tool aims to provide transparency in the rapidly evolving media landscape by allowing users to upload media files and scan them for SynthID watermarks. If detected, the portal highlights the portions of the content most likely to be watermarked, helping to distinguish between AI-generated and original content. This initiative builds upon Google's earlier work with SynthID, which embeds imperceptible watermarks into AI-generated content to minimize misinformation and misattribution.

Beyond search and content verification, Google is expanding its AI integration into new areas, showcasing Android XR glasses powered by Gemini. This development highlights Google's vision for the future of augmented reality and the potential of AI to enhance user experiences in wearable technology. The company recently unveiled updates to the Gemini app, including access to Imagen 4 and Veo 3, an AI video model widely regarded as one of the most capable available. These advances underscore Google's commitment to remaining at the forefront of AI innovation and its ambition to seamlessly integrate AI across its ecosystem.


Eric Hal@techradar.com //
Google I/O 2025 saw the unveiling of 'AI Mode' for Google Search, signaling a significant shift in how the company approaches information retrieval and user experience. The new AI Mode, powered by the Gemini 2.5 model, is designed to offer more detailed results, personal context, and intelligent assistance. This upgrade aims to compete directly with the capabilities of AI chatbots like ChatGPT, providing users with a more conversational and comprehensive search experience. The rollout has commenced in the U.S. for both the browser version of Search and the Google app, although availability in other countries remains unconfirmed.

AI Mode brings several key features to the forefront, including Deep Search, Live Visual Search, and AI-powered agents. Deep Search allows users to delve into topics with unprecedented depth, running hundreds of searches simultaneously to generate expert-level, fully-cited reports in minutes. With Search Live, users can leverage their phone's camera to interact with Search in real-time, receiving context-aware responses from Gemini. Google is also bringing agentic capabilities to Search, allowing users to perform tasks like booking tickets and making reservations directly through the AI interface.

Google’s revamp of its AI search service appears to be a response to the growing popularity of AI-driven search experiences offered by companies like OpenAI and Perplexity. According to Gartner analyst Chirag Dekate, evidence suggests a greater reliance on search and AI-infused search experiences. As AI Mode rolls out, Google is encouraging website owners to optimize their content for AI-powered search by creating unique, non-commodity content and ensuring that their sites meet technical requirements and provide a good user experience.

References:
  • Search Engine Journal: Google's new AI Mode in Search, integrating Gemini 2.5, aims to enhance user interaction by providing more conversational and comprehensive responses.
  • www.techradar.com: Google just got a new 'Deep Think' mode – and 6 other upgrades
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.tomsguide.com: Google Search gets an AI tab — here’s what it means for your searches
  • AI News | VentureBeat: Inside Google’s AI leap: Gemini 2.5 thinks deeper, speaks smarter and codes faster
  • Search Engine Journal: Google Gemini upgrades include Chrome integration, Live visual tools, and enhanced 2.5 models. Learn how these AI advances could reshape your marketing strategy.
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent models are getting even better
  • learn.aisingapore.org: Updates to Gemini 2.5 from Google DeepMind
  • THE DECODER: Google upgrades Gemini 2.5 Pro with a new Deep Think mode for advanced reasoning abilities
  • www.techradar.com: I've been using Google's new AI mode for Search – here's how to master it
  • www.theguardian.com: Search engine revamp and Gemini 2.5 introduced at conference in latest showing tech giant is all in on AI
  • www.analyticsvidhya.com: Google I/O 2025: AI Mode on Google Search, Veo 3, Imagen 4, Flow, Gemini Live, and More
  • techvro.com: Google AI Mode Promises Deep Search and Goes Beyond AI Overviews
  • THE DECODER: Google pushes AI-powered search with agents, multimodality, and virtual shopping
  • felloai.com: Google I/O 2025 Recap With All The Jaw-Dropping AI Announcements
  • LearnAI: Gemini as a universal AI assistant
  • AI & Machine Learning: Today at Google I/O, we're expanding the tools that help enterprises build more sophisticated and secure AI-driven applications and agents
  • www.techradar.com: Google Gemini 2.5 Flash promises to be your favorite AI chatbot, but how does it compare to ChatGPT 4o?
  • www.laptopmag.com: From $250 AI subscriptions to futuristic glasses and search that talks back, here’s what people are saying about Tuesday's Google I/O.
  • www.tomsguide.com: Google’s Gemini AI can now access Gmail, Docs, Drive, and more to deliver personalized help — but it raises new privacy concerns.
  • Data Phoenix: Google updated its model lineup and introduced a 'Deep Think' reasoning mode for Gemini 2.5 Pro
  • Maginative: Google’s revamped Canvas, powered by the Gemini 2.5 Pro model, lets you turn ideas into apps, quizzes, podcasts, and visuals in seconds—no code required.
  • Tech News | Euronews RSS: The tech giant is introducing a new "AI mode" that will embed chatbot capabilities into its search engine to keep up with rivals like OpenAI's ChatGPT.
  • learn.aisingapore.org: Advancing Gemini’s security safeguards – Google DeepMind
  • Data Phoenix: Google has launched major Gemini updates, including free visual assistance via Gemini Live, new subscription tiers starting at $19.99/month, advanced creative tools like Veo 3 for video generation with native audio, and an upcoming autonomous Agent Mode for complex task management.
  • www.zdnet.com: Everything from Google I/O 2025 you might've missed: Gemini, smart glasses, and more
  • thetechbasic.com: Google now adds ads to AI Mode and AI Overviews in search

@the-decoder.com //
Google is integrating its Gemini AI model deeper into its search engine with the introduction of 'AI Mode'. This new feature, currently in a limited testing phase in the US, aims to transform the search experience into a conversational one. Instead of the traditional list of links, AI Mode delivers answers generated directly from Google’s index, functioning much like a Gemini-powered chatbot. The search giant is also dropping the Labs waitlist, allowing any U.S. user who opts in to try the new search function.

The AI Mode includes visual place and product cards, enhanced multimedia features, and a left-side panel for managing past searches. This provides more organized results for destinations, products, and services. Users can ask contextual follow-up questions, and the AI Mode will populate a sidebar with cards referring to the sources it's using to formulate its answers. It can also access Google's Shopping Graph and localized data from Maps.

This move is seen as Google's direct response to AI-native upstarts that are recasting the search bar as a natural-language front end to the internet. Google CEO Sundar Pichai hopes to reach an agreement with Apple to offer Gemini as an option within Apple Intelligence by the middle of this year. The rise of AI in search also raises concerns for marketers: organic SEO tactics built on blue links will erode, and content will need to be prepared for zero‑click, AI‑generated summaries.

References:
  • shellypalmer.com: Google’s AI Mode: The Chatbot Comes to Search
  • Android Faithful: A Simple Google Search Is Now a Thing of the Past
  • The Tech Portal: Google is now reportedly preparing to expand access to its Gemini AI chatbot, including Gemini for children under 13, in its search engine.
  • www.computerworld.com: Google is making changes to its venerable search interface so users can more naturally interact with its AI features.
  • www.socialmediatoday.com: Google's giving more people access to its new "AI Mode" in Search.

@developers.googleblog.com //
Google is aggressively advancing AI agent interoperability with its new Agent2Agent (A2A) protocol and development kit. Unveiled at Google Cloud Next '25, the A2A protocol aims to standardize how AI agents communicate, collaborate, and discover each other across different platforms and tasks. This initiative is designed to streamline the exchange of tasks, streaming updates, and sharing of artifacts, fostering a more connected and efficient AI ecosystem. The A2A protocol complements existing efforts by providing a common language for agents, enabling them to seamlessly integrate and normalize various frameworks like LangChain, AutoGen, and Pydantic.

The Agent2Agent protocol introduces the concept of an "Agent Card" (agent.json), which describes an agent's capabilities and how to reach it. Agents communicate through structured messages, indicating task states such as working, input-required, or completed. By establishing this open standard, Google, along with partners like SAP, seeks to enable agents from different vendors to interact, share context, and collaborate effectively. This move represents a significant step beyond simple API integrations, laying the groundwork for interoperability and automation across traditionally disconnected systems.
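To make the Agent Card and task-state ideas above concrete, here is a minimal Python sketch. It is illustrative only: the field names and helper function are assumptions for this example, not the official A2A schema, and only the task states named above (working, input-required, completed) are modeled.

```python
import json

# Illustrative "Agent Card" (agent.json): a small document that advertises
# an agent's identity, endpoint, and capabilities so other agents can
# discover and reach it. Field names here are assumed, not official.
agent_card = {
    "name": "travel-booking-agent",              # hypothetical agent
    "description": "Books flights and hotels on request.",
    "url": "https://agents.example.com/travel",  # hypothetical endpoint
    "capabilities": {"streaming": True},         # e.g. supports streaming updates
}

# Task lifecycle states mentioned in the protocol description.
TASK_STATES = {"working", "input-required", "completed"}

def make_status_message(task_id: str, state: str) -> str:
    """Build a structured task-status message as JSON (illustrative shape)."""
    if state not in TASK_STATES:
        raise ValueError(f"unknown task state: {state}")
    return json.dumps({"taskId": task_id, "state": state})

# Example: an agent signals that it needs more input from the caller
# before it can continue (e.g. travel dates for a booking).
msg = make_status_message("task-42", "input-required")
print(msg)
```

The point of the sketch is the division of labor: the card handles discovery ("who is this agent and what can it do?"), while the structured messages handle the running conversation about a task's state.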

The development of A2A aligns with Google's broader strategy to solidify its position in the competitive AI landscape, challenging rivals like Microsoft and Amazon. Google is not only introducing new AI chips, such as the Ironwood TPUs, but also updating its Vertex AI platform with Gemini 2.5 models and releasing an agent development kit. This comprehensive approach aims to empower businesses to turn AI potential into real-world impact by facilitating open agent collaboration, model choice, and multimodal intelligence. The collaboration with SAP to enable AI agents to securely interact and collaborate across platforms through A2A exemplifies this commitment to enterprise-ready AI that is open, flexible, and deeply grounded in business context.

References:
  • Search Engine Land: Google AI Mode lets you ask questions with images
  • Search Engine Journal: Google AI mode now understands images, allowing you to upload photos and ask questions about them. AI Mode is rolling out to more people.
  • The Verge: Google is adding multimodal capabilities to its search-centric AI Mode chatbot that enable it to “see” and answer questions about images, as it expands access to AI Mode to “millions more” users.
  • Glenn Gabe: AI Mode expands with multimodal functionality and it's rolling out to millions more users -> Google AI Mode lets you ask questions with images. “With AI Mode’s new multimodal understanding, you can snap a photo or upload an image, ask a question about it and get a rich, comprehensive response with links to dive deeper,” wrote Robby Stein, VP of Product, Google Search.
  • PCMag Middle East ai: Google is also adding AI Mode to the Lens feature of its Google app for Android and iOS, and is opening up AI Mode, its web-search chatbot, to millions more Labs users in the US.
  • www.tomsguide.com: Google's Search just got a whole lot more intuitive with the integration of Google Lens in AI Mode.
  • www.zdnet.com: Google Search just got an AI upgrade that you might actually find useful - and it's free
  • www.searchenginejournal.com: Google Maps content moderation now uses Gemini to detect fake reviews and suspicious profile edits.
  • SAP News Center: How SAP and Google Cloud Are Advancing Enterprise AI Through Open Agent Collaboration, Model Choice, and Multimodal Intelligence
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • Thomas Roccia :verified:: Google just dropped A2A, a new protocol for agents to talk to each other.
  • AI & Machine Learning: Delivering an application-centric, AI-powered cloud for developers and operators
  • AI News | VentureBeat: Google’s Agent2Agent interoperability protocol aims to standardize agentic communication
  • www.marktechpost.com: Google Introduces Agent2Agent (A2A): A New Open Protocol that Allows AI Agents Securely Collaborate Across Ecosystems Regardless of Framework or Vendor
  • Maginative: Google just Launched Agent2Agent, an Open Protocol for AI agents to Work Directly with Each Other
  • Analytics Vidhya: In today’s fast moving world, many businesses use AI agents to handle their tasks autonomously. However, these agents often operate in isolation, unable to communicate across different systems or vendors.
  • www.analyticsvidhya.com: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • developers.googleblog.com: Google's A2A Protocol for Seamless AI Agent Communication
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • bdtechtalks.com: Google's new A2A framework lets different AI agents chat and work together seamlessly, breaking down silos and improving productivity across platforms.
  • TheSequence: Google just pushed the boundaries of multi agent communications