News from the AI & ML world

DeeperML - #aimode

@www.searchenginejournal.com //
Google is aggressively expanding its artificial intelligence capabilities across its platforms, integrating its Gemini AI model into Search and into Android XR smart glasses. The tech giant unveiled the rollout of "AI Mode" in U.S. Search, making it accessible to all users after initial testing in Labs. This move signifies a major shift in how people interact with the search engine, offering a conversational experience akin to consulting an expert.

Google is feeding its latest AI model, Gemini 2.5, into its search algorithms, enhancing features like AI Overviews, which are now available in over 200 countries and 40 languages and reach 1.5 billion monthly users. In addition, Gemini 2.5 Pro introduces enhanced reasoning through Deep Think, giving AI Mode deeper and more thorough responses via Deep Search. Google is also testing new AI-powered features, including Search Live, which lets users conduct searches through live video feeds.

Google is also re-entering the smart glasses market with Android XR-powered spectacles featuring a hands-free camera and a voice-powered AI assistant. Built on Project Astra, the glasses let users talk back and forth with Search about what their cameras see in real time. These advancements aim to create more personalized and efficient user experiences, marking a new phase in the AI platform shift and solidifying AI's position in search.

Recommended read:
References :
  • Search Engine Journal: Google Expands AI Features in Search: What You Need to Know
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.theguardian.com: Google unveils ‘AI Mode’ in the next phase of its journey to change search

Eric Hal@techradar.com //
Google I/O 2025 saw the unveiling of 'AI Mode' for Google Search, signaling a significant shift in how the company approaches information retrieval and user experience. The new AI Mode, powered by the Gemini 2.5 model, is designed to offer more detailed results, personal context, and intelligent assistance. This upgrade aims to compete directly with the capabilities of AI chatbots like ChatGPT, providing users with a more conversational and comprehensive search experience. The rollout has commenced in the U.S. for both the browser version of Search and the Google app, although availability in other countries remains unconfirmed.

AI Mode brings several key features to the forefront, including Deep Search, Live Visual Search, and AI-powered agents. Deep Search allows users to delve into topics with unprecedented depth, running hundreds of searches simultaneously to generate expert-level, fully-cited reports in minutes. With Search Live, users can leverage their phone's camera to interact with Search in real-time, receiving context-aware responses from Gemini. Google is also bringing agentic capabilities to Search, allowing users to perform tasks like booking tickets and making reservations directly through the AI interface.

Google’s revamp of its AI search service appears to be a response to the growing popularity of AI-driven search experiences offered by companies like OpenAI and Perplexity. According to Gartner analyst Chirag Dekate, evidence suggests a greater reliance on search and AI-infused search experiences. As AI Mode rolls out, Google is encouraging website owners to optimize their content for AI-powered search by creating unique, non-commodity content and ensuring that their sites meet technical requirements and provide a good user experience.

Recommended read:
References :
  • Search Engine Journal: Google's new AI Mode in Search, integrating Gemini 2.5, aims to enhance user interaction by providing more conversational and comprehensive responses.
  • www.techradar.com: Google just got a new 'Deep Think' mode – and 6 other upgrades
  • WhatIs: Google expands Gemini model, Search as AI rivals encroach
  • www.tomsguide.com: Google Search gets an AI tab — here’s what it means for your searches
  • AI News | VentureBeat: Inside Google’s AI leap: Gemini 2.5 thinks deeper, speaks smarter and codes faster
  • Search Engine Journal: Google Gemini upgrades include Chrome integration, Live visual tools, and enhanced 2.5 models. Learn how these AI advances could reshape your marketing strategy.
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent models are getting even better
  • learn.aisingapore.org: Updates to Gemini 2.5 from Google DeepMind
  • THE DECODER: Google upgrades Gemini 2.5 Pro with a new Deep Think mode for advanced reasoning abilities
  • www.techradar.com: I've been using Google's new AI mode for Search – here's how to master it
  • www.theguardian.com: Search engine revamp and Gemini 2.5 introduced at conference in latest sign the tech giant is all in on AI
  • www.analyticsvidhya.com: Google I/O 2025: AI Mode on Google Search, Veo 3, Imagen 4, Flow, Gemini Live, and More
  • techvro.com: Google AI Mode Promises Deep Search and Goes Beyond AI Overviews
  • THE DECODER: Google pushes AI-powered search with agents, multimodality, and virtual shopping
  • felloai.com: Google I/O 2025 Recap With All The Jaw-Dropping AI Announcements
  • LearnAI: Gemini as a universal AI assistant
  • AI & Machine Learning: Today at Google I/O, we're expanding the tools that help enterprises build more sophisticated and secure AI-driven applications and agents
  • www.techradar.com: Google Gemini 2.5 Flash promises to be your favorite AI chatbot, but how does it compare to ChatGPT 4o?
  • www.laptopmag.com: From $250 AI subscriptions to futuristic glasses and search that talks back, here’s what people are saying about Tuesday's Google I/O.
  • www.tomsguide.com: Google’s Gemini AI can now access Gmail, Docs, Drive, and more to deliver personalized help — but it raises new privacy concerns.
  • Data Phoenix: Google updated its model lineup and introduced a 'Deep Think' reasoning mode for Gemini 2.5 Pro
  • Maginative: Google’s revamped Canvas, powered by the Gemini 2.5 Pro model, lets you turn ideas into apps, quizzes, podcasts, and visuals in seconds—no code required.
  • Tech News | Euronews RSS: The tech giant is introducing a new "AI mode" that will embed chatbot capabilities into its search engine to keep up with rivals like OpenAI's ChatGPT.
  • learn.aisingapore.org: Advancing Gemini’s security safeguards – Google DeepMind
  • Data Phoenix: Google has launched major Gemini updates, including free visual assistance via Gemini Live, new subscription tiers starting at $19.99/month, advanced creative tools like Veo 3 for video generation with native audio, and an upcoming autonomous Agent Mode for complex task management.
  • www.zdnet.com: Everything from Google I/O 2025 you might've missed: Gemini, smart glasses, and more
  • thetechbasic.com: Google now adds ads to AI Mode and AI Overviews in search

Eric Hal@techradar.com //
Google's powerful AI model, Gemini 2.5 Pro, has achieved a significant milestone by completing the classic Game Boy game Pokémon Blue. This accomplishment, spearheaded by software engineer Joel Z, demonstrates the AI's enhanced reasoning and problem-solving abilities. Google CEO Sundar Pichai celebrated the achievement online, highlighting it as a substantial win for AI development. The project showcases how AI can learn to handle complex tasks, requiring long-term planning, goal tracking, and visual navigation, which are vital components in the pursuit of general artificial intelligence.

Joel Z facilitated Gemini's gameplay over several months, livestreaming the AI's progress. While Joel is not affiliated with Google, his efforts were supported by the company's leadership. To enable Gemini to navigate the game, Joel used an emulator, mGBA, to feed screenshots and game data, like character position and map layout. He also incorporated smaller AI helpers, like a "Pathfinder" and a "Boulder Puzzle Solver," to tackle particularly challenging segments. These sub-agents, also versions of Gemini, were deployed strategically by the AI to manage complex situations, showcasing its ability to differentiate between routine and complicated tasks.
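The orchestration pattern described above, a main agent that recognizes when it is stuck and hands a hard sub-task to a specialized sub-agent, can be sketched roughly as follows. All names, the delegation convention, and the routing rule here are illustrative assumptions, not Joel Z's actual implementation:

```python
# Illustrative sketch of a main-agent/sub-agent dispatch loop.
# The DELEGATE:<name> convention and agent names are hypothetical.

def solve_step(game_state: dict, main_agent, sub_agents: dict) -> str:
    """Ask the main agent for the next action, delegating hard cases.

    game_state: observations fed to the model (position, map, etc.)
    main_agent: callable mapping a prompt to a decision string.
    sub_agents: named specialists, e.g. a pathfinder or puzzle solver.
    """
    decision = main_agent(
        f"Current state: {game_state}. "
        "Answer with an action, or DELEGATE:<agent> if stuck."
    )
    if decision.startswith("DELEGATE:"):
        name = decision.split(":", 1)[1].strip().lower()
        specialist = sub_agents.get(name)
        if specialist is not None:
            # The sub-agent (another Gemini instance) handles the hard part.
            return specialist(f"Solve this sub-problem: {game_state}")
    return decision
```

The key design point the article highlights is that the main agent itself decides when a situation is routine and when it warrants a specialist, rather than the harness hard-coding that choice.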

Google is also experimenting with transforming its search engine into a Gemini-powered chatbot via an AI Mode. This new feature, currently being tested with a small percentage of U.S. users, delivers conversational answers generated from Google's vast index, effectively turning Search into an answer engine. Instead of a list of links, AI Mode provides rich, visual summaries and remembers prior queries, directly competing with the search features of Perplexity and ChatGPT. While this shift could potentially impact organic SEO tactics, it signifies Google's commitment to integrating AI more deeply into its core products, offering users a more intuitive and informative search experience.

Recommended read:
References :
  • the-decoder.com: Google's reasoning LLM Gemini 2.5 Pro beats Pokémon Blue with a little help
  • thetechbasic.com: Google’s powerful AI model, Gemini 2.5 Pro, has finished playing the old Game Boy game Pokémon Blue.
  • www.techradar.com: Google's Gemini AI Is now a Pokémon Master
  • The Tech Basic: Google Gemini AI Beats Pokémon Blue With Help and Updates

Krishna Chytanya@AI & Machine Learning //
Google is significantly enhancing its AI capabilities across its Gemini platform and various products, focusing on multilingual support and AI-assisted features. To address the needs of global users, Google is leveraging the Model Context Protocol (MCP) to enable the creation of chatbots capable of supporting multiple languages. This system uses Gemini, Gemma, and Translation LLM to provide quick and accurate answers in different languages. MCP acts as a standardized way for AI systems to interact with external data sources and tools, allowing AI agents to access information and execute actions outside their own models.
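The core MCP idea described above, tools advertising themselves through a uniform discovery step and being invoked through a uniform call envelope, can be illustrated with a minimal sketch. This is a simplified stand-in, not the actual MCP wire format, and the translation tool here is a stub rather than a real Translation LLM call:

```python
# Simplified illustration of the MCP pattern: discover tools, then
# invoke any of them through one uniform call shape.

TOOLS = {
    "translate": {
        "description": "Translate text into a target language",
        "params": {"text": "string", "target_lang": "string"},
    },
}

def list_tools() -> dict:
    """Discovery step: the agent asks what tools are available."""
    return TOOLS

def call_tool(name: str, arguments: dict) -> dict:
    """Invocation step: same envelope regardless of which tool runs."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    if name == "translate":
        # Stub: a real MCP server would call a Translation LLM here.
        return {"result": f"[{arguments['target_lang']}] {arguments['text']}"}
    return {"error": "not implemented"}
```

Because the agent only ever sees the discovery and invocation envelopes, the same chatbot can gain a new language or data source by registering a new tool, without changes to the model itself.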

Google Gemini is also receiving AI-powered image editing features within its chat interface. Users can now tweak backgrounds, swap out objects, and make other adjustments to both AI-generated and personal photos, with support for over 45 languages in most countries. The editing tools are being rolled out gradually for users on web and mobile devices. Additionally, Google is expanding access to its AI tools by releasing a standalone app for NotebookLM, one of its best AI tools. This will make it easier for users to delve into notes on complex topics directly from their smartphones.

In a move toward monetization within the AI space, Google is testing AdSense ads inside AI chatbot conversations. The company has expanded its AdSense for Search platform to support chatbots from startups and its own Gemini tools. This reflects a shift in how people find information online as AI services increasingly provide direct answers, potentially reducing the need to visit traditional websites. Furthermore, Google is extending Gemini's reach to younger users by rolling out a version for children under 13 with parent-managed Google accounts through Family Link, ensuring safety and privacy measures are in place.

Recommended read:
References :
  • the-decoder.com: Google is rolling out new AI-powered image editing features in its Gemini app, letting users tweak backgrounds, swap out objects.
  • www.eweek.com: Google is now placing ads in some third-party AI chatbot conversations, signaling a shift in how it monetizes search amid rising competition from ChatGPT.
  • www.tomsguide.com: Another new feature for users of Gemini to get their teeth into
  • PCMag Middle East ai: Currently limited to the web, NotebookLM gives you an AI-powered workspace to pull together multiple documents in one place.
  • www.techradar.com: The promised NotebookLM apps are showing up on the Play Store and App Store, with pre-orders now open.
  • www.tomsguide.com: One of Google's best AI tools is getting a standalone app — what you need to know
  • the-decoder.com: Google is now placing AdSense ads inside AI chatbot conversations
  • TestingCatalog: Google prepares new Gemini AI subscription tiers with possible Gemini Ultra plan
  • AI & Machine Learning: Create chatbots that speak different languages with Gemini, Gemma, Translation LLM, and Model Context Protocol
  • Mark Gurman: NEW: Google CEO Sundar Pichai said in court he is hopeful to have an agreement with Apple to have Gemini as an option as part of Apple Intelligence by middle of this year. This is referring to the Siri/Writing Tools integration ChatGPT has.
  • www.techradar.com: You can put Google Gemini right on your smartphone home screen – here’s how
  • www.zdnet.com: Google's best AI research tool is getting its own app - preorder it now
  • shellypalmer.com: Shelly Palmer discusses Google's AI Mode, which integrates a chatbot into search.
  • Shelly Palmer: Details Google's AI Mode integration in Search, effectively turning it into a Gemini chatbot.
  • THE DECODER: Google upgrades Gemini 2.5 Pro for coding and app development
  • the-decoder.com: The latest pre-release version of Google's Gemini 2.5 Pro language model brings major improvements for front-end development and complex programming tasks.
  • www.zdnet.com: Google's Gemini 2.5 Pro update makes the AI model even better at coding
  • The Official Google Blog: Build rich, interactive web apps with an updated Gemini 2.5 Pro
  • BetaNews: After what feels like an eternity, Google has finally brought a native Gemini app to the iPad.
  • chromeunboxed.com: Google’s Gemini has proven to be quite versatile and adept at tasks ranging from answering complex queries to assisting with coding. Its capabilities are further amplified through the use of Extensions – recently rebranded as Apps – which allow Gemini to interact directly with other applications and services to accomplish real-world tasks.
  • iDownloadBlog.com: The Google Gemini app has been given an iPad-optimized user interface, while also gaining Home Screen widget support.
  • AI & Machine Learning: Have you ever had something on the tip of your tongue, but you weren’t exactly sure how to describe what’s in your mind? For developers, this is where "vibe coding" comes in.
  • MarkTechPost: Google Launches Gemini 2.5 Pro I/O: Outperforms GPT-4 in Coding, Supports Native Video Understanding and Leads WebDev Arena
  • TestingCatalog: Google debuts Gemini 2.5 Pro I/O Edition with major upgrades for web development
  • www.tomsguide.com: Google just unveiled a major update to Gemini AI ahead of I/O — here's what it can do
  • The Tech Portal: Google rolls out dedicated Gemini app for iPad with enhanced features
  • www.windowscentral.com: DeepMind CEO calls Google's updated Gemini 2.5 Pro AI "the best coding model" with a taste for aesthetic web development

@developers.googleblog.com //
Google is aggressively advancing AI agent interoperability with its new Agent2Agent (A2A) protocol and development kit. Unveiled at Google Cloud Next '25, the A2A protocol aims to standardize how AI agents communicate, collaborate, and discover each other across different platforms and tasks. This initiative is designed to streamline the exchange of tasks, streaming updates, and sharing of artifacts, fostering a more connected and efficient AI ecosystem. The A2A protocol complements existing efforts by providing a common language for agents, enabling them to seamlessly integrate and normalize various frameworks like LangChain, AutoGen, and Pydantic.

The Agent2Agent protocol introduces the concept of an "Agent Card" (agent.json), which describes an agent's capabilities and how to reach it. Agents communicate through structured messages, indicating task states such as working, input-required, or completed. By establishing this open standard, Google, along with partners like SAP, seeks to enable agents from different vendors to interact, share context, and collaborate effectively. This move represents a significant step beyond simple API integrations, laying the groundwork for interoperability and automation across traditionally disconnected systems.

The development of A2A aligns with Google's broader strategy to solidify its position in the competitive AI landscape, challenging rivals like Microsoft and Amazon. Google is not only introducing new AI chips, such as the Ironwood TPUs, but also updating its Vertex AI platform with Gemini 2.5 models and releasing an agent development kit. This comprehensive approach aims to empower businesses to turn AI potential into real-world impact by facilitating open agent collaboration, model choice, and multimodal intelligence. The collaboration with SAP to enable AI agents to securely interact and collaborate across platforms through A2A exemplifies this commitment to enterprise-ready AI that is open, flexible, and deeply grounded in business context.

Recommended read:
References :
  • Search Engine Land: Google AI Mode lets you ask questions with images
  • Search Engine Journal: Google AI mode now understands images, allowing you to upload photos and ask questions about them. AI Mode is rolling out to more people.
  • The Verge: Google is adding multimodal capabilities to its search-centric AI Mode chatbot that enable it to "see" and answer questions about images, as it expands access to AI Mode to "millions more" users.
  • Glenn Gabe: AI Mode expands with multimodal functionality and it's rolling out to millions more users -> Google AI Mode lets you ask questions with images. "With AI Mode's new multimodal understanding, you can snap a photo or upload an image, ask a question about it and get a rich, comprehensive response with links to dive deeper," Robby Stein, VP of Product, Google Search wrote.
  • PCMag Middle East ai: Google is also adding AI Mode to the Lens feature of its Google app for Android and iOS, and is opening up the web-search chatbot to millions more Labs users in the US.
  • www.tomsguide.com: Google's Search just got a whole lot more intuitive with the integration of Google Lens in AI Mode.
  • www.zdnet.com: Google Search just got an AI upgrade that you might actually find useful - and it's free
  • www.searchenginejournal.com: Google Maps content moderation now uses Gemini to detect fake reviews and suspicious profile edits.
  • SAP News Center: How SAP and Google Cloud Are Advancing Enterprise AI Through Open Agent Collaboration, Model Choice, and Multimodal Intelligence
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • Thomas Roccia :verified:: Google just dropped A2A, a new protocol for agents to talk to each other.
  • AI & Machine Learning: Delivering an application-centric, AI-powered cloud for developers and operators
  • AI News | VentureBeat: Google’s Agent2Agent interoperability protocol aims to standardize agentic communication
  • www.marktechpost.com: Google Introduces Agent2Agent (A2A): A New Open Protocol that Allows AI Agents Securely Collaborate Across Ecosystems Regardless of Framework or Vendor
  • Maginative: Google just Launched Agent2Agent, an Open Protocol for AI agents to Work Directly with Each Other
  • Analytics Vidhya: In today’s fast moving world, many businesses use AI agents to handle their tasks autonomously. However, these agents often operate in isolation, unable to communicate across different systems or vendors.
  • www.analyticsvidhya.com: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • developers.googleblog.com: Google's A2A Protocol for Seamless AI Agent Communication
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • bdtechtalks.com: Google's new A2A framework lets different AI agents chat and work together seamlessly, breaking down silos and improving productivity across platforms.
  • TheSequence: Google just pushed the boundaries of multi agent communications