News from the AI & ML world

DeeperML - #googlecloudai

Derek Egan@AI & Machine Learning //
Google Cloud is enhancing its MCP Toolbox for Databases to provide simpler and more secure access to enterprise data for AI agents. Announced at Google Cloud Next 2025, this update includes support for Model Context Protocol (MCP), an emerging open standard developed by Anthropic, which aims to standardize how AI systems connect to various data sources. The MCP Toolbox for Databases, formerly known as the Gen AI Toolbox for Databases, acts as an open-source MCP server, allowing developers to connect GenAI agents to enterprise databases like AlloyDB for PostgreSQL, Spanner, and Cloud SQL securely and efficiently.
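MCP itself is built on JSON-RPC 2.0; the `tools/list` and `tools/call` methods below follow the MCP specification, while the tool name and arguments are hypothetical stand-ins for whatever tools a given Toolbox configuration exposes. A minimal sketch of the messages an agent-side client would send:

```python
import json

def mcp_request(method: str, params: dict, request_id: int) -> str:
    """Build a JSON-RPC 2.0 message of the kind an MCP client (the agent)
    exchanges with an MCP server such as the Toolbox."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Ask the server which tools it exposes...
list_msg = mcp_request("tools/list", {}, 1)

# ...then invoke one of them. The tool name and arguments here are
# hypothetical -- real names come from the server's configuration.
call_msg = mcp_request(
    "tools/call",
    {"name": "search-hotels-by-name", "arguments": {"name": "Basel"}},
    2,
)
```

Because every database behind the server is reached through these same two methods, an agent needs no database-specific client code.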

The enhanced MCP Toolbox for Databases reduces boilerplate code, improves security through OAuth2 and OIDC, and offers end-to-end observability via OpenTelemetry integration. These features simplify the development process, allowing developers to build agents with the Agent Development Kit (ADK). The ADK, an open-source framework, supports the full lifecycle of intelligent agent development, from prototyping and evaluation to production deployment. ADK provides deterministic guardrails, bidirectional audio and video streaming capabilities, and a direct path to production deployment via Vertex AI Agent Engine.

This update represents a significant step toward secure, standardized ways for AI agents to communicate with one another and access enterprise data. The Toolbox is fully open source, and community contributions have already added support for third-party databases such as Neo4j and Dgraph. By supporting MCP, the Toolbox enables developers to query a wide range of databases through a single, standardized protocol, enhancing interoperability and streamlining the development of agentic applications. New customers can also use Google Cloud's $300 in free credit to begin building and testing their AI solutions.

Recommended read:
References:
  • cloud.google.com: Announcement of Gen AI Toolbox for Databases.
  • AI & Machine Learning: Google Cloud Blog post about MCP Toolbox for Databases
  • github.com: Google Gen AI Toolbox GitHub repository.
  • TheSequence: The Sequence Engineering #528: Inside Google's New Agent Development Kit
  • Analytics Vidhya: Looking to build intelligent agents with real-world capabilities? Use Google ADK for building agents that can reason, delegate, and respond dynamically.
  • www.github.com: GitHub repository for the Agent Development Kit (ADK) in Python.

@www.bigdatawire.com //
References: BigDATAwire, Maginative
Google Cloud Next 2025 featured a series of announcements focused on enhancing data analytics capabilities within Google Cloud, particularly through advancements to BigQuery. These enhancements center around a vision for AI-native data analytics, aiming to make data work more conversational, contextual, and intelligent. Key innovations include the introduction of unified governance in BigQuery, AI-powered agents for data engineering and data science tasks, and the integration of Gemini, Google's flagship foundation model, to drive these intelligent capabilities. These developments are designed to simplify data management, improve data quality, and accelerate the generation of AI-driven insights for businesses.

The new intelligent unified governance in BigQuery is designed to help organizations discover, understand, and leverage their data assets more effectively. This includes a universal, AI-powered data catalog that natively integrates Dataplex, BigQuery sharing, security, and metastore capabilities. The unified governance brings together business, technical, and runtime metadata, providing end-to-end data-to-AI lineage, data profiling, insights, and secure sharing. A universal semantic search allows users to find the right data by asking questions in natural language. These advancements promise to transform governance from a burden into a powerful tool for data activation, simplifying data and AI management.

A significant aspect of these enhancements is the introduction of specialized AI agents within BigQuery and Looker. These agents are tailored to different user roles, such as data engineers and business analysts, assisting with tasks like building data pipelines, model development, and querying data in plain English. Powered by Gemini, these agents provide suggestions based on information collected through the new BigQuery Knowledge Engine, which understands schema relationships, business terms, and query history. These agents are designed to make more data available to more people without them doing more work, ultimately transforming how data professionals interact with data.

Recommended read:
References:
  • BigDATAwire: Google Cloud Cranks Up the Analytics at Next 2025
  • Maginative: Google’s Vision for AI-Native Data Analytics Comes Into Focus

Ken Yeung@Ken Yeung //
References: TheSequence, TestingCatalog
Google has launched the Agent2Agent (A2A) protocol, a groundbreaking open interoperability framework designed to facilitate seamless collaboration between AI agents across different platforms and vendors. This initiative addresses the challenges posed by siloed AI systems by establishing a standardized method for agents to communicate, coordinate actions, and securely exchange information. A2A aims to streamline complex workflows, improve productivity, and foster a dynamic ecosystem where AI agents operate as composable primitives in enterprise-scale operations.

The A2A protocol is built upon key principles, including capability discovery, task management, collaboration, and user experience negotiation. Agents can publish their capabilities using JSON-formatted "Agent Cards," which allows client agents to identify the most appropriate remote agent for a given task. The protocol supports the complete lifecycle management of tasks, enabling real-time synchronization between agents. By leveraging established standards like HTTP and JSON-RPC, A2A ensures compatibility with existing infrastructure while prioritizing security with built-in authentication and authorization mechanisms. The protocol is also modality-agnostic, accommodating text, audio, video, and embedded UI components.
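The "Agent Card" idea above can be sketched as a small JSON document. The field names here follow Google's published A2A examples but should be treated as illustrative, and the endpoint URL is hypothetical:

```python
import json

# An illustrative A2A Agent Card. A client agent would fetch a card like
# this (conventionally from /.well-known/agent.json on the remote agent's
# host) and inspect its skills to pick the right agent for a task.
agent_card = {
    "name": "expense-report-agent",
    "description": "Fills out and files expense reports.",
    "url": "https://agents.example.com/expense",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "file-expense",
            "name": "File expense report",
            "description": "Collects receipt data and submits a report.",
        }
    ],
}

serialized = json.dumps(agent_card, indent=2)
```

Publishing capabilities declaratively like this is what lets a client agent discover and select remote agents without hard-coded integrations.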

Google envisions A2A as a foundational layer for future AI systems, promoting collaboration and interoperability across various environments. The company has released A2A as open source, inviting the broader community to contribute to its refinement and expansion. This approach aligns with Google's strategy of fostering innovation in AI, ensuring trustworthiness, and promoting scalability. Industry experts believe A2A represents a significant step toward realizing the full potential of multi-agent ecosystems, particularly as enterprises increasingly adopt AI agents for diverse tasks, from customer service to supply chain management.

Recommended read:
References:
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • TestingCatalog: Google launches Agent2Agent protocol to connect AI agents across platforms
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software

Ken Yeung@Ken Yeung //
Google has launched Agent2Agent (A2A), an open interoperability protocol designed to facilitate seamless collaboration between AI agents across diverse frameworks and vendors. This initiative, spearheaded by Google Cloud, addresses the challenge of siloed AI systems by standardizing communication, ultimately automating complex workflows and boosting enterprise productivity. The A2A protocol has garnered support from over 50 technology partners, including industry giants like Salesforce, SAP, ServiceNow, and MongoDB, signaling a broad industry interest in fostering a more cohesive AI ecosystem. A2A aims to provide a universal framework, allowing AI agents to securely exchange information, coordinate actions, and integrate across various enterprise platforms, regardless of their underlying framework or vendor.

The A2A protocol functions on core principles of capability discovery, task management, collaboration, and user experience negotiation. Agents can advertise their capabilities using JSON-formatted "Agent Cards," enabling client agents to identify the most suitable remote agent for a specific task. It facilitates lifecycle management for tasks, ensuring real-time synchronization. Built on established standards like HTTP and JSON, A2A ensures compatibility with existing systems while prioritizing security. Google has released A2A as open source, encouraging community contributions to enhance its functionality.
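The task lifecycle management mentioned above can be modeled as a small state machine. The state names mirror those in Google's A2A announcement (submitted, working, input-required, completed, failed, canceled), though the transition rules below are an illustrative assumption rather than the normative spec:

```python
# Illustrative A2A-style task lifecycle tracking. Both agents keep the
# task's state synchronized as it moves through these transitions.
ALLOWED_TRANSITIONS = {
    "submitted": {"working", "canceled"},
    "working": {"completed", "failed", "canceled", "input-required"},
    "input-required": {"working", "canceled"},
}

class Task:
    def __init__(self, task_id: str):
        self.task_id = task_id
        self.state = "submitted"  # every task starts here

    def transition(self, new_state: str) -> None:
        if new_state not in ALLOWED_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

task = Task("expense-42")
task.transition("working")
task.transition("completed")
```

Terminal states (completed, failed, canceled) have no outgoing transitions, so a finished task cannot be silently resurrected by either side.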

Enterprises are beginning to deploy multi-agent systems in which multiple AI agents work together, even when they are built on different frameworks or by different providers. By enabling interoperability between specialized agents, A2A addresses a critical barrier to scaling agentic AI solutions, unifying workflows and reducing integration costs. Vertex AI is expanding its services to support this multi-agent ecosystem with the Agent Development Kit, the Agent2Agent protocol, Agent Garden, and Agent Engine. The protocol aims to foster innovation in AI while ensuring trustworthiness and scalability as enterprises adopt AI agents for various tasks.

Recommended read:
References:
  • AI & Machine Learning: Vertex AI offers new ways to build and manage multi-agent systems
  • techstrong.ai: Google Unfurls Raft of AI Agent Technologies at Google Cloud Next ’25
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • Analytics Vidhya: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • bdtechtalks.com: How Google’s Agent2Agent can boost AI productivity through inter-agent communication
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • medium.com: Security Analysis: Potential AI Agent Hijacking via MCP and A2A Protocol Insights

@www.analyticsvidhya.com //
Google Cloud Next '25 saw a major push into agentic AI with the unveiling of several key technologies and initiatives aimed at fostering the development and interoperability of AI agents. Google announced the Agent Development Kit (ADK), an open-source framework designed to simplify the creation and management of AI agents. The ADK, written in Python, allows developers to build both simple and complex multi-agent systems. Complementing the ADK is Agent Garden, a collection of pre-built agent patterns and components to accelerate development. Additionally, Google introduced Agent Engine, a fully managed runtime in Vertex AI, enabling secure and reliable deployment of custom agents at a global scale.

Google is also addressing the challenge of AI agent interoperability with the introduction of the Agent2Agent (A2A) protocol. A2A is an open standard intended to provide a common language for AI agents to communicate, regardless of the frameworks or vendors used to build them. This protocol allows agents to collaborate and share information securely, streamlining workflows and reducing integration costs. The A2A initiative has garnered support from over 50 industry leaders, including Atlassian, Box, Cohere, Intuit, and Salesforce, signaling a collaborative effort to advance multi-agent systems.

These advancements are integrated within Vertex AI, Google's comprehensive platform for managing models, data, and agents. Enhancements to Vertex AI include supporting Model Context Protocol (MCP) to ensure secure data connections for agents. In addition to software advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, designed to optimize AI inferencing. Ironwood offers significantly increased compute capacity and high-bandwidth memory, further empowering AI applications within the Google Cloud ecosystem.

Recommended read:
References:
  • AI & Machine Learning: Vertex AI offers new ways to build and manage multi-agent systems
  • Thomas Roccia :verified:: Google just dropped A2A, a new protocol for agents to talk to each other.
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • techstrong.ai: Google Unfurls Raft of AI Agent Technologies at Google Cloud Next ’25
  • Analytics Vidhya: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • Data Analytics: Next 25 developer keynote: From prompt, to agent, to work, to fun
  • www.analyticsvidhya.com: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • PCMag Middle East ai: At Google Cloud Next, We're Off to See the AI Agents (And Huge Performance Gains)
  • Ken Yeung: Google’s Customer Engagement Suite Gets Human-Like AI Agents with Voice, Emotion, and Video Support
  • bdtechtalks.com: News article on how Google’s Agent2Agent can boost AI productivity.
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • www.infoq.com: News article on Google releasing the Agent2Agent protocol for agentic collaboration.

staff@insideAI News //
Google Cloud has unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood. This custom AI accelerator is purpose-built for inference, marking a shift in Google's AI chip development strategy. While previous TPUs handled both training and inference, Ironwood is designed to optimize the deployment of trained AI models for making predictions and generating responses. According to Google, Ironwood will allow for a new "age of inference" where AI agents proactively retrieve and generate data, delivering insights and answers rather than just raw data.

Ironwood boasts impressive technical specifications. When scaled to 9,216 chips per pod, it delivers 42.5 exaflops of computing power. Each chip has a peak compute of 4,614 teraflops, accompanied by 192GB of High Bandwidth Memory. The memory bandwidth reaches 7.2 terabytes per second per chip. Google highlights that Ironwood delivers twice the performance per watt of its predecessor and is nearly 30 times more power-efficient than Google's first Cloud TPU from 2018.
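The pod-level figure follows directly from the per-chip numbers, as a quick arithmetic check confirms:

```python
# Sanity-check the quoted Ironwood numbers: per-chip peak compute times
# chips per pod should reproduce the pod-level figure.
chips_per_pod = 9_216
per_chip_tflops = 4_614

pod_tflops = chips_per_pod * per_chip_tflops
pod_exaflops = pod_tflops / 1e6  # 1 exaflop = 1,000,000 teraflops

print(round(pod_exaflops, 1))  # → 42.5
```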

The focus on inference highlights a pivotal shift in the AI landscape. The industry has seen extensive development of large foundation models, and Ironwood is designed to manage the computational demands of these complex "thinking models," including large language models and Mixture of Experts (MoEs). Its architecture includes a low-latency, high-bandwidth Inter-Chip Interconnect (ICI) network to support coordinated communication at full TPU pod scale. The new TPU scales up to 9,216 liquid-cooled chips. This innovation is aimed at applications requiring real-time processing and predictions, and promises higher intelligence at lower costs.

Recommended read:
References:
  • insidehpc.com: Google Cloud today introduced its seventh-generation Tensor Processing Unit, "Ironwood," which the company said is its most performant and scalable custom AI accelerator and the first designed specifically for inference.
  • www.bigdatawire.com: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • www.nextplatform.com: With “Ironwood” TPU, Google Pushes The AI Accelerator To The Floor
  • insideAI News: Google today introduced its seventh-generation Tensor Processing Unit, “Ironwood,” which the company said is its most performant and scalable custom AI accelerator and the first designed specifically for inference.
  • venturebeat.com: Google's new Ironwood chip is 24x more powerful than the world's fastest supercomputer.
  • BigDATAwire: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • the-decoder.com: Google unveils new AI models, infrastructure, and agent protocol at Cloud Next
  • AI News | VentureBeat: Google’s new Agent Development Kit lets enterprises rapidly prototype and deploy AI agents without recoding
  • Compute: Introducing Ironwood TPUs and new innovations in AI Hypercomputer
  • The Next Platform: With “Ironwood” TPU, Google Pushes The AI Accelerator To The Floor
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • The Tech Basic: Details Google Cloud's New AI Chip.
  • venturebeat.com: Google unveils Ironwood TPUs, Gemini 2.5 "thinking models," and Agent2Agent protocol at Cloud Next '25, challenging Microsoft and Amazon with a comprehensive AI strategy that enables multiple AI systems to work together across platforms.
  • www.marktechpost.com: Google AI Introduces Ironwood: A Google TPU Purpose-Built for the Age of Inference
  • cloud.google.com: Introducing Ironwood TPUs and new innovations in AI Hypercomputer
  • Kyle Wiggers: Ironwood is Google’s newest AI accelerator chip

mpesce@Windows Copilot News //
Google is advancing its AI capabilities on multiple fronts, emphasizing both security and performance. The company is integrating Google Cloud Champion Innovators into the Google Developer Experts (GDE) program, creating a unified community of over 1,400 members. This consolidation aims to enhance collaboration, streamline resources, and amplify the impact of passionate experts, providing a stronger voice for developers within Google and the broader industry.

Google is also pushing forward with its Gemini AI model, with Gemini 2.0 set to roll out across Google's products. Researchers from Google and UC Berkeley have found that a simple test-time scaling approach based on sampling-based search can significantly boost the reasoning abilities of large language models (LLMs). The method draws many random samples from the model and uses self-verification to select among them, and it can outperform more complex and specialized training methods.
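The sampling-based search idea can be illustrated with a toy sketch: draw many candidate answers, filter them through a self-verification step, and take a majority vote over the survivors. The stub model and verifier below are placeholders for real LLM calls, not part of the paper's method:

```python
import random
from collections import Counter

def sample_model(question: str, rng: random.Random) -> int:
    """Stub model: usually returns the right answer, sometimes noise."""
    correct = 42
    return correct if rng.random() < 0.6 else rng.randint(0, 100)

def verify(question: str, answer: int) -> bool:
    """Stub self-verification. A real system would prompt the model to
    independently check or critique the candidate answer."""
    return answer == 42

def sampling_based_search(question: str, n_samples: int = 64, seed: int = 0) -> int:
    rng = random.Random(seed)
    candidates = [sample_model(question, rng) for _ in range(n_samples)]
    # Keep candidates that pass verification; fall back to all of them
    # if the verifier rejects everything.
    verified = [a for a in candidates if verify(question, a)] or candidates
    # Majority vote over the surviving pool.
    return Counter(verified).most_common(1)[0][0]

print(sampling_based_search("What is 6 * 7?"))  # → 42
```

The appeal of the approach is that accuracy improves simply by spending more compute at inference time (more samples), with no change to the model's weights.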

Recommended read:
References:
  • AI News | VentureBeat: Less is more: UC Berkeley and Google unlock LLM potential through simple sampling
  • Windows Copilot News: Google launched Gemini 2.0, its new AI model for practically everything
  • Security & Identity: Mastering secure AI on Google Cloud, a practical guide for enterprises