News from the AI & ML world

DeeperML - #googlecloudnext

@www.bigdatawire.com // 87d
Google Cloud Next 2025 has wrapped up, showcasing Google Cloud's commitment to simplifying AI adoption for enterprises. The conference highlighted advancements in integrating AI into its data analytics and database services, with a strong focus on flexibility across deployment environments. Key announcements centered on enhancing BigQuery, its flagship database for analytics, and AlloyDB, as well as leveraging the power of Vertex AI and the new Ironwood TPU-powered AI Hypercomputer for demanding AI workloads. Google Cloud emphasized making AI more accessible to businesses of all sizes.

New AI-powered tools and features were unveiled, designed to streamline data management, enhance analytics capabilities, and enable AI-driven workflows. Google Cloud is embedding pre-built AI agents into its own software services, including BigQuery, to assist with tasks such as data engineering, data science, building data pipelines, data transformation, and anomaly detection. These agents, powered by Gemini, Google’s flagship foundation model, will provide suggestions and assistance to data analysts, scientists, and engineers, transforming the way they work with data. The Agent Development Kit (ADK) supports the creation of interconnected AI agents that can communicate regardless of their vendor or framework.

These innovations mark a significant push towards agentic AI, with Google aiming to automate traditionally automation-resistant workflows. New generative AI features within AlloyDB for PostgreSQL will help data professionals build agent-ready data views without compromising privacy and security requirements. Additionally, with Gen AI Toolbox for Databases, built in partnership with LangChain, Google intends to connect GenAI applications to several Google data sources efficiently. By incorporating the Model Context Protocol (MCP) into its new Agent Development Kit, Google is helping developers integrate external data sources into their agentic solutions.

Recommended read:
References:
  • www.bigdatawire.com: Google Cloud made a slew of analytics-related announcements at its Next 2025 conference this week, including a range of enhancements to BigQuery, its flagship database for analytics.
  • www.itpro.com: Throughout its annual event, Google Cloud has emphasized the importance of simple AI adoption for enterprises and flexibility across deployment

@www.analyticsvidhya.com // 88d
Google Cloud Next '25 saw a major push into agentic AI with the unveiling of several key technologies and initiatives aimed at fostering the development and interoperability of AI agents. Google announced the Agent Development Kit (ADK), an open-source framework designed to simplify the creation and management of AI agents. The ADK, written in Python, allows developers to build both simple and complex multi-agent systems. Complementing the ADK is Agent Garden, a collection of pre-built agent patterns and components to accelerate development. Additionally, Google introduced Agent Engine, a fully managed runtime in Vertex AI, enabling secure and reliable deployment of custom agents at a global scale.
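The core pattern the ADK targets can be sketched in plain Python. Note this is only a conceptual illustration of multi-agent delegation, not the ADK's actual API; every class and method name below is hypothetical.

```python
# A plain-Python sketch of multi-agent delegation, the pattern the ADK is
# described as supporting. NOT the ADK's actual API; all names are
# hypothetical illustrations.

class Agent:
    """A toy agent that advertises one capability and handles matching tasks."""

    def __init__(self, name, capability, handler):
        self.name = name
        self.capability = capability
        self.handler = handler

    def handle(self, task):
        return self.handler(task)


class Coordinator:
    """Routes each task to the first registered agent whose capability matches."""

    def __init__(self):
        self.agents = []

    def register(self, agent):
        self.agents.append(agent)

    def dispatch(self, capability, task):
        for agent in self.agents:
            if agent.capability == capability:
                return agent.handle(task)
        raise LookupError(f"no agent registered for capability {capability!r}")


coordinator = Coordinator()
coordinator.register(Agent("summarizer", "summarize", lambda t: t[:20] + "..."))
coordinator.register(Agent("counter", "count_words", lambda t: len(t.split())))

print(coordinator.dispatch("count_words", "agents talking to agents"))  # 4
```

In the real ADK, the coordinator role is played by a parent agent that delegates to sub-agents, and the runtime (Agent Engine, per the announcement) handles deployment and scale.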

Google is also addressing the challenge of AI agent interoperability with the introduction of the Agent2Agent (A2A) protocol. A2A is an open standard intended to provide a common language for AI agents to communicate, regardless of the frameworks or vendors used to build them. This protocol allows agents to collaborate and share information securely, streamlining workflows and reducing integration costs. The A2A initiative has garnered support from over 50 industry leaders, including Atlassian, Box, Cohere, Intuit, and Salesforce, signaling a collaborative effort to advance multi-agent systems.

These advancements are integrated within Vertex AI, Google's comprehensive platform for managing models, data, and agents. Enhancements to Vertex AI include supporting Model Context Protocol (MCP) to ensure secure data connections for agents. In addition to software advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, designed to optimize AI inferencing. Ironwood offers significantly increased compute capacity and high-bandwidth memory, further empowering AI applications within the Google Cloud ecosystem.

Recommended read:
References:
  • AI & Machine Learning: Vertex AI offers new ways to build and manage multi-agent systems
  • Thomas Roccia :verified:: Google just dropped A2A, a new protocol for agents to talk to each other.
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • techstrong.ai: Google Unfurls Raft of AI Agent Technologies at Google Cloud Next ’25
  • Analytics Vidhya: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • Data Analytics: Next 25 developer keynote: From prompt, to agent, to work, to fun
  • www.analyticsvidhya.com: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • PCMag Middle East ai: At Google Cloud Next, We're Off to See the AI Agents (And Huge Performance Gains)
  • Ken Yeung: Google’s Customer Engagement Suite Gets Human-Like AI Agents with Voice, Emotion, and Video Support
  • bdtechtalks.com: News article on how Google’s Agent2Agent can boost AI productivity.
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • www.infoq.com: News article on Google releasing the Agent2Agent protocol for agentic collaboration.

@cloud.google.com // 89d
Google Cloud Next 2025 showcased a new direction for the company, focusing on an application-centric, AI-powered cloud environment for developers and operators. The conference highlighted Google's commitment to simplifying AI adoption for enterprises, emphasizing flexibility across deployments. Key announcements included AI assistance features within Gemini Code Assist and Gemini Cloud Assist, designed to streamline the application development lifecycle. These tools introduce new agents capable of handling complex workflows directly within the IDE, aiming to offload development tasks and improve overall productivity.

Google Cloud is putting applications at the center of its cloud experience, abstracting away traditional infrastructure complexities. This application-centric approach enables developers to design, observe, secure, and optimize at the application level, rather than at the infrastructure level. To support this shift, Google introduced the Application Design Center, a service designed to streamline the design, deployment, and evolution of cloud applications. The Application Design Center provides a visual, canvas-style approach to designing and modifying application templates. It also allows users to configure application templates for deployment, view infrastructure as code in-line, and collaborate with teammates on designs.

The event also highlighted Cloud Hub, a service that unifies visibility and control over applications, providing insights into deployments, health, resource optimization, and support cases. Gemini Code Assist and Cloud Assist aim to accelerate application development and streamline cloud operations by offering agents that translate natural language requests into multi-step solutions and tools for connecting Code Assist to external services. Google's vision is to make the entire application journey smarter and more productive through AI-driven assistance and simplified cloud management.

Recommended read:
References:
  • AI & Machine Learning: Delivering an application-centric, AI-powered cloud for developers and operators
  • cloud.google.com: Today we're unveiling new AI capabilities to help cloud developers and operators at every step of the application lifecycle.
  • www.itpro.com: Google Cloud Next 2025: Targeting easy AI
  • Data Analytics: Next 25 developer keynote: From prompt, to agent, to work, to fun

staff@insideAI News // 90d
Google Cloud has unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, at the recent Google Cloud Next 2025 conference. This new custom AI accelerator is specifically designed for inference workloads, marking a shift in Google's AI chip development strategy. Ironwood aims to meet the growing demands of "thinking models" like Gemini 2.5, addressing the increasing shift from model training to inference observed across the industry. According to Amin Vahdat, Google's Vice President and General Manager of ML, Systems, and Cloud AI, the aim is to enter the "age of inference" where AI agents proactively retrieve and generate data for insights.

Ironwood's technical specifications are impressive, offering substantial computational power and efficiency. When scaled to a pod of 9,216 chips, it can deliver 42.5 exaflops of compute, which Google claims surpasses the world's fastest supercomputer, El Capitan, by more than 24 times. Each individual Ironwood chip delivers a peak compute of 4,614 teraflops. To manage the communication demands of modern AI, a full Ironwood pod is linked by Inter-Chip Interconnect (ICI) networking and draws nearly 10 MW of power, while each chip is equipped with 192 GB of High Bandwidth Memory (HBM) with bandwidth reaching 7.2 terabytes per second.
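The headline pod figure follows directly from the per-chip number; a quick arithmetic check using only the figures quoted above:

```python
# Sanity-check the quoted Ironwood figures: 9,216 chips at a peak of
# 4,614 TFLOPs each should give roughly the 42.5-exaflop pod total.
chips_per_pod = 9_216
peak_tflops_per_chip = 4_614

pod_tflops = chips_per_pod * peak_tflops_per_chip
pod_exaflops = pod_tflops / 1_000_000  # 1 exaflop = 1,000,000 teraflops

print(f"{pod_exaflops:.1f} exaflops")  # 42.5 exaflops
```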

This focus on inference is a response to the evolving AI landscape where proactive AI agents are becoming more prevalent. Ironwood is engineered to minimize data movement and latency on-chip while executing massive tensor manipulations, crucial for handling large language models and advanced reasoning tasks. Google emphasizes that Ironwood offers twice the performance per watt compared to its predecessor, Trillium, and is nearly 30 times more power efficient than Google’s first Cloud TPU from 2018, addressing the critical need for power efficiency in modern data centers.

Recommended read:
References:
  • insideAI News: Google Launches ‘Ironwood’ 7th Gen TPU for Inference
  • venturebeat.com: Google's new Ironwood chip is 24x more powerful than the world’s fastest supercomputer
  • www.bigdatawire.com: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • The Next Platform: With “Ironwood” TPU, Google Pushes The AI Accelerator To The Floor
  • www.itpro.com: Google Cloud Next 2025: Targeting easy AI

Ken Yeung@Ken Yeung // 90d
Google has launched Agent2Agent (A2A), an open interoperability protocol designed to facilitate seamless collaboration between AI agents across diverse frameworks and vendors. This initiative, spearheaded by Google Cloud, addresses the challenge of siloed AI systems by standardizing communication, ultimately automating complex workflows and boosting enterprise productivity. The A2A protocol has garnered support from over 50 technology partners, including industry giants like Salesforce, SAP, ServiceNow, and MongoDB, signaling a broad industry interest in fostering a more cohesive AI ecosystem. A2A aims to provide a universal framework, allowing AI agents to securely exchange information, coordinate actions, and integrate across various enterprise platforms, regardless of their underlying framework or vendor.

The A2A protocol is built on core principles of capability discovery, task management, collaboration, and user-experience negotiation. Agents advertise their capabilities through JSON-formatted "Agent Cards," enabling a client agent to identify the most suitable remote agent for a specific task. The protocol also manages the full lifecycle of a task, keeping client and remote agents synchronized on its status in real time. Built on established standards like HTTP and JSON, A2A is compatible with existing systems while prioritizing security. Google has released A2A as open source and is encouraging community contributions.
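As an illustration of capability discovery, an Agent Card is simply a JSON document an agent publishes. The sketch below uses field names modeled on Google's published A2A examples, but the exact schema should be checked against the current spec; the endpoint URL and skill shown here are made up.

```python
import json

# Illustrative A2A-style "Agent Card". Field names are modeled on published
# A2A examples but may not match the current spec exactly; the URL and
# skill are hypothetical.
agent_card = {
    "name": "Invoice Reconciliation Agent",
    "description": "Matches incoming invoices against open purchase orders.",
    "url": "https://agents.example.com/invoice",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "reconcile",
            "name": "Reconcile invoices",
            "description": "Match an invoice to a purchase order and flag gaps.",
        }
    ],
}

# A client agent would fetch this document over HTTP to discover what the
# remote agent can do before delegating a task to it.
print(json.dumps(agent_card, indent=2))
```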

Enterprises are beginning to use multi-agent systems, with multiple AI agents working together, even when built on different frameworks or providers. By enabling interoperability between specialized agents, A2A addresses a critical barrier to scaling agentic AI solutions, unifying workflows and reducing integration costs. Vertex AI is enhancing its services to help move towards a multi-agent ecosystem, including Agent Development Kit, Agent2Agent Protocol, Agent Garden, and Agent Engine. The protocol aims to foster innovation in AI while ensuring trustworthiness and scalability as enterprises adopt AI agents for various tasks.

Recommended read:
References:
  • AI & Machine Learning: Vertex AI offers new ways to build and manage multi-agent systems
  • techstrong.ai: Google Unfurls Raft of AI Agent Technologies at Google Cloud Next ’25
  • Ken Yeung: Google Pushes Agent Interoperability With New Dev Kit and Agent2Agent Standard
  • Analytics Vidhya: Agent-to-Agent Protocol: Helping AI Agents Work Together Across Systems
  • TestingCatalog: Google's new Agent2Agent (A2A) protocol enables seamless AI agent collaboration across diverse frameworks, enhancing enterprise productivity and automating complex workflows.
  • TheSequence: The Sequence Radar #531: A2A is the New Hot Protocol in Agent Land
  • bdtechtalks.com: How Google’s Agent2Agent can boost AI productivity through inter-agent communication
  • www.aiwire.net: Google Cloud Preps for Agentic AI Era with ‘Ironwood’ TPU, New Models and Software
  • medium.com: Security Analysis: Potential AI Agent Hijacking via MCP and A2A Protocol Insights