@github.com
//
Google Cloud recently unveiled a suite of new generative AI models and enhancements to its Vertex AI platform, designed to empower businesses and developers. The updates, announced at Google I/O 2025, include Veo 3, Imagen 4, and Lyria 2 for media creation, and Gemini 2.5 Flash and Pro for coding and application deployment. A new AI filmmaking tool called Flow combines the Veo, Imagen, and Gemini models in a single creative workflow. These advancements aim to streamline workflows, foster creativity, and simplify the development of AI-driven applications, with Google emphasizing accessibility for both technical and non-technical users.
One of the key highlights is Veo 3, Google's latest video generation model with native audio capabilities. From a text prompt alone, it can generate video with synchronized audio, including ambient sounds, dialogue, and environmental noise. Google says Veo 3 excels at interpreting complex prompts, bringing short stories to life with realistic physics and accurate lip-syncing. According to Google DeepMind CEO Demis Hassabis, users generated millions of AI videos within days of launch, and the surge in demand led Google to expand Veo 3 to 71 countries. The model is still unavailable in the EU, but Google says a rollout is on the way. The company has also made AI application deployment significantly easier with Cloud Run: applications built in Google AI Studio can now be deployed directly to Cloud Run with a single click; Gemma 3 models can be deployed from AI Studio to Cloud Run with GPU support; and a new Cloud Run MCP server lets MCP-compatible AI agents deploy applications programmatically. In addition to new models, Google is working to broaden access to its SynthID Detector for identifying synthetic media. Veo 3 was initially web-only, but Pro and Ultra subscribers can now use the model in the Gemini app for Android and iOS. Recommended read:
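For developers who prefer the command line over the one-click path, the same Cloud Run deployments can be sketched with the gcloud CLI. The service names, project, and image path below are placeholders, and the GPU flag names should be verified against your installed gcloud version:

```shell
# Deploy an app from local source to Cloud Run
# (service name and region are illustrative examples)
gcloud run deploy my-ai-studio-app --source . --region us-central1

# Deploy a container image with GPU support, as described for Gemma 3
# (verify --gpu/--gpu-type availability in your gcloud release)
gcloud run deploy gemma3-service \
  --image us-docker.pkg.dev/my-project/my-repo/gemma3:latest \
  --region us-central1 \
  --gpu 1 --gpu-type nvidia-l4
```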
References :
Derek Egan@AI & Machine Learning
//
Google Cloud is enhancing its MCP Toolbox for Databases to provide simpler and more secure access to enterprise data for AI agents. Announced at Google Cloud Next 2025, this update includes support for Model Context Protocol (MCP), an emerging open standard developed by Anthropic, which aims to standardize how AI systems connect to various data sources. The MCP Toolbox for Databases, formerly known as the Gen AI Toolbox for Databases, acts as an open-source MCP server, allowing developers to connect GenAI agents to enterprise databases like AlloyDB for PostgreSQL, Spanner, and Cloud SQL securely and efficiently.
The enhanced MCP Toolbox for Databases reduces boilerplate code, improves security through OAuth2 and OIDC, and offers end-to-end observability via OpenTelemetry integration. These features simplify the development process, allowing developers to build agents with the Agent Development Kit (ADK). The ADK, an open-source framework, supports the full lifecycle of intelligent agent development, from prototyping and evaluation to production deployment. ADK provides deterministic guardrails, bidirectional audio and video streaming capabilities, and a direct path to production deployment via Vertex AI Agent Engine. This update represents a significant step toward secure, standardized methods for AI agents to access enterprise data. Because the Toolbox is fully open source, it also benefits from community contributions covering third-party databases such as Neo4j and Dgraph. By supporting MCP, the Toolbox enables developers to leverage a single, standardized protocol to query a wide range of databases, enhancing interoperability and streamlining the development of agentic applications. New customers can also leverage Google Cloud's offer of $300 in free credit to begin building and testing their AI solutions. Recommended read:
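To make the Toolbox's role concrete, it is configured declaratively: a YAML file declares database sources and the parameterized SQL "tools" agents may call. The fragment below follows the shape of the Toolbox's published examples, but the source kind, database names, and query are illustrative placeholders; consult the current project documentation for the exact schema:

```yaml
# Illustrative MCP Toolbox for Databases configuration (tools.yaml).
# Names, credentials, and the SQL statement are placeholders.
sources:
  my-cloud-sql-source:
    kind: cloud-sql-postgres
    project: my-project
    region: us-central1
    instance: my-instance
    database: my_db
    user: ${DB_USER}
    password: ${DB_PASSWORD}

tools:
  search-orders-by-customer:
    kind: postgres-sql
    source: my-cloud-sql-source
    description: Look up recent orders for a customer by name.
    parameters:
      - name: customer_name
        type: string
        description: Customer name to search for.
    statement: SELECT * FROM orders WHERE customer ILIKE '%' || $1 || '%';

toolsets:
  order-tools:
    - search-orders-by-customer
```

Declaring tools this way, rather than in application code, is what lets any MCP-compatible agent discover and invoke them without custom integration glue.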
References :
@www.bigdatawire.com
//
References: BigDATAwire, Maginative
Google Cloud Next 2025 featured a series of announcements focused on enhancing data analytics capabilities within Google Cloud, particularly through advancements to BigQuery. These enhancements center around a vision for AI-native data analytics, aiming to make data work more conversational, contextual, and intelligent. Key innovations include the introduction of unified governance in BigQuery, AI-powered agents for data engineering and data science tasks, and the integration of Gemini, Google's flagship foundation model, to drive these intelligent capabilities. These developments are designed to simplify data management, improve data quality, and accelerate the generation of AI-driven insights for businesses.
The new intelligent unified governance in BigQuery is designed to help organizations discover, understand, and leverage their data assets more effectively. This includes a universal, AI-powered data catalog that natively integrates Dataplex, BigQuery sharing, security, and metastore capabilities. The unified governance brings together business, technical, and runtime metadata, providing end-to-end data-to-AI lineage, data profiling, insights, and secure sharing. A universal semantic search allows users to find the right data by asking questions in natural language. These advancements promise to transform governance from a burden into a powerful tool for data activation, simplifying data and AI management.

A significant aspect of these enhancements is the introduction of specialized AI agents within BigQuery and Looker. These agents are tailored to different user roles, such as data engineers and business analysts, assisting with tasks like building data pipelines, model development, and querying data in plain English. Powered by Gemini, these agents provide suggestions based on information collected through the new BigQuery Knowledge Engine, which understands schema relationships, business terms, and query history. These agents are designed to make more data available to more people without them doing more work, ultimately transforming how data professionals interact with data. Recommended read:
References :
Ken Yeung@Ken Yeung
//
References: TheSequence, TestingCatalog
Google has launched the Agent2Agent (A2A) protocol, a groundbreaking open interoperability framework designed to facilitate seamless collaboration between AI agents across different platforms and vendors. This initiative addresses the challenges posed by siloed AI systems by establishing a standardized method for agents to communicate, coordinate actions, and securely exchange information. A2A aims to streamline complex workflows, improve productivity, and foster a dynamic ecosystem where AI agents operate as composable primitives in enterprise-scale operations.
The A2A protocol is built upon key principles, including capability discovery, task management, collaboration, and user experience negotiation. Agents can publish their capabilities using JSON-formatted "Agent Cards," which allows client agents to identify the most appropriate remote agent for a given task. The protocol supports the complete lifecycle management of tasks, enabling real-time synchronization between agents. By leveraging established standards like HTTP and JSON-RPC, A2A ensures compatibility with existing infrastructure while prioritizing security with built-in authentication and authorization mechanisms. The protocol is also modality-agnostic, accommodating text, audio, video, and embedded UI components.

Google envisions A2A as a foundational layer for future AI systems, promoting collaboration and interoperability across various environments. The company has released A2A as open source, inviting the broader community to contribute to its refinement and expansion. This approach aligns with Google's strategy of fostering innovation in AI, ensuring trustworthiness, and promoting scalability. Industry experts believe A2A represents a significant step toward realizing the full potential of multi-agent ecosystems, particularly as enterprises increasingly adopt AI agents for diverse tasks, from customer service to supply chain management. Recommended read:
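The capability-discovery step can be illustrated with a short, self-contained sketch. The field names below echo the "Agent Card" idea described above, but the schema is deliberately simplified for illustration and is not the official A2A card format:

```python
import json

# A simplified "Agent Card": a JSON document an agent publishes to
# advertise its capabilities (field names are illustrative, not the
# official A2A schema).
agent_card = {
    "name": "invoice-agent",
    "description": "Extracts line items from invoices",
    "url": "https://agents.example.com/invoice",
    "capabilities": ["extract-line-items", "classify-invoice"],
}
card_json = json.dumps(agent_card)


def find_agent(cards, needed_capability):
    """Client-side discovery: return the first agent advertising a capability."""
    for card in cards:
        if needed_capability in card.get("capabilities", []):
            return card["name"]
    return None


# A client agent scans published cards to pick a remote agent for a task.
cards = [json.loads(card_json)]
print(find_agent(cards, "classify-invoice"))  # invoice-agent
```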
References :
Ken Yeung@Ken Yeung
//
Google has launched Agent2Agent (A2A), an open interoperability protocol designed to facilitate seamless collaboration between AI agents across diverse frameworks and vendors. This initiative, spearheaded by Google Cloud, addresses the challenge of siloed AI systems by standardizing communication, ultimately automating complex workflows and boosting enterprise productivity. The A2A protocol has garnered support from over 50 technology partners, including industry giants like Salesforce, SAP, ServiceNow, and MongoDB, signaling a broad industry interest in fostering a more cohesive AI ecosystem. A2A aims to provide a universal framework, allowing AI agents to securely exchange information, coordinate actions, and integrate across various enterprise platforms, regardless of their underlying framework or vendor.
The A2A protocol functions on core principles of capability discovery, task management, collaboration, and user experience negotiation. Agents can advertise their capabilities using JSON-formatted "Agent Cards," enabling client agents to identify the most suitable remote agent for a specific task. It facilitates lifecycle management for tasks, ensuring real-time synchronization. Built on established standards like HTTP and JSON, A2A ensures compatibility with existing systems while prioritizing security. Google has released A2A as open source, encouraging community contributions to enhance its functionality. Enterprises are beginning to use multi-agent systems, with multiple AI agents working together even when built on different frameworks or by different providers. By enabling interoperability between specialized agents, A2A addresses a critical barrier to scaling agentic AI solutions, unifying workflows and reducing integration costs. Vertex AI is also adding components to support this multi-agent ecosystem, including the Agent Development Kit, the Agent2Agent protocol, Agent Garden, and Agent Engine. The protocol aims to foster innovation in AI while ensuring trustworthiness and scalability as enterprises adopt AI agents for various tasks. Recommended read:
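The task lifecycle management mentioned above can be sketched as a small state machine. The state names mirror those commonly described for A2A tasks (submitted, working, input-required, completed, and so on), but the transition table here is a simplification for illustration, not the official specification:

```python
# Illustrative sketch of A2A-style task lifecycle management.
# The allowed-transition table is a simplified assumption, not the spec.
ALLOWED = {
    "submitted": {"working", "canceled"},
    "working": {"input-required", "completed", "failed", "canceled"},
    "input-required": {"working", "canceled"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
    "canceled": set(),    # terminal
}


class Task:
    def __init__(self, task_id):
        self.task_id = task_id
        self.state = "submitted"

    def transition(self, new_state):
        # Reject transitions the lifecycle does not permit.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state


task = Task("task-42")
task.transition("working")
task.transition("completed")
print(task.state)  # completed
```

Tracking tasks as explicit states is what lets two agents stay synchronized: each side can report or poll the current state without sharing internal implementation details.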
References :
@www.analyticsvidhya.com
//
Google Cloud Next '25 saw a major push into agentic AI with the unveiling of several key technologies and initiatives aimed at fostering the development and interoperability of AI agents. Google announced the Agent Development Kit (ADK), an open-source framework designed to simplify the creation and management of AI agents. The ADK, written in Python, allows developers to build both simple and complex multi-agent systems. Complementing the ADK is Agent Garden, a collection of pre-built agent patterns and components to accelerate development. Additionally, Google introduced Agent Engine, a fully managed runtime in Vertex AI, enabling secure and reliable deployment of custom agents at a global scale.
Google is also addressing the challenge of AI agent interoperability with the introduction of the Agent2Agent (A2A) protocol. A2A is an open standard intended to provide a common language for AI agents to communicate, regardless of the frameworks or vendors used to build them. This protocol allows agents to collaborate and share information securely, streamlining workflows and reducing integration costs. The A2A initiative has garnered support from over 50 industry leaders, including Atlassian, Box, Cohere, Intuit, and Salesforce, signaling a collaborative effort to advance multi-agent systems. These advancements are integrated within Vertex AI, Google's comprehensive platform for managing models, data, and agents. Enhancements to Vertex AI include supporting Model Context Protocol (MCP) to ensure secure data connections for agents. In addition to software advancements, Google unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood, designed to optimize AI inferencing. Ironwood offers significantly increased compute capacity and high-bandwidth memory, further empowering AI applications within the Google Cloud ecosystem. Recommended read:
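The coordination pattern that frameworks like the Agent Development Kit are built around, where a root agent routes work to the sub-agent whose declared skills match, can be shown in plain Python. This is not the ADK API; the classes and names below are hypothetical, intended only to illustrate the multi-agent routing idea:

```python
# Plain-Python illustration of root-agent-to-sub-agent delegation.
# These classes are hypothetical, NOT the Agent Development Kit API.
class SubAgent:
    def __init__(self, name, skills, handler):
        self.name = name
        self.skills = skills      # set of skill names this agent handles
        self.handler = handler    # callable that does the work


class RootAgent:
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def handle(self, skill, payload):
        # Route the request to the first sub-agent declaring the skill.
        for agent in self.sub_agents:
            if skill in agent.skills:
                return agent.handler(payload)
        raise LookupError(f"no agent handles {skill!r}")


summarizer = SubAgent("summarizer", {"summarize"}, lambda text: text[:20] + "...")
translator = SubAgent("translator", {"translate"}, lambda text: f"[fr] {text}")
root = RootAgent([summarizer, translator])
print(root.handle("translate", "hello"))  # [fr] hello
```

In a real system the handlers would be model-backed agents and the skill declarations would come from published metadata, but the routing logic is the same shape.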
References :
staff@insideAI News
//
Google Cloud has unveiled its seventh-generation Tensor Processing Unit (TPU), named Ironwood. This custom AI accelerator is purpose-built for inference, marking a shift in Google's AI chip development strategy. While previous TPUs handled both training and inference, Ironwood is designed to optimize the deployment of trained AI models for making predictions and generating responses. According to Google, Ironwood will allow for a new "age of inference" where AI agents proactively retrieve and generate data, delivering insights and answers rather than just raw data.
Ironwood boasts impressive technical specifications. When scaled to 9,216 liquid-cooled chips per pod, it delivers 42.5 exaflops of computing power. Each chip has a peak compute of 4,614 teraflops, accompanied by 192GB of High Bandwidth Memory with bandwidth of roughly 7.2 terabytes per second per chip. Google highlights that Ironwood delivers twice the performance per watt of its predecessor and is nearly 30 times more power-efficient than Google's first Cloud TPU from 2018. The focus on inference reflects a pivotal shift in the AI landscape: with large foundation models now widely developed, Ironwood is designed to manage the computational demands of complex "thinking models," including large language models and Mixture of Experts (MoE) architectures. Its architecture includes a low-latency, high-bandwidth Inter-Chip Interconnect (ICI) network to support coordinated communication at full TPU pod scale. This innovation is aimed at applications requiring real-time processing and predictions, and promises higher intelligence at lower cost. Recommended read:
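The headline pod-scale figure follows directly from the per-chip number, which a quick arithmetic check confirms:

```python
# Verify the quoted Ironwood pod-scale compute from the per-chip figure.
chips_per_pod = 9_216
per_chip_tflops = 4_614             # peak teraflops per chip
pod_tflops = chips_per_pod * per_chip_tflops
pod_exaflops = pod_tflops / 1e6     # 1 exaflop = 1e6 teraflops
print(round(pod_exaflops, 1))       # 42.5
```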
References :