@google.github.io
//
Google Cloud has announced the public preview of Vertex AI Agent Engine Memory Bank, a managed service that gives conversational AI agents long-term memory. With it, agents can maintain context, personalize interactions, and remember user preferences across multiple sessions. This addresses a common limitation in agent development: agents that "forget" previous interactions force repetitive conversations and degrade the user experience. Memory Bank counters this by providing a persistent, continuously updated information store for agents.
Memory Bank integrates with the Google Agent Development Kit (ADK) and supports popular frameworks such as LangGraph and CrewAI. Developers can use it to build more sophisticated, stateful agents that recall past conversations and user details, yielding more natural and efficient interactions. The service uses Google's Gemini models to extract and manage these memories, so agents work from relevant, accurate information.

By moving beyond sole reliance on an LLM's context window, which can be expensive and inefficient, Memory Bank offers a more robust way to manage an agent's knowledge. That capability is essential for production-ready agents that must handle complex user needs and provide consistent, high-quality assistance over time. The public preview reflects Google Cloud's push to give developers the tools to build genuinely personalized, context-aware AI assistants.
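The extract-then-recall pattern Memory Bank provides can be illustrated with a minimal, self-contained sketch. Everything below is a hypothetical stand-in, not the Agent Engine API: in the real service, extraction is performed by Gemini models and storage is managed for you, while here both are simulated with keyword matching so the flow across sessions is visible.

```python
# Conceptual sketch of long-term agent memory: extract durable facts from a
# finished session, then recall them in a later, separate session.
# Class and method names are illustrative stand-ins, not the real API.
from dataclasses import dataclass, field


@dataclass
class MemoryBankSketch:
    # user_id -> list of extracted memory strings
    store: dict = field(default_factory=dict)

    def extract_and_save(self, user_id: str, transcript: list[str]) -> None:
        """Naive 'extraction': keep transcript lines stating a preference."""
        facts = [line for line in transcript if "prefer" in line.lower()]
        self.store.setdefault(user_id, []).extend(facts)

    def recall(self, user_id: str, query: str) -> list[str]:
        """Return stored memories sharing at least one word with the query."""
        words = set(query.lower().split())
        return [m for m in self.store.get(user_id, [])
                if words & set(m.lower().split())]


bank = MemoryBankSketch()
bank.extract_and_save("user-1", [
    "Hi, I'm planning a trip.",
    "I prefer window seats on flights.",
])
# A later session for the same user can now surface the stored preference:
memories = bank.recall("user-1", "Which seats do I like on flights?")
print(memories)
```

The point of the pattern is that the preference survives outside any single session's context window; the managed service replaces the naive keyword steps with model-driven extraction and retrieval.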
Michael Nuñez@AI News | VentureBeat
//
Google has recently rolled out its latest Gemini 2.5 Flash and Pro models on Vertex AI, bringing advanced AI capabilities to enterprises. The release includes the general availability of Gemini 2.5 Flash and Pro, along with a new Flash-Lite model available for testing. These updates aim to provide organizations with the tools needed to build sophisticated and efficient AI solutions.
Gemini 2.5 Flash is built for speed and efficiency, making it well suited to large-scale summarization, responsive chat applications, and data extraction. Gemini 2.5 Pro handles complex reasoning, advanced code generation, and multimodal understanding, while the new Flash-Lite model offers cost-efficient performance for high-volume tasks. All are now production-ready within Vertex AI, with the stability and scalability required for mission-critical applications. Google CEO Sundar Pichai has highlighted the improved performance of the Gemini 2.5 Pro update, particularly in coding, reasoning, science, and math; the update also incorporates user feedback to improve the style and structure of responses. Google is additionally offering Supervised Fine-Tuning (SFT) for Gemini 2.5 Flash, letting enterprises tailor the model to their own data and needs, and an updated Live API with native audio is in public preview, designed to streamline development of complex, real-time audio AI systems.
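Calling one of these models from code reduces to a POST against the model's `generateContent` endpoint on Vertex AI. The sketch below (stdlib only; the project ID and region are placeholder assumptions) composes the request URL and JSON body without sending them — in practice you would attach an OAuth bearer token and POST, or use Google's client SDKs instead.

```python
import json

PROJECT = "my-project"     # placeholder: your GCP project ID
LOCATION = "us-central1"   # placeholder: a region where the model is served
MODEL = "gemini-2.5-flash"

# REST endpoint pattern for Vertex AI publisher models.
url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{LOCATION}/"
    f"publishers/google/models/{MODEL}:generateContent"
)

# Request body: a single-turn user prompt.
body = {
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize this ticket backlog."}]}
    ]
}
payload = json.dumps(body)

print(url)
# Sending requires an "Authorization: Bearer <access-token>" header (e.g. from
# `gcloud auth print-access-token`); omitted here so the sketch stays offline.
```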
@github.com
//
Google Cloud recently unveiled a suite of new generative AI models and enhancements to its Vertex AI platform, designed to empower businesses and developers. The updates, announced at Google I/O 2025, include Veo 3, Imagen 4, and Lyria 2 for media creation, and Gemini 2.5 Flash and Pro for coding and application deployment. A new tool called Flow brings the Veo, Imagen, and Gemini models together in a single creative environment. These advancements aim to streamline workflows, foster creativity, and simplify the development of AI-driven applications, with Google emphasizing accessibility for both technical and non-technical users.
One of the key highlights is Veo 3, Google's latest video generation model with audio capabilities. From text prompts alone, it generates videos with synchronized audio, including ambient sounds, dialogue, and environmental noise. Google says Veo 3 excels at understanding complex prompts, bringing short stories to life with realistic physics and lip-syncing. According to Google DeepMind CEO Demis Hassabis, users generated millions of AI videos within days of launch, and the surge in demand led Google to expand Veo 3 to 71 countries. The model remains unavailable in the EU, though Google says a rollout is on the way. Veo 3 was initially web-only, but Pro and Ultra members can now use it in the Gemini app for Android and iOS.

Google has also made AI application deployment significantly easier with Cloud Run: applications built in Google AI Studio can be deployed directly to Cloud Run with a single click; Gemma 3 models can be deployed from AI Studio to Cloud Run with GPU support; and a new Cloud Run MCP server lets MCP-compatible AI agents deploy applications programmatically. In addition to the new models, Google is working to broaden access to its SynthID Detector for identifying synthetic media.
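As a rough illustration of the Cloud Run path, deploying a GPU-backed container reduces to a single `gcloud run deploy` invocation. The service name, region, and image path below are hypothetical placeholders, and the exact flag set may vary by gcloud version, so this sketch only composes and prints the command rather than executing it.

```shell
# Hypothetical names; compose (but do not run) a GPU-backed Cloud Run deploy.
SERVICE="gemma3-demo"
REGION="us-central1"
IMAGE="us-docker.pkg.dev/my-project/my-repo/gemma3:latest"  # placeholder image

CMD="gcloud run deploy $SERVICE --image=$IMAGE --region=$REGION --gpu=1 --gpu-type=nvidia-l4"

echo "$CMD"  # review, then run in a shell with gcloud installed and authenticated
```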