Aminu Abdullahi@eWEEK
//
Google unveiled significant advances in its AI-driven media generation capabilities at Google I/O 2025, showcasing updates to Veo, Imagen, and Flow. The updates underscore Google's push to extend AI video and image creation and to give creators powerful new tools. A key highlight is Veo 3, billed as the first video generation model with integrated audio, which addresses a long-standing gap in AI-generated media by producing video with synchronized sound.
Veo 3 lets users generate high-quality visuals with synchronized audio, including ambient sounds, dialogue, and environmental noise. According to Google, the model excels at interpreting complex prompts, bringing short stories to life on screen with realistic physics and accurate lip-syncing. Veo 3 is currently available to Ultra subscribers in the US through the Gemini app and the Flow platform, as well as to enterprise users via Vertex AI, reflecting Google's intent to bring AI-driven content creation to a broad range of users.

In addition to Veo 3, Google launched Imagen 4 and Flow, an AI filmmaking tool, alongside major updates to Veo 2. Veo 2 is gaining filmmaker-focused features, including images as references for character and scene consistency, precise camera controls, outpainting, and object manipulation tools. Flow integrates the Veo, Imagen, and Gemini models into a single platform where creators can manage story elements and build content from natural-language narratives, making it easier to turn creative ideas into finished video.
@www.microsoft.com
//
Microsoft is pushing forward on multiple fronts to enhance its AI offerings, particularly within the Copilot ecosystem. Recent updates include the testing of two new voices, "Birch" and "Rain," alongside a sneak peek at a fourth avatar, "Ellie," for the assistant. These additions aim to personalize the Copilot experience across Windows, web, and mobile, giving it a clearer identity without fundamentally altering its underlying language model with each update. Ellie is still under development: in current test builds only its background loads and the animated figure itself does not appear, suggesting the release window remains undefined. These incremental avatar and voice additions are part of a broader strategy to give Copilot a clearer personality.
Microsoft's Semantic Telemetry Project is yielding insights into how people engage with AI. The data shows a strong correlation between the complexity and professional nature of tasks undertaken with AI and the likelihood of continued and increased usage: people who use AI for more technical, complex, and professional work are more likely to keep using the tool and to interact with it more often. Novice users tend to start with simpler tasks, but the complexity of their engagement grows over time. Expert users, however, reported being satisfied with AI responses only when the AI's expertise was on par with their own on the topic, while novice users reported low satisfaction regardless of the AI's expertise.

Microsoft is also tackling model efficiency with BitNet b1.58 2B4T, a 1-bit large language model (LLM) with two billion parameters designed to run efficiently on CPUs, even an Apple M2 chip. BitNet achieves this efficiency through its 1.58-bit weights, which take only three possible values (-1, 0, and +1), sharply reducing memory requirements and compute compared with traditional models. While BitNet's simplicity makes it less accurate than larger models, it compensates with a massive training dataset. The model is available on Hugging Face for anyone who wants to experiment with it.
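To make the 1.58-bit idea concrete, the following is a minimal sketch of ternary weight quantization with an absmean scale, in the spirit of BitNet b1.58; the function names and details are illustrative assumptions, not Microsoft's implementation.

```python
# Minimal sketch of ternary ("1.58-bit") weight quantization in the spirit of
# BitNet b1.58, using an absmean scaling rule. Illustrative only.
import numpy as np

def quantize_ternary(weights: np.ndarray, eps: float = 1e-8):
    """Map full-precision weights to {-1, 0, +1} plus one per-tensor scale."""
    scale = np.abs(weights).mean() + eps                 # absmean scale
    ternary = np.clip(np.round(weights / scale), -1, 1)  # round, then clamp
    return ternary.astype(np.int8), scale

def dequantize(ternary: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction used when applying the quantized weights."""
    return ternary.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(4, 8).astype(np.float32)
    q, s = quantize_ternary(w)
    print(np.unique(q))                                   # only -1, 0, +1
    print(float(np.abs(w - dequantize(q, s)).mean()))     # mean quantization error
```

Storing each weight as one of three values is what cuts memory and lets matrix multiplications reduce largely to additions and sign flips, which is why the model can run on commodity CPUs.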
Matthias Bastian@THE DECODER
//
Google has announced significant upgrades to its Gemini app, focusing on enhanced functionality, personalization, and accessibility. A key update is the rollout of the upgraded 2.0 Flash Thinking Experimental model, now supporting file uploads and boasting a 1 million token context window for processing large-scale information. This model aims to improve reasoning and response efficiency by breaking down prompts into actionable steps. The Deep Research feature, powered by Flash Thinking, allows users to create detailed multi-page reports with real-time insights into its reasoning process and is now available globally in over 45 languages, accessible for free or with expanded access for Gemini Advanced users.
Another major addition is the experimental "Personalization" feature, which integrates Gemini with Google apps such as Search to deliver tailored responses based on user activity. Gemini is also deepening its integration with Google apps such as Calendar, Notes, Tasks, and Photos, letting users handle complex multi-app requests in a single prompt.

Google is also bringing Gemini 2.0 to robots through its DeepMind AI team, which has developed two new Gemini models designed specifically for robotics. The first, Gemini Robotics, is an advanced vision-language-action (VLA) model that responds to prompts with physical motion. The second, Gemini Robotics-ER, is a vision-language model with advanced spatial understanding that enables robots to navigate changing environments. Google is partnering with robotics companies to further develop humanoid robots.

Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year; the classic Assistant will no longer be accessible on most mobile devices, marking the end of an era. The shift reflects Google's pivot toward generative AI and its belief that Gemini's capabilities will deliver a more powerful and versatile experience. Gemini will also come to tablets, cars, and connected devices such as headphones and watches. The company also introduced Gemini Embedding, a new embedding model initialized from the Gemini large language model, aimed at improving embedding quality across diverse tasks.
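As a rough illustration of how such an embedding model is typically called, here is a minimal sketch using the google-generativeai Python client; the model identifier is an assumed placeholder, so check Google's current model catalog for the exact name.

```python
# Minimal sketch of requesting embeddings via the google-generativeai client.
# The model identifier below is an assumed placeholder for the new Gemini
# Embedding model; substitute the name listed in Google's model catalog.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

result = genai.embed_content(
    model="models/gemini-embedding-exp",   # assumed model id
    content="Veo 3 pairs generated video with synchronized audio.",
    task_type="semantic_similarity",
)
print(len(result["embedding"]))  # dimensionality of the returned vector
```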
Matthias Bastian@THE DECODER
//
Google is enhancing its Gemini AI assistant with the ability to access users' Google Search history to deliver more personalized and relevant responses. This opt-in feature allows Gemini to analyze a user's search patterns and incorporate that information into its responses. The update is powered by the experimental Gemini 2.0 Flash Thinking model, which the company launched in late 2024.
This new capability, known as personalization, requires explicit user permission. Google emphasizes transparency: users can turn the feature on or off at any time, and Gemini clearly indicates which data sources inform its personalized answers. To test the feature, Google suggests asking about vacation spots, YouTube content ideas, or potential new hobbies; the system then draws on the individual's search history to make tailored suggestions.