News from the AI & ML world

DeeperML - #aiupdates

Aminu Abdullahi@eWEEK //
Google has unveiled significant advancements in its AI-driven media generation capabilities at Google I/O 2025, showcasing updates to Veo, Imagen, and Flow. The updates highlight Google's commitment to pushing the boundaries of AI in video and image creation, providing creators with new and powerful tools. A key highlight is the introduction of Veo 3, the first video generation model with integrated audio capabilities, addressing a significant challenge in AI-generated media by enabling synchronized audio creation for videos.

Veo 3 allows users to generate high-quality visuals with synchronized audio, including ambient sounds, dialogue, and environmental noise. According to Google, the model excels at understanding complex prompts, bringing short stories to life in video format with realistic physics and accurate lip-syncing. Veo 3 is currently available to Ultra subscribers in the US through the Gemini app and Flow platform, as well as to enterprise users via Vertex AI, demonstrating Google’s intent to democratize AI-driven content creation across different user segments.

In addition to Veo 3, Google has launched Imagen 4 and Flow, an AI filmmaking tool, alongside major updates to Veo 2. Veo 2 is receiving enhancements with filmmaker-focused features, including the use of images as references for character and scene consistency, precise camera controls, outpainting capabilities, and object manipulation tools. Flow integrates the Veo, Imagen, and Gemini models into a comprehensive platform allowing creators to manage story elements and create content with natural language narratives, making it easier than ever to bring creative visions to life.



References:
  • Data Phoenix: Google updated its model lineup and introduced a 'Deep Think' reasoning mode for Gemini 2.5 Pro
  • Maginative: Google’s revamped Canvas, powered by the Gemini 2.5 Pro model, lets you turn ideas into apps, quizzes, podcasts, and visuals in seconds—no code required.
  • Replicate's blog: Generate incredible images with Google's Imagen-4
  • AI News | VentureBeat: At Google I/O, Sergey Brin makes surprise appearance — and declares Google will build the first AGI
  • www.tomsguide.com: I just tried Google’s smart glasses built on Android XR — and Gemini is the killer feature
  • Data Phoenix: Google has launched major Gemini updates, including free visual assistance via Gemini Live, new subscription tiers starting at $19.99/month, advanced creative tools like Veo 3 for video generation with native audio, and an upcoming autonomous Agent Mode for complex task management.
  • sites.libsyn.com: Google's VEO 3 Is Next Gen AI Video, Gemini Crushes at Google I/O & OpenAI's Big Bet on Jony Ive
  • eWEEK: Google’s Co-Founder in Office ‘Pretty Much Every Day’ to Work on AI
  • learn.aisingapore.org: Advancing Gemini’s security safeguards – Google DeepMind
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent models are getting even better
  • TestingCatalog: Opus 4 outperforms GPT-4.1 and Gemini 2.5 Pro in coding benchmarks
  • AI Talent Development: Updates to Gemini 2.5 from Google DeepMind
  • pub.towardsai.net: This week, Google’s flagship I/O 2025 conference and Anthropic’s Claude 4 release delivered further advancements in AI reasoning, multimodal and coding capabilities, and somewhat alarming safety testing results.
  • learn.aisingapore.org: Updates to Gemini 2.5 from Google DeepMind
  • Data Phoenix: Google announced several updates across its media generation models
  • thezvi.wordpress.com: Fun With Veo 3 and Media Generation
  • Maginative: Google Gemini Can Now Watch Your Videos on Google Drive
  • www.marktechpost.com: A Coding Guide for Building a Self-Improving AI Agent Using Google’s Gemini API with Intelligent Adaptation Features
@www.microsoft.com //
Microsoft is pushing forward on multiple fronts to enhance its AI offerings, particularly within the Copilot ecosystem. Recent updates include the testing of new voices, "Birch" and "Rain," alongside a sneak peek at a fourth avatar, "Ellie," for the assistant. These additions aim to personalize the Copilot experience across Windows, web, and mobile platforms without fundamentally altering the underlying language model with each update. Ellie is still under development: at present only its background loads and the animated figure itself is absent, suggesting a release window that remains undefined. These incremental avatar and voice additions are part of a broader strategy to give Copilot a clearer personality.

Microsoft's Semantic Telemetry Project is yielding insights into how users engage with AI. The data shows a strong correlation between the complexity and professional nature of tasks undertaken with AI and the likelihood of continued, more frequent usage. People who employ AI for more technical, complex, and professional tasks are more inclined to keep using the tool and to interact with it more often. Novice users tend to start with simpler tasks, but the complexity of their engagement grows over time. Expert users, however, report satisfaction with AI responses only when the AI's expertise is on par with their own on the topic, while novice users report low satisfaction regardless of the AI's expertise.

Furthermore, Microsoft is tackling AI model efficiency with BitNet b1.58 2B4T, a 1-bit large language model (LLM) with two billion parameters. The model is designed to run efficiently on CPUs, including Apple's M2 chip. BitNet achieves this efficiency through its 1.58-bit ternary weights, which take only three possible values (-1, 0, and +1), sharply reducing memory requirements and computational cost compared to traditional full-precision models. While this simplicity makes BitNet less accurate than larger models, it compensates with a massive training dataset. The model is available on Hugging Face for experimentation.
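To illustrate the idea, here is a minimal sketch of ternary ("absmean") weight quantization in NumPy. This is an assumption-laden toy, not Microsoft's implementation: the function name `quantize_ternary` and the per-tensor scaling scheme are illustrative only.

```python
import numpy as np

def quantize_ternary(weights: np.ndarray):
    """Toy absmean quantization: map each weight to {-1, 0, +1}.

    Returns the ternary tensor plus the per-tensor scale that
    approximately reconstructs the original weights (ternary * scale).
    """
    # Scale by the mean absolute value of the tensor (epsilon avoids /0).
    scale = np.abs(weights).mean() + 1e-8
    # Round each scaled weight to the nearest integer, clipped to [-1, 1].
    ternary = np.clip(np.round(weights / scale), -1, 1)
    return ternary, scale

# Each ternary weight carries log2(3) ~ 1.58 bits of information,
# hence the "b1.58" name; matrix multiplies against such weights
# reduce to additions and subtractions, which is why CPUs suffice.
w = np.random.randn(4, 4).astype(np.float32)
t, s = quantize_ternary(w)
print(np.unique(t))  # only values drawn from {-1, 0, 1}
```

The memory saving follows directly: a float32 weight needs 32 bits, while a ternary weight needs under 2 bits, roughly a 16x reduction before any further packing.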



References:
  • Microsoft Copilot Blog: Release Notes: April 16, 2025
  • THE DECODER: BitNet: Microsoft shows how to put AI models on a diet
  • www.microsoft.com: Semantic Telemetry Project data show that people who use AI for more professional and complex tasks are more likely to keep using the tool and to use it more often. Novice AI users engage in simpler tasks, but their usage is becoming more complex.
  • www.artificiallawyer.com: Judicial office holders in the UK are being encouraged to make use of Microsoft’s ‘Copilot Chat’ genAI capability via their inhouse eJudiciary platform.