News from the AI & ML world

DeeperML - #videogeneration

Michael Nuñez@venturebeat.com //
Google has unveiled Gemini 2.5 Flash, a new AI model designed to give businesses greater control over AI costs and performance. Available in preview through Google AI Studio and Vertex AI, Gemini 2.5 Flash introduces adjustable "thinking budgets," allowing developers to specify how much computation the AI should spend on reasoning. The approach aims to strike a balance between advanced AI capabilities and cost-efficiency, addressing a key concern for businesses integrating AI into their operations. Early testers also report that the model produces notably well-crafted SVG output.

The introduction of "thinking budgets" marks a strategic move by Google to deliver cost-effective AI solutions. Developers can now fine-tune the AI's processing power, allocating resources based on the complexity of the task at hand. With Gemini 2.5 Flash, the "thinking" capability can be turned on or off, creating a hybrid reasoning model that prioritizes speed and cost when needed. This flexibility allows businesses to optimize their AI usage and pay only for the brainpower they require.

Benchmarks show significant improvements in Gemini 2.5 Flash compared to the older Gemini 2.0 Flash model. Google says the latest version delivers a major upgrade in reasoning capabilities while still prioritizing speed and cost. The "thinking budget" offers fine-grained control over the maximum number of tokens the model may generate while thinking, ranging from 0 to 24,576 tokens. A higher budget lets the model reason more deeply to improve quality; if no budget is specified, the model decides how much to think based on the complexity of the task.
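As a rough illustration of how this budget is exposed to developers, here is a minimal sketch using the google-genai Python SDK; the preview model identifier and the exact config field names are assumptions based on the launch documentation and may change:

```python
# Minimal sketch: capping Gemini 2.5 Flash's reasoning with a thinking budget.
# Assumes the google-genai SDK (`pip install google-genai`) and a valid API key.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-04-17",  # preview model name at launch; subject to change
    contents="Summarize the trade-offs of hybrid reasoning models in two sentences.",
    config=types.GenerateContentConfig(
        # thinking_budget ranges from 0 (thinking off) up to 24,576 tokens.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```

Setting the budget to 0 effectively turns the model into a fast, non-reasoning variant, while larger values trade latency and cost for deeper reasoning on harder prompts.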

Recommended read:
References :
  • venturebeat.com: Google’s new Gemini 2.5 Flash AI model introduces adjustable "thinking budgets" that let businesses pay only for the reasoning power they need, balancing advanced capabilities with cost efficiency.
  • Google DeepMind Blog: Transform text-based prompts into high-resolution eight-second videos in Gemini Advanced and use Whisk Animate to turn images into eight-second animated clips.
  • TestingCatalog: Google integrates Veo 2 AI into Gemini Advanced, enabling subscribers to create 8-second, 720p videos for TikTok and YouTube. Download MP4s with SynthID watermark.
  • Simon Willison's Weblog: Start building with Gemini 2.5 Flash
  • www.zdnet.com: Google reveals Gemini 2.5 Flash, its 'most cost-efficient thinking model'
  • developers.googleblog.com: Google's Gemini 2.5 Flash has hybrid reasoning, can be turned on or off and provides the ability for developers to set budgets to find the right trade-off between cost, quality, and latency.
  • venturebeat.com: Google’s Gemini 2.5 Flash introduces ‘thinking budgets’ that cut AI costs by 600% when turned down
  • the-decoder.com: Google’s Gemini 2.5 Flash gives you speed when you need it and reasoning when you can afford it
  • THE DECODER: Provides information about the release of Gemini 2.5 Flash, highlighting its reasoning capabilities and cost-effectiveness.
  • TestingCatalog: Google launches Gemini 2.5 Flash model with hybrid reasoning
  • bsky.app: New LLM release from Google Gemini: Gemini 2.5 Flash (preview), which lets you set a budget for how many "thinking" tokens it can use. I got it to draw me some pelicans - it has very good taste in SVG styles and comments.
  • www.marktechpost.com: Google Unveils Gemini 2.5 Flash in Preview through the Gemini API via Google AI Studio and Vertex AI.
  • LearnAI: Start building with Gemini 2.5 Flash
  • www.infoworld.com: Google previews Gemini 2.5 Flash hybrid reasoning model
  • MarkTechPost: Google Unveils Gemini 2.5 Flash in Preview through the Gemini API via Google AI Studio and Vertex AI.
  • Google DeepMind Blog: Gemini 2.5 Flash is our first fully hybrid reasoning model, giving developers the ability to turn thinking on or off.
  • learn.aisingapore.org: Start building with Gemini 2.5 Flash
  • www.marketingaiinstitute.com: This blog post highlights Google Cloud Next '25 event reveals, including Gemini 2.5 Pro, AI Agents, and more.
  • bsky.app: Gemini 2.5 Pro and Flash now have the ability to return image segmentation masks on command, as base64 encoded PNGs embedded in JSON strings I vibe coded an interactive tool for exploring this new capability - it costs a fraction of a cent per image
  • Last Week in AI: Last Week in AI discussing GPT 4.1 and Gemini 2.5 Flash
  • TestingCatalog: Testing Catalog about Gemini’s Scheduled Actions may offer AI task scheduling
  • The Official Google Blog: This model allows for adjustable thinking budgets, enabling users to control costs and choose the level of reasoning needed for specific tasks.
  • simonwillison.net: The model also allows developers to set thinking budgets to find the right tradeoff between quality, cost, and latency, notes Google AI Studio product lead Logan Kilpatrick.
  • Analytics Vidhya: 7 Things Gemini 2.5 Pro Does Better Than Any Other Chatbot!
  • Last Week in AI: OpenAI’s new GPT-4.1 AI models focus on coding, Google’s newest Gemini AI model focuses on efficiency, and more!
  • Simon Willison: Turns out Gemini 2.5 Flash non-thinking mode can do the same trick at an even lower cost... 0.0119 cents (around 1/100th of a cent). Notes here, including how I upgraded my tool to use the non-thinking model by vibe coding o4-mini.
  • techcrunch.com: Google’s newest Gemini AI model focuses on efficiency, and more!
  • www.analyticsvidhya.com: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
  • Digital Information World: Google launches Gemini 2.5 Flash model with hybrid reasoning, multimodal support, and cost-effective token pricing.
  • IEEE Spectrum: This article discusses the release of Google's new leading-edge LLM, Gemini 2.5 Pro, which has attracted much attention and interest.
  • www.analyticsvidhya.com: This article explores the capabilities of Gemini 2.5 Pro and compares it to other AI chatbots.
  • Analytics Vidhya: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
  • TechHQ: Google unveils “reasoning dial” for Gemini 2.5 Flash: thinking vs. cost
  • techhq.com: Google unveils “reasoning dial” for Gemini 2.5 Flash: thinking vs. cost
  • Last Week in AI: Last Week in AI #307 - GPT 4.1, o3, o4-mini, Gemini 2.5 Flash, Veo 2
  • Towards AI: Google's Gemini 2.5 Flash model with reasoning control allows for greater precision and control in AI applications, optimizing resources and cost.
  • www.artificialintelligence-news.com: Google's Gemini 2.5 Flash model features a "thinking budget" that allows developers to restrict processing power for problem-solving, addressing concerns about excessive resource consumption.
  • AI News: Google has introduced an AI reasoning control mechanism for its Gemini 2.5 Flash model that allows developers to limit how much processing power the system expends on problem-solving. Released on April 17, this “thinking budget” feature responds to a growing industry challenge: advanced AI models frequently overanalyse straightforward queries, consuming unnecessary computational resources and driving up costs.

@Google DeepMind Blog //
Google has integrated its Veo 2 AI model into Gemini Advanced, allowing subscribers to generate 8-second, 720p videos directly from text prompts. Gemini Advanced users can now select Veo 2 to create dynamic videos, which can be shared on platforms like TikTok and YouTube. These videos are downloaded as MP4 files with a SynthID watermark, ensuring authenticity and traceability. This integration is currently available to Gemini Advanced subscribers and does not extend to Google Workspace business and educational plans.

Google is also adding Veo 2 to Whisk, an experimental tool in Google Labs, where users can create videos from image prompts using the new Whisk Animate feature. With Veo 2, users can create detailed videos with cinematic realism from text prompts. The model is designed to better understand real-world physics and human motion, delivering fluid character movement, lifelike scenes, and finer visual details across diverse subjects and styles. You can create up to eight-second-long video clips in a 720p resolution, which will then generate an MP4 in a 16:9 aspect ratio.

Veo 2 was previously available in early access, where users could create 1080p video at a cost of 50 cents per second of generated footage. Clips are now free to produce for those on Advanced plans, though, as with most AI video-generation tools, there is a monthly limit on how many videos can be requested. Google didn't share that limit; it says it will notify users as they approach it. Alongside the Veo 2 video tool, Google is also introducing Whisk Animate, which turns your images into eight-second videos using the same technology as Veo 2. This feature isn't as widely available as Veo 2, but if you're in the US, you can access it through Google Labs.

Recommended read:
References :
  • Google DeepMind Blog: Transform text-based prompts into high-resolution eight-second videos in Gemini Advanced and use Whisk Animate to turn images into eight-second animated clips.
  • LearnAI: Starting today, Gemini Advanced users can generate and share videos using our state-of-the-art video model, Veo 2. In Gemini, you can now translate text-based prompts into dynamic videos.
  • PCMag Middle East ai: Google Gemini Advanced Now Lets You Generate 8-Second Video Clips
  • TestingCatalog: Google integrates Veo 2 AI into Gemini Advanced, enabling subscribers to create 8-second, 720p videos for TikTok and YouTube. Download MP4s with SynthID watermark.
  • Shelly Palmer: Google is just about to drop Veo, a video generation model that can create high-quality 1080p footage from text, image, and video prompts. Announced at Google I/O, Veo outputs cinematic shots with accurate physics, realistic motion, and a surprising grasp of visual storytelling — all from a short prompt.
  • eWEEK: Gemini Advanced users can now create and share high-resolution videos with its newly released Veo 2. The AI video generator Veo 2 lets users generate a cinema-quality eight-second, 720p video delivered as an MP4 file in a 16:9 landscape. Veo 2 understands real-world physics and human motion better, which enables it to deliver fluid character movement, lifelike scenes, and finer visual details.
  • Data Phoenix: Google introduces Veo 2 for video generation in Gemini and Whisk

@Google DeepMind Blog //
Google is integrating its Veo 2 video-generating AI model into Gemini Advanced, allowing subscribers to create short, cinematic videos from text prompts. The new feature, launched on April 15, 2025, enables Gemini Advanced users to generate 8-second, 720p videos in a 16:9 aspect ratio, suitable for sharing on platforms like TikTok and YouTube. These videos can be downloaded as MP4 files and include Google's SynthID watermark, ensuring transparency regarding AI-generated content. Currently, this offering is exclusively for Google One AI Premium subscribers and does not extend to Google Workspace business and educational plans.

Veo 2 is also being integrated into Whisk, an experimental tool within Google Labs. This integration includes a new feature called "Whisk Animate" that transforms uploaded images into animated video clips, also utilizing the Veo 2 model. Similar to Gemini, the video output in Whisk is limited to eight seconds and is accessible only to Premium subscribers. The integration of Veo 2 into Gemini Advanced and Whisk represents Google's efforts to compete with other AI video generation platforms.

Google's Veo 2 is designed to turn detailed text prompts into cinematic-quality videos with lifelike motion, natural physics, and visually rich scenes. The system is able to interpret detailed text prompts and turn them into fully animated clips with lifelike elements and a strong visual narrative. To ensure responsible use and transparency, Google employs its proprietary SynthID technology, which embeds an invisible watermark into each video frame. The company also implements red-teaming and additional review processes to prevent the creation of content that violates its content policies. The new video generation features are being rolled out globally and support all languages currently available in Gemini.

Recommended read:
References :
  • Google DeepMind Blog: Generate videos in Gemini and Whisk with Veo 2
  • PCMag Middle East ai: With Veo 2, videos are now free to produce for those on Advanced plans. The Whisk Animate tool also allows you to make images into 8-second videos using the same technology.
  • TestingCatalog: Gemini Advanced subscribers can now generate videos with Veo 2
  • THE DECODER: Google adds AI video generation to Gemini app and Whisk experiment
  • Analytics Vidhya: 3 Ways to Access Google Veo 2
  • www.tomsguide.com: I just tried Google's newest AI video generation features — and I'm blown away
  • www.analyticsvidhya.com: 3 Ways to Access Google Veo 2
  • LearnAI: Starting today, Gemini Advanced users can generate and share videos using our state-of-the-art video model, Veo 2. In Gemini, you can now translate text-based prompts into dynamic videos. Google Labs is also making Veo 2 available through Whisk, a generative AI experiment that allows you to create new images using both text and image prompts,...
  • www.tomsguide.com: Google rolls out Google Photos extension for Gemini — here’s what it can do
  • eWEEK: Gemini Advanced users can now create and share high-resolution videos with its newly released Veo 2.
  • Data Phoenix: Google introduces Veo 2 for video generation in Gemini and Whisk

@Google DeepMind Blog //
Google is expanding its AI video generation capabilities by integrating Veo 2, its most advanced generative video model, into the Gemini app and the experimental Whisk platform. This new functionality allows users to create short, high-resolution videos directly from text prompts, opening up new avenues for creative expression and content creation. Veo 2 is designed to produce realistic motion, natural physics, and visually rich scenes, making it a powerful tool for generating cinematic-quality content.

Currently, access to Veo 2 is primarily available to Google One AI Premium subscribers, who can generate eight-second, 720p videos in MP4 format within Gemini Advanced. The Whisk platform also incorporates Veo 2 through its "Whisk Animate" feature, enabling users to transform uploaded images into animated video clips. Google emphasizes that more detailed and descriptive text prompts generally yield better results, allowing users to fine-tune their creations and explore a wide range of styles, from realistic nature scenes to stylized and surreal sequences.

To ensure responsible AI development, Google is implementing several safeguards. All AI-generated videos created with Veo 2 will feature an invisible watermark embedded using SynthID technology, helping to identify them as AI-generated. Additionally, Google is employing red-teaming and review processes to prevent the creation of content that violates its policies. These new video generation features are being rolled out globally and support all languages currently available in Gemini, although standard Gemini users do not have access at this time.

Recommended read:
References :
  • The Official Google Blog: Video showcasing how you can generate videos in Gemini
  • chromeunboxed.com: Google has announced a significant upgrade to its AI video generation capabilities, integrating the powerful Veo 2 model into both Gemini Advanced and Whisk.
  • Google DeepMind Blog: Transform text-based prompts into high-resolution eight-second videos in Gemini Advanced and use Whisk Animate to turn images into eight-second animated clips.
  • PCMag Middle East ai: A new model called DolphinGemma can analyze sounds and put together sequences, accelerating decades-long research projects. Google is collaborating with researchers to learn how to decode dolphin vocalizations "in the quest for interspecies communication."
  • www.tomsguide.com: I just tried Google's newest AI video generation features — and I'm blown away
  • blog.google: Google's DolphinGemma AI model aims to decode dolphin communication, potentially leading to interspecies communication.
  • PCMag Middle East ai: Google's Gemini Advanced now offers free 8-second video clip generation with Veo 2, and image-to-video animation with Whisk Animate.
  • www.analyticsvidhya.com: Google's new Veo 2 model lets you create cinematic-quality videos from detailed text prompts.
  • www.artificialintelligence-news.com: Google's AI model, DolphinGemma, is designed to interpret and generate dolphin sounds, potentially paving the way for interspecies communication.
  • THE DECODER: Google adds AI video generation to Gemini app and Whisk experiment
  • TestingCatalog: Perplexity adds Gemini 2.5 Pro and voice mode to web platform
  • LearnAI: Try generating video in Gemini, powered by Veo 2
  • TestingCatalog: Gemini Advanced subscribers can now generate videos with Veo 2
  • Analytics Vidhya: Designed to turn detailed text prompts into cinematic-quality videos, Google Veo 2 creates lifelike motion, natural physics, and visually rich scenes across a range of styles. Currently, Google Veo 2 is available only to users in the United States aged 18 and older.
  • Analytics India Magazine: Google Rolls Out Video AI Model for Gemini Users, Developers
  • shellypalmer.com: Google’s Veo is Almost Here
  • www.tomsguide.com: Google rolls out Google Photos extension for Gemini — here’s what it can do
  • venturebeat.com: VentureBeat reports that Google’s Gemini 2.5 Flash introduces adjustable ‘thinking budgets’ that cut AI costs by 600% when turned down.
  • eWEEK: Google’s AI Video Generator Veo 2 Delivers Cinematic Results
  • TestingCatalog: Google launches Gemini 2.5 Flash model with hybrid reasoning
  • the-decoder.com: Google is rolling out new AI-powered video generation features in its Gemini app and the experimental tool Whisk.
  • Glenn Gabe: Smart move by Google. They are offering Google One AI Premium for FREE to college students through the spring of 2026. This gives you access to 2 TB of storage and incredible AI models, like Gemini 2.5 Pro and Veo 2, via these products: Gemini Advanced (including Deep Research, Gemini Live, Canvas, and video generation with Veo 2); NotebookLM Plus (including five times more Audio Overviews, notebooks, and more); and Gemini in Google Docs, Sheets, and Slides.
  • bsky.app: Gemini 2.5 Pro and Flash now have the ability to return image segmentation masks on command, as base64 encoded PNGs embedded in JSON strings I vibe coded an interactive tool for exploring this new capability - it costs a fraction of a cent per image https://simonwillison.net/2025/Apr/18/gemini-image-segmentation/
  • Google DeepMind Blog: Introducing Gemini 2.5 Flash
  • www.marketingaiinstitute.com: Google Cloud just wrapped its Next ‘25 event in Las Vegas, spanning everything from advanced AI models to new ways of connecting your favorite tools with Google’s agentic ecosystem.
  • aigptjournal.com: Google Veo 2: The Future of Effortless AI Video Creation for Everyone
  • Last Week in AI: LWiAI Podcast #207 - GPT 4.1, Gemini 2.5 Flash, Ironwood, Claude Max
  • learn.aisingapore.org: Introducing Gemini 2.5 Flash
  • Data Phoenix: Google introduces Veo 2 for video generation in Gemini and Whisk

Jonathan Kemper@THE DECODER //
References: THE DECODER, MarkTechPost
Meta and University of Waterloo researchers have developed MoCha, an AI system capable of generating complete character animations with synchronized speech and natural movements. Unlike previous models that primarily focused on facial animation, MoCha can render full-body movements from various camera angles. This includes lip synchronization, gestures, and interactions between multiple characters.

Early demonstrations of MoCha feature close-up and semi-close-up shots, showcasing the system's ability to generate upper body movements and gestures that align with spoken dialogue. MoCha utilizes a diffusion transformer model with 30 billion parameters to produce HD video clips approximately five seconds long at 24 frames per second. For scenes with multiple characters, users can define characters once and refer to them with simple tags, streamlining the prompting process.

Recommended read:
References :
  • THE DECODER: Meta and University of Waterloo researchers have built MoCha, an AI system that generates complete character animations with synchronized speech and natural movements.
  • MarkTechPost: Salesforce AI Introduces BingoGuard: An LLM-based Moderation System Designed to Predict both Binary Safety Labels and Severity Levels.
  • Salesforce: Salesforce believes that AI agents are only as good as the data they’re able to access, but many organizations are still struggling to bring it all together.

Kara Sherrer@eWEEK //
Runway AI Inc. has launched Gen-4, its latest AI video generation model, addressing the significant challenge of maintaining consistent characters and objects across different scenes. This new model represents a considerable advancement in AI video technology and improves the realism and usability of AI-generated videos. Gen-4 allows users to upload a reference image of an object to be included in a video, along with design instructions, and ensures that the object maintains a consistent look throughout the entire clip.

The Gen-4 model empowers users to place any object or subject in different locations while maintaining consistency, and even allows for modifications such as changing camera angles or lighting conditions. The model combines visual references with text instructions to preserve styles throughout videos. Gen-4 is currently available to paying subscribers and Enterprise customers, with additional features planned for future updates.

Recommended read:
References :
  • Analytics India Magazine: Runway introduces its Next-Gen Image-to-Video Generation AI Model
  • SiliconANGLE: Runway launches new Gen-4 AI video generator
  • THE DECODER: Runway releases Gen-4 video model with focus on consistency
  • venturebeat.com: Runway Gen-4 solves AI video’s biggest problem: character consistency across scenes
  • www.producthunt.com: Product Hunt page for Runway Gen-4.
  • eWEEK: The Gen-4 model aims to solve several problems with AI video generation including inconsistent characters and objects.
  • iThinkDifferent: Runway has released Gen-4, its latest AI model for video generation. The company says the system addresses one of the biggest challenges in AI video generation: maintaining consistent characters and objects throughout scenes.
  • Charlie Fink: Runway’s Gen-4 release overshadows OpenAI’s image upgrade as Higgsfield, Udio, Prodia, and Pika debut powerful new AI tools for video, music, and image generation.

Ashutosh Singh@The Tech Portal //
References: SiliconANGLE, THE DECODER, Maginative ...
Elon Musk's xAI has acquired Hotshot, a startup specializing in AI-powered video generation. Hotshot, founded by Aakash Sastry and John Mullan, has developed three video foundation models: Hotshot-XL, Hotshot Act One, and Hotshot. The move signals xAI's intention to enter the AI video generation market, potentially competing with OpenAI's Sora and Google's Veo 2.

The acquisition will see Hotshot's models scaled on xAI's supercomputer, Colossus, which utilizes a vast number of Nvidia chips. Hotshot trained its models on 600 million video clips, employing techniques like neural networks for automatic captioning and the bfloat16 data format to accelerate AI training. The company discontinued new video creation on March 14, 2025, and allowed existing users to download their content until March 30.
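For readers unfamiliar with the bfloat16 format mentioned above, the sketch below shows the general mixed-precision technique in PyTorch; it is purely illustrative, since Hotshot's actual training code is not public, and all names and shapes are placeholders:

```python
# Illustrative only: bfloat16 mixed-precision training in PyTorch, the general
# technique reportedly used to accelerate training. Not Hotshot's real code.
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 512).to(device)      # stand-in for a video foundation model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 512, device=device)     # dummy batch
target = torch.randn(32, 512, device=device)

for step in range(10):
    optimizer.zero_grad()
    # Run the forward pass and loss in bfloat16 to cut memory use and raise throughput.
    with torch.autocast(device_type=device, dtype=torch.bfloat16):
        loss = nn.functional.mse_loss(model(x), target)
    loss.backward()   # parameters and their gradients stay in float32 outside autocast
    optimizer.step()
```

bfloat16 keeps float32's exponent range while halving the memory per value, which is why it is a common choice for speeding up large-scale training.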

Recommended read:
References :
  • SiliconANGLE: XAI acquires AI video generation startup Hotshot
  • THE DECODER: Elon Musk's AI company xAI buys AI video generation startup Hotshot
  • The Tech Portal: Musk’s xAI acquires gen-AI video startup ‘Hotshot’ to compete with OpenAI’s Sora and Google’s Veo 2
  • Maginative: xAI Buys Hotshot, a Startup Working on AI-Generated Video

Emily Forlini@PCMag Middle East ai //
Google DeepMind has announced the pricing for its Veo 2 AI video generation model, making it available through its cloud API platform. The cost is set at $0.50 per second, which translates to $30 per minute or $1,800 per hour. While this may seem expensive, Google DeepMind researcher Jon Barron compared it to the cost of traditional filmmaking, noting that the blockbuster "Avengers: Endgame" cost around $32,000 per second to produce.

Veo 2 aims to create videos with realistic motion and high-quality output, up to 4K resolution, from simple text prompts. While it isn't the cheapest option compared to alternatives like OpenAI's Sora, which is offered through OpenAI's $200-per-month Pro subscription, Google is targeting filmmakers and studios, who typically have bigger budgets than film hobbyists. These customers would run Veo through Vertex AI, Google's platform for training and deploying advanced AI models. "Veo 2 understands the unique language of cinematography: ask it for a genre, specify a lens, suggest cinematic effects and Veo 2 will deliver," Google says.
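The per-second pricing makes back-of-the-envelope budgeting straightforward; the minimal sketch below simply applies the quoted $0.50-per-second rate (illustrative only; actual Vertex AI billing may include other charges):

```python
# Back-of-the-envelope Veo 2 cost estimate at the quoted $0.50 per second of output.
PRICE_PER_SECOND_USD = 0.50

def veo2_cost(seconds: float) -> float:
    """Estimated generation cost in USD for a clip of the given length."""
    return seconds * PRICE_PER_SECOND_USD

print(veo2_cost(8))      # 4.0    -> one 8-second clip
print(veo2_cost(60))     # 30.0   -> one minute of footage
print(veo2_cost(3600))   # 1800.0 -> one hour of footage
```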

Recommended read:
References :
  • Shelly Palmer: Shelly Palmer discusses Google’s Veo 2, an AI video generator priced at 50 cents a second.
  • www.livescience.com: LiveScience reports Google's AI is now 'better than human gold medalists' at solving geometry problems.
  • PCMag Middle East ai: Google's Veo 2 Costs $1,800 Per Hour for AI-Generated Videos
  • THE DECODER: Google Deepmind sets pricing for Veo 2 AI video generation
  • Dataconomy: Google Veo 2 pricing: 50 cents per second of AI-generated video
  • TechCrunch: Reports Google’s new AI video model Veo 2 will cost 50 cents per second.

Emily Forlini@PCMag Middle East ai //
Google is enhancing its Workspace platform, with Google Drive now offering searchable video transcripts. This new feature, rolling out to all Google Workspace users by March 26th, allows users to access and search transcripts for videos stored in Drive. The transcripts appear in a sidebar next to the video player, highlighting the currently spoken text, making it easier to find specific moments. Users can enable transcripts by clicking the settings icon and selecting "Transcript," provided the video already has captions, indicated by the "CC" button.

Google DeepMind has announced pricing for its Veo 2 video generation model, setting the cost at $0.50 per second of generated video, which works out to $30 per minute or $1,800 per hour. Veo 2 creates videos with realistic motion and high-quality output, up to 4K, from a simple text prompt. This pricing positions Veo 2 as a premium service aimed at professionals and enterprises, and a competitive alternative to conventional filmmaking once additional expenses like human labor are factored in.

Recommended read:
References :
  • Dataconomy: Google Veo 2 pricing: 50 cents per second of AI-generated video
  • TechCrunch: Google Drive users can now access and search transcripts for videos
  • The Verge: Google Drive gets searchable video transcripts
  • PCMag Middle East ai: Google's Veo 2 costs $1,800 per hour for AI-generated videos.
  • Shelly Palmer: Google’s Veo 2, the AI video generator unveiled in December, has a price tag that’s turning heads: 50 cents per second.
  • TechCrunch: Google has quietly revealed the pricing of Veo 2, the video-generating AI model that it unveiled in December.