@Google DeepMind Blog
//
Google is preparing to unveil significant AI advancements, with speculation pointing towards enhancements to its Gemini model. Rumors suggest a potential update to Gemini 2.0 Pro, possibly named "Nebula," which has been observed performing well on specific prompts. This new model is expected to incorporate advanced reasoning capabilities, adding a new layer of sophistication to Google's AI offerings.
Google's strategy involves integrating AI into various facets of its services, as evidenced by the official rollout of its Data Science Agent to most Colab users for free. Gemini 2.0 is designed to be applied universally across Google's products. It will enhance AI Overviews in Google Search, which now serve one billion users, by making them more nuanced and complex. Additionally, live video and screen sharing are being rolled out to Gemini Live, expanding the model's capabilities.
Recommended read:
References :
- Google DeepMind Blog: Google DeepMind introduced Gemini 1.5, a new model family boasting enhanced speed and efficiency for tasks such as real-time assistants and collaborations.
- www.tomsguide.com: Google unveiled Gemini 1.5, a new model family with enhanced capabilities, particularly in speed and context length.
- Maginative: Gemini App Gets a Major Upgrade: Canvas Mode, Audio Overviews, and More
- TestingCatalog: Discover Google Gemini's new Canvas and Audio Overview features, enhancing productivity in content creation and coding. Available globally for Gemini subscribers.
- Google Workspace Updates: Try Canvas, a new way to collaborate with the Gemini app
- www.techrepublic.com: Google boosts Gemini with Canvas and Audio Overview, offering real-time editing and podcast-style audio insights to power creative projects.
- AI & Machine Learning: Google's Gemini 1.5 models exhibited strong performance in chatbot capabilities, alongside generative AI innovations.
- AI Rabbit Blog: A news article describing how to use Google's Gemini AI to extract travel information from YouTube videos and generate routes and points of interest.
- Google DeepMind Blog: Today, we’re announcing Gemini 2.0, our most capable multimodal AI model yet.
- Windows Copilot News: This article discusses Google launching Gemini 2.0, its new AI model for practically everything.
- Windows Copilot News: Gemini AI can now summarize what’s in your Google Drive folders
- gHacks Technology News: Google has begun rolling out new features for its AI assistant, Gemini, enabling real-time interaction through live video and screen sharing.
- Google Workspace Updates: Get started with Gemini in Google Drive quickly with new “nudges”
- Analytics India Magazine: Google is Rolling Out Live Video and Screen Sharing to Gemini Live
- LearnAI: Google’s Data Science Agent: Can It Really Do Your Job?
- TestingCatalog: Evidence mounts for Google to reveal a new Gemini model with agentic use case this week
- NextBigFuture.com: Google Gemini 2.5 Pro is the Top AI Model
- AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
- Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
- www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
- www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
- The Official Google Blog: Google has released Gemini 2.5, our most intelligent AI model.
- MarkTechPost: Google AI Released Gemini 2.5 Pro Experimental: An Advanced AI Model that Excels in Reasoning, Coding, and Multimodal Capabilities
- Analytics India Magazine: Google's Gemini 2.5 Pro has demonstrated exceptional performance and capabilities in a wide range of tasks, positioning itself as a frontrunner in the AI landscape.
- Dataconomy: Google DeepMind unveiled Gemini 2.5 on March 25, 2025, calling it their most intelligent AI model yet.
- The Tech Basic: Google’s New AI Models “Think” Before Answering, Outperform Rivals
- The Verge: Google says its new ‘reasoning’ Gemini AI models are the best ones yet
- SiliconANGLE: Google introduces Gemini 2.5 Pro with chain-of-thought reasoning built-in.
- www.techradar.com: Google just announced Gemini 2.5 and it's the best AI reasoning model we've seen yet.
- Google DeepMind Blog: Gemini 2.5 is our most intelligent AI model, now with thinking built in.
- Shelly Palmer: Google unveiled Gemini 2.5 yesterday, marking their most significant advancement in AI reasoning models to date. The new family of AI models pauses to "think" before answering questions – a capability that puts Google in feature parity with OpenAI's "o" series, Deepseek's R series, Anthropic, xAI, and other reasoning models.
- THE DECODER: Gemini 2.5 Pro: Google has finally caught up
- TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
- intelligence-artificielle.developpez.com: Google DeepMind launched Gemini 2.5 Pro, an AI model that reasons before answering, claiming it is the best on several reasoning and coding benchmarks.
- www.techrepublic.com: Google’s Gemini 2.5 Pro is Better at Coding, Math & Science Than Your Favourite AI Model
- Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
- Maginative: The Gemini 2.5 Pro model, released recently by Google, has shown exceptional reasoning skills in various benchmarks.
- Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
- www.bitdegree.org: On March 25, Google released Gemini 2.5 Pro, the newest version of its artificial intelligence (AI) model, a few months after Gemini 2.0.
- www.computerworld.com: Google is beating the drum for Gemini 2.5, a new AI model that reportedly offers better performance than similar reasoning models from competitors such as OpenAI, Anthropic and Deepseek.
- bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.
@Latest from Tom's Guide
//
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.
Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.
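Gemini 2.5 Pro's availability on AI Studio means it can be called programmatically. The sketch below is a minimal, hedged example using the `google-genai` Python SDK; the model id and prompt wording are assumptions for illustration, and the network call is guarded so it only runs when an API key is present.

```python
import os

# Model id is an assumption for illustration; check AI Studio for the current name.
MODEL_ID = "gemini-2.5-pro"

def build_prompt(task: str) -> str:
    """Nudge the model to reason step by step before answering, in the
    spirit of the chain-of-thought behavior described above."""
    return f"Think through the problem step by step, then answer: {task}"

if os.environ.get("GOOGLE_API_KEY"):
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])
    response = client.models.generate_content(
        model=MODEL_ID,
        contents=build_prompt("Summarize the rules of chess in three sentences."),
    )
    print(response.text)
```

Gemini Advanced users get the model in the Gemini app directly; the API route above is the path for developers experimenting in AI Studio.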
Recommended read:
References :
- SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
- The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
- AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
- Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
- www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
- Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
- THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
- intelligence-artificielle.developpez.com: Google DeepMind launched Gemini 2.5 Pro, an AI model that reasons before answering, claiming it is the best on several reasoning and coding benchmarks.
- The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
- Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
- The Official Google Blog: Gemini 2.5: Our most intelligent AI model
- www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
- bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
- Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
- bdtechtalks.com: What to know about Google Gemini 2.5 Pro
- Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
- www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
- www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
- Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
- TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
- Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
- AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
- Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
- Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
- Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
- Shelly Palmer: Google's Gemini 2.5: AI That Thinks Before It Speaks
- www.producthunt.com: Gemini 2.5
- Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
- AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
- Composio: Notes on Gemini 2.5 Pro: A new coding SOTA
- www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
- www.infoworld.com: Google introduces Gemini 2.5 reasoning models
- Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
- www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
- AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
Tris Warkentin@The Official Google Blog
//
Google AI has released Gemma 3, a new family of open-source AI models designed for efficient and on-device AI applications. Gemma 3 models are built with technology similar to Gemini 2.0, intended to run efficiently on a single GPU or TPU. The models are available in various sizes: 1B, 4B, 12B, and 27B parameters, with options for both pre-trained and instruction-tuned variants, allowing users to select the model that best fits their hardware and specific application needs.
Gemma 3 offers practical advantages including efficiency and portability. For example, the 27B version has demonstrated robust performance in evaluations while still being capable of running on a single GPU. The 4B, 12B, and 27B models can process both text and images, and support more than 140 languages. The models have a context window of 128,000 tokens, making them well suited for tasks that require processing large amounts of information. Google has built safety protocols into Gemma 3, including a safety checker for images called ShieldGemma 2.
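Since the point of the four sizes is to match the model to the hardware, a simple selection rule can make that concrete. The helper below is a hypothetical sketch: the VRAM thresholds are rough assumptions for 16-bit weights, not Google's guidance, and real requirements vary with quantization and context length. The checkpoint names follow the instruction-tuned variants published on Hugging Face.

```python
# Assumed VRAM needs (GB) for 16-bit weights; real figures vary with
# quantization, context length, and runtime overhead.
GEMMA3_VARIANTS = [
    ("google/gemma-3-1b-it", 4),
    ("google/gemma-3-4b-it", 12),
    ("google/gemma-3-12b-it", 28),
    ("google/gemma-3-27b-it", 60),
]

def pick_gemma3(vram_gb: float) -> str:
    """Return the largest Gemma 3 instruction-tuned variant assumed to fit
    in the given amount of GPU memory."""
    chosen = None
    for name, needed in GEMMA3_VARIANTS:
        if vram_gb >= needed:
            chosen = name
    if chosen is None:
        raise ValueError("Not enough VRAM for any Gemma 3 variant at 16-bit.")
    return chosen

print(pick_gemma3(24))  # a 24 GB consumer GPU -> "google/gemma-3-4b-it"
```

Quantized builds shift these thresholds down considerably, which is how the 27B model ends up runnable on a single high-end GPU as described above.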
Recommended read:
References :
- MarkTechPost: Google AI Releases Gemma 3: Lightweight Multimodal Open Models for Efficient and On‑Device AI
- The Official Google Blog: Introducing Gemma 3: The most capable model you can run on a single GPU or TPU
- AI News | VentureBeat: Google unveils open source Gemma 3 model with 128k context window
- AI News: Details on the launch of Gemma 3 open AI models by Google.
- The Verge: Google calls Gemma 3 the most powerful AI model you can run on one GPU
- Maginative: Google DeepMind’s Gemma 3 Brings Multimodal AI, 128K Context Window, and More
- TestingCatalog: Gemma 3 sets new benchmarks for open compact models with top score on LMarena
- AI & Machine Learning: Announcing Gemma 3 on Vertex AI
- Analytics Vidhya: Gemma 3 vs DeepSeek-R1: Is Google’s New 27B Model a Tough Competition to the 671B Giant?
- AI & Machine Learning: How to deploy serverless AI with Gemma 3 on Cloud Run
- The Tech Portal: Google rolls outs Gemma 3, its latest collection of lightweight AI models
- eWEEK: Google’s Gemma 3: Does the ‘World’s Best Single-Accelerator Model’ Outperform DeepSeek-V3?
- The Tech Basic: Gemma 3 by Google: Multilingual AI with Image and Video Analysis
- Analytics Vidhya: Google’s Gemma 3: Features, Benchmarks, Performance and Implementation
- www.infoworld.com: Google unveils Gemma 3 multi-modal AI models
- www.zdnet.com: Google claims Gemma 3 reaches 98% of DeepSeek's accuracy - using only one GPU
- AIwire: Google unveiled the open-source Gemma 3, which is multimodal, comes in four sizes, and can now handle more information and instructions thanks to a larger context window.
- Ars OpenForum: Google’s new Gemma 3 AI model is optimized to run on a single GPU
- THE DECODER: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.
- Gradient Flow: Gemma 3: What You Need To Know
- Interconnects: Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
- OODAloop: Gemma 3, Google's newest lightweight, open-source AI model, is designed for multimodal tasks and efficient deployment on various devices.
- NVIDIA Technical Blog: Google has released lightweight, multimodal, multilingual models called Gemma 3. The models are designed to run efficiently on phones and laptops.
- LessWrong: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.
Matthias Bastian@THE DECODER
//
Google is enhancing its Gemini AI assistant with the ability to access users' Google Search history to deliver more personalized and relevant responses. This opt-in feature allows Gemini to analyze a user's search patterns and incorporate that information into its responses. The update is powered by the experimental Gemini 2.0 Flash Thinking model, which the company launched in late 2024.
This new capability, known as personalization, requires explicit user permission. Google is emphasizing transparency by allowing users to turn the feature on or off at any time, and Gemini will clearly indicate which data sources inform its personalized answers. To test the new feature Google suggests users ask about vacation spots, YouTube content ideas, or potential new hobbies. The system then draws on individual search histories to make tailored suggestions.
Recommended read:
References :
- Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
- Google DeepMind Blog: Experiment with Gemini 2.0 Flash native image generation
- THE DECODER: Google adds native image generation to Gemini language models
- THE DECODER: Google's Gemini AI assistant can now tap into users' search histories to provide more personalized responses, marking a significant expansion of the chatbot's capabilities.
- TestingCatalog: Discover the latest updates to Google's Gemini app, featuring the new 2.0 Flash Thinking model, enhanced personalization, and deeper integration with Google apps.
- The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
- Search Engine Journal: Google Search History Can Now Power Gemini AI Answers
- www.zdnet.com: Gemini might soon have access to your Google Search history - if you let it
- The Official Google Blog: The Assistant experience on mobile is upgrading to Gemini
- www.zdnet.com: Google launches Gemini with Personalization, beating Apple to personal AI
- Maginative: Google to Replace Google Assistant with Gemini on Android Phones
- www.tomsguide.com: Google is giving away Gemini's best paid features for free — here's the tools you can try now
- MacSparky: This article reports on Google's integration of Gemini AI into its search engine and discusses the implications for users and creators.
- Search Engine Land: This change will roll out to most devices except Android 9 or earlier (and some other devices).
- www.zdnet.com: Gemini's new features are now available for free, extending beyond its previous paid subscriber model.
- www.techradar.com: Discusses how Google is giving Gemini a superpower by allowing it to access your Search history, raising excitement and concerns.
- PCMag Middle East ai: This article discusses Google's plan to replace Google Assistant with Gemini AI, highlighting the timeline for the transition and requirements for the devices.
- The Tech Basic: This article announces Google’s plan to replace Google Assistant with Gemini, focusing on the company’s focus on advancing AI and integrating Gemini into its mobile product ecosystem.
- Verdaily: Google Announces New Update for its AI Wizard, Gemini: Improves User Experience
- Windows Copilot News: Google is prepping Gemini to take action inside of apps
- www.techradar.com: Worried about DeepSeek? Well, Google Gemini collects even more of your personal data
- Maginative: Gemini App Gets a Major Upgrade: Canvas Mode, Audio Overviews, and More
- TestingCatalog: Google launches Canvas and Audio Overview for all Gemini users
- Android Faithful: Google Gemini Gets A Powerful Collaborative Upgrade: Canvas and Audio Overviews Now Available
Matthias Bastian@THE DECODER
//
Google has announced significant upgrades to its Gemini app, focusing on enhanced functionality, personalization, and accessibility. A key update is the rollout of the upgraded 2.0 Flash Thinking Experimental model, now supporting file uploads and boasting a 1 million token context window for processing large-scale information. This model aims to improve reasoning and response efficiency by breaking down prompts into actionable steps. The Deep Research feature, powered by Flash Thinking, allows users to create detailed multi-page reports with real-time insights into its reasoning process and is now available globally in over 45 languages, accessible for free or with expanded access for Gemini Advanced users.
Another major addition is the experimental "Personalization" feature, integrating Gemini with Google apps like Search to deliver tailored responses based on user activity. Gemini is also strengthening its integration with Google apps such as Calendar, Notes, Tasks, and Photos, enabling users to handle complex multi-app requests in a single prompt. Google is also putting Gemini 2.0 AI into robots through its DeepMind team, which has developed two new Gemini models specifically designed to work with robots. The first, Gemini Robotics, is an advanced vision-language-action (VLA) model that uses physical motion to respond to prompts. The second, Gemini Robotics-ER, is a vision-language model (VLM) with advanced spatial understanding, enabling robots to navigate changing environments. Google is partnering with robotics companies to further develop humanoid robots.
Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year. The classic Google Assistant will no longer be accessible on most mobile devices, marking the end of an era. The shift represents Google's pivot toward generative AI, believing that Gemini's advanced AI capabilities will deliver a more powerful and versatile experience. Gemini will also come to tablets, cars, and connected devices like headphones and watches. The company also introduced Gemini Embedding, a novel embedding model initialized from the powerful Gemini Large Language Model, aiming to enhance embedding quality across diverse tasks.
Recommended read:
References :
- The Official Google Blog: Over the coming months, we’ll be upgrading users on mobile devices from Google Assistant to Gemini.
- Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
- Search Engine Journal: Google Gemini's integration of Search history blurs the line between traditional Search and AI assistants
- Maginative: Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year, marking the end of an era for the company's original voice assistant.
- MarkTechPost: Google AI Introduces Gemini Embedding: A Novel Embedding Model Initialized from the Powerful Gemini Large Language Model
- www.tomsguide.com: Google is taking Gemini to the next level and giving users more with major upgrades aimed to make Gemini even more personal, plus many of the upgrades are free.
- PCMag Middle East ai: RIP Google Assistant? Gemini AI Poised to Replace It This Year
- The Tech Basic: Android’s New AI Era: Gemini Replaces Google Assistant This Year
- Search Engine Land: Google to replace Google Assistant with Gemini
- www.tomsguide.com: Google Assistant is losing features to make way for Gemini — here's what's just been axed
- The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
- Analytics Vidhya: Google's Gemini models are undergoing significant updates, now featuring faster models, longer context lengths, and integrated AI agents.
- Google DeepMind Blog: Gemini breaks new ground: a faster model, longer context and AI agents
@Dataconomy
//
Google has enhanced the iOS experience by integrating Gemini AI with new lock screen widgets and control center access. iPhone users can now interact with Gemini directly from their lock screen, gaining quick access to Gemini Live and other tools without needing to unlock their devices. This update simplifies AI interactions on Apple's mobile platform, making it more accessible and convenient for users.
The new Gemini app widget allows instant access to the AI's voice chat feature, Gemini Live, by simply adding the widget to the lock screen and tapping it. Beyond voice chats, the update introduces three additional widgets: Camera Upload, allowing users to snap photos and send them to Gemini for analysis; Reminders & Calendar, for quickly setting events or tasks; and Text Chat, enabling immediate typed conversations. These widgets aim to streamline basic AI interactions, reducing the need to unlock the device.
Recommended read:
References :
- The Tech Basic: iPhone users no longer need to unlock their devices to chat with Google’s AI. The tech giant just released an update letting you access Gemini Live and other tools directly from your lock screen.
- PCMag Middle East ai: Gemini Will Soon Be Able to Answer Questions About What's on Your Screen
- TechCrunch: You can now talk to Google Gemini from your iPhone’s lock screen
- Digital Information World: New Update Makes Google Gemini Accessible Through the iPhone’s Lock Screen
- MacStories: Gemini for iOS Gets Lock Screen Widgets, Control Center Integration, Basic Shortcuts Actions
- 9to5Mac: Google announces ‘AI Mode’ as a new way to use Search, testing starts today
- TestingCatalog: Google unveils AI Mode in Search Labs, powered by Gemini 2.0
- www.tomsguide.com: Google launches 'AI Mode' for search — here's how to try it now
- TestingCatalog: Google plans to release new Gemini models on March 12
- Maginative: Details about Google introducing Gemini Embedding for enhanced AI.
- Windows Copilot News: Google is building smart home controls into Gemini
- Google Workspace Updates: Blog post about Gemini in the side panel of Workspace apps.
S.Dyema Zandria@The Tech Basic
//
Google is enhancing its Gemini AI with a new feature that allows users to create AI podcasts from research materials. This new capability, called Audio Overviews, converts research and study materials into engaging, podcast-style discussions featuring AI hosts. This aims to make learning and information consumption more accessible and enjoyable, particularly for educational purposes.
The Audio Overviews feature leverages Gemini's Deep Research capabilities. Users can input a topic, have Gemini generate a detailed report, and then convert that report into a conversational podcast with AI hosts. These hosts discuss the information in an approachable manner, similar to two friends exploring a topic. This tool is available to both free and paid Gemini Advanced users.
Recommended read:
References :
- The Tech Basic: Google created a system that enhances educational experiences by making study-related tasks more interesting. Gemini is an AI tool that converts dull projects and assignments into exciting podcast recordings.
- www.techrepublic.com: Google boosts Gemini with Canvas and Audio Overview, offering real-time editing and podcast-style audio insights to power creative projects.
- The Verge: Google will let you make AI podcasts from Gemini’s Deep Research. That means you can turn the in-depth reports generated by Gemini into a conversational podcast featuring two AI “hosts.”
- Windows Copilot News: Google launched Gemini 2.0, its new AI model for practically everything
- Google Workspace Updates: Provides a recap of Google Workspace Updates for the week of March 21, 2025, highlighting AI-powered features.
- Stuff South Africa: New Gemini update allows the AI assistant to see through your screen and camera
Emily Forlini@PCMag Middle East ai
//
Google DeepMind has announced the pricing for its Veo 2 AI video generation model, making it available through its cloud API platform. The cost is set at $0.50 per second, which translates to $30 per minute or $1,800 per hour. While this may seem expensive, Google DeepMind researcher Jon Barron compared it to the cost of traditional filmmaking, noting that the blockbuster "Avengers: Endgame" cost around $32,000 per second to produce.
Veo 2 aims to create videos with realistic motion and high-quality output, up to 4K resolution, based on simple text prompts. It is not the cheapest option compared to alternatives like OpenAI's Sora, which costs $200 per month, but Google is targeting filmmakers and studios, who typically have bigger budgets than film hobbyists. These customers would run Veo through Vertex AI, Google's platform for training and deploying advanced AI models. "Veo 2 understands the unique language of cinematography: ask it for a genre, specify a lens, suggest cinematic effects and Veo 2 will deliver," Google says.
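The flat per-second rate makes cost estimates trivial to script. This minimal sketch just scales the published $0.50/second figure; the Avengers: Endgame constant is the comparison quoted above, not a Google-published rate.

```python
VEO2_USD_PER_SECOND = 0.50       # Google DeepMind's published Veo 2 API rate
ENDGAME_USD_PER_SECOND = 32_000  # traditional-filmmaking comparison cited by Jon Barron

def veo2_cost(seconds: float) -> float:
    """Estimated Veo 2 generation cost in USD for a clip of the given length."""
    return seconds * VEO2_USD_PER_SECOND

print(veo2_cost(60))    # one minute  -> 30.0
print(veo2_cost(3600))  # one hour    -> 1800.0
# How many times cheaper than the Endgame figure, per second of footage:
print(ENDGAME_USD_PER_SECOND / VEO2_USD_PER_SECOND)  # -> 64000.0
```

By this arithmetic, a generated second costs 1/64,000th of the blockbuster benchmark, which is the comparison Google DeepMind is leaning on.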
Recommended read:
References :
- Shelly Palmer: Shelly Palmer discusses Google’s Veo 2, an AI video generator priced at 50 cents a second.
- www.livescience.com: LiveScience reports Google's AI is now 'better than human gold medalists' at solving geometry problems.
- PCMag Middle East ai: Google's Veo 2 Costs $1,800 Per Hour for AI-Generated Videos
- THE DECODER: Google Deepmind sets pricing for Veo 2 AI video generation
- Dataconomy: Google Veo 2 pricing: 50 cents per second of AI-generated video
- TechCrunch: Reports Google’s new AI video model Veo 2 will cost 50 cents per second.
Ryan Daws@AI News
//
OpenAI and Google are urging the US government to take decisive action to secure the nation's leadership in Artificial Intelligence. In letters to the Office of Science and Technology Policy, both companies emphasized the importance of maintaining America's lead in AI, especially as competitors like China rapidly advance. OpenAI highlighted the potential of AI to drive productivity and likened its advancements to historical leaps in innovation, advocating for open access while safeguarding against autocratic control.
They warned that America's technological lead in AI is "not wide and is narrowing". The recent submissions from March 2025 highlight urgent concerns about national security risks, economic competitiveness, and the need for strategic regulatory frameworks to maintain US leadership in AI development amid growing global competition. The emergence of China's Deepseek R1 model has triggered significant concern among major US AI developers, who view it as compelling evidence that the technological gap is quickly closing.
Recommended read:
References :
- Policy ? Ars Technica: Google joins OpenAI in pushing feds to codify AI training as fair use
- bsky.app: Google has joined OpenAI in asking the U.S. government to codify the right to train AI models on publicly available data, including copyrighted data, without restriction. Google argues that “fair use and text-and-data mining exceptions” are “critical” to AI development and innovation.
- chatgptiseatingtheworld.com: Google posted its comment to the White House Office of Science & Technology Policy’s request. Like OpenAI, Google stressed the importance of fair use to AI development, although its section was shorter than in OpenAI’s comment. Copyright: balanced copyright rules, such as fair use and text-and-data mining exceptions, have been critical to enabling AI systems to…
- AI News: OpenAI and Google call for US government action to secure AI lead
- Unite.AI: OpenAI, Anthropic, and Google Urge Action as US AI Lead Diminishes
- Maginative: OpenAI Pushes for ‘Freedom to Innovate’ in U.S. AI Action Plan
- The Verge: OpenAI and Google ask the government to let them train AI on content they don’t own
Matt G.@Search Engine Journal
//
Google is expanding its AI initiatives in both search functionality and health-related technology. AI Overviews are now available for thousands more health topics in multiple languages, including Spanish, Portuguese, and Japanese. This expansion uses Google's AI and ranking systems, aiming to provide more relevant and comprehensive information that meets high standards for clinical accuracy. These advancements incorporate health-focused refinements to Google's Gemini models, improving the summarization of health-related content.
New features like "What People Suggest" have been introduced to offer users insights from online discussions and the experiences of others with similar health conditions. This feature, currently available on mobile in the U.S., uses AI to analyze perspectives from online forums, aiding users in discovering helpful advice. In addition to the AI enhancements, Google has launched the sleek new Pixel 9a smartphone with improved battery life, available for $499. The Pixel 9a comes in four colors: obsidian, porcelain, peony and iris.
Recommended read:
Evelyn Blake@The Tech Basic
//
Google has begun rolling out real-time interaction features to its AI assistant, Gemini, enabling live video and screen sharing. These enhancements, powered by Project Astra, let users engage more intuitively with their devices and are currently available to Google One AI Premium subscribers.
The new live video feature lets users point their smartphone camera at a scene and ask Gemini questions about what it observes, with the AI answering in real time. The screen-sharing feature lets Gemini analyze and provide insights on whatever is displayed on screen, which is useful for navigating complex applications or troubleshooting issues. Google plans to expand access to more users soon.
Recommended read:
References :
- The Tech Basic: Google has started releasing new AI tools for Gemini that let the assistant analyze your phone screen or camera feed in real time.
- gHacks Technology News: Google has begun rolling out new features for its AI assistant, Gemini, enabling real-time interaction through live video and screen sharing.
- The Verge: Google is rolling out Gemini’s real-time AI video features
Soumyadeep Sarkar@The Tech Portal
//
Google has announced new AI-powered travel planning tools designed to simplify summer vacation planning. These updates span across Google Search and Maps, incorporating features like expanded itinerary suggestions, price tracking for hotels, and the ability to organize travel ideas from screenshots. The goal is to leverage AI to provide better recommendations and help users find deals, ultimately saving time and reducing the stress associated with planning a trip.
The new features include AI-generated travel recommendations for entire countries and regions, not just cities. For example, users can request itineraries with specific focuses, like a hiking trip in Costa Rica. Google is also extending price tracking to hotels, allowing users to receive email alerts when prices drop for desired locations and dates. A new feature in Google Maps uses Gemini AI to identify places from screenshots and add them to dedicated travel lists, initially launching on iOS in the U.S. with Android support coming later.
Recommended read:
Chris McKay@Maginative
//
Google is enhancing its NotebookLM tool with interactive mind maps, a feature designed to help users visualize and navigate complex information from uploaded sources. These mind maps present document topics as branching diagrams, allowing users to explore connections and ask questions about specific areas by clicking on nodes. This visual approach aims to transform how users interact with their content, moving beyond linear reading to a more intuitive exploration of interconnected concepts.
LlamaIndex, a framework for building knowledge-driven AI agents, has been integrated with Google Cloud's Gen AI Toolbox for Databases. The integration lets developers construct sophisticated AI agents with customizable workflows: LlamaIndex provides pre-built agent architectures for common use cases, along with tools to tailor agent behavior to specific requirements.
Recommended read:
References :
- AI & Machine Learning: Gen AI Toolbox for Databases announces LlamaIndex integration
- Maginative: Google Adds Interactive Mind Maps to NotebookLM
- www.techradar.com: Google’s NotebookLM adds Mind Maps to its string of research tools to help you learn faster than ever
Ellie Ramirez-Camara@Data Phoenix
//
Google has made several significant announcements regarding its AI and search capabilities. The company has launched an "AI mode" for Search, an experimental feature designed to handle complex queries that would typically require multiple traditional web searches. This new mode leverages AI to provide more advanced reasoning, thinking, and multimodal capabilities, allowing users to ask intricate questions and receive unified, ordinary language responses. Alongside this, Google has expanded access to its AI Overviews, now powered by Gemini 2.0, indicating a broader integration of AI into its search functionalities.
Google has also released Gemma 3, the latest iteration of its open AI models, aimed at improving AI accessibility. Built on the foundation of Gemini 2.0, Gemma 3 is engineered to be lightweight, portable, and adaptable, enabling developers to create AI applications across a wide range of devices. The models are available in various sizes, catering to different hardware and performance needs. Separately, the Justice Department has dropped its attempt to force Google to sell off its stake in Anthropic, signaling a shift in the legal landscape surrounding Google's AI investments.
Recommended read:
References :
- Data Phoenix: Google's new 'AI mode' adds more AI to Search and enables users to ask complex questions
- GZERO Media: The Justice Department ends its attempt to make Google sell its AI