News from the AI & ML world

DeeperML - #googlegemini

Carl Franzen@AI News | VentureBeat //
Google is enhancing Android development with its Gemini AI model, launching Gemini in Android Studio for Businesses to streamline the design of work applications. This new offering is a subscription-based service that aims to meet the growing demand for secure, privacy-conscious, and customizable AI integration within large organizations and development teams. By leveraging Gemini, Android developers can now more easily create workplace apps within the Android ecosystem, with enhanced features tailored for managing sensitive codebases and workflows. This move brings AI-assisted coding into enterprise-grade environments without compromising data governance or intellectual property protection.

Visual AI in Gemini Live is also bringing AI-powered vision to devices like the Samsung Galaxy S25. The upgrade allows users to grant Gemini Live access to their camera and screen sharing, enabling the AI to provide real-time conversational interactions about what it sees. Samsung states the new upgrade to Gemini Live means the AI can 'have a real-time conversation with users about what it sees – making everyday tasks easier.' For Galaxy S25 users, this update is already rolling out as a free upgrade, demonstrating the deepening partnership between Google and Samsung in the AI space.

In addition to benefiting developers and end users, Gemini is also being integrated into other Google services, such as Google Chat. Gemini in Google Chat can now help users catch up on unread conversations with summaries, even extending this ability to direct messages and read conversations. This functionality, already available, has also been expanded to include three additional languages: Spanish, Portuguese, and German. These enhancements across different platforms show Google's commitment to leveraging AI to improve productivity and user experience across its suite of products.

Recommended read:
References :
  • AI News | VentureBeat: Google launches Gemini in Android Studio for Businesses, making it easier for devs to design work apps
  • www.techradar.com: Your Samsung Galaxy S25 just got a huge free Gemini upgrade that gives your AI assistant eyes
  • www.tomsguide.com: Google Gemini Live brings AI-powered vision to Galaxy S25 and Pixel 9 — here's how it works
  • www.eweek.com: Samsung’s Galaxy S25 Now Talks to You — And Sees What You See — Thanks to Real-Time AI
  • Android Developers Blog: Gemini in Android Studio for businesses: Develop with confidence, powered by AI
  • Developer Tech News: Google enhances Android Studio with enterprise Gemini AI tools
  • cloud.google.com: Delivers an application-centric, AI-powered cloud for developers and operators.

@www.datasciencecentral.com //
Google is significantly enhancing user interface (UI) design through AI-driven personalization, moving away from static interfaces to adaptive experiences tailored for individual users. This personalization leverages machine learning algorithms, behavioral analytics, and real-time data processing to modify UI elements dynamically. AI systems gather data from various sources like browsing history, purchase behavior, and device interactions, which are then cleaned, structured, and anonymized to maintain privacy. These models analyze user behavior and segment users into distinct profiles based on their preferences and interaction history, leading to real-time content adaptation through reinforcement learning.
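
The flow described here (collect anonymized interaction data, segment users, adapt the UI in real time) can be illustrated with a deliberately simplified sketch. The variant names, reward signal, and epsilon-greedy policy below are illustrative assumptions, not a description of Google's actual systems:

```python
import random
from collections import defaultdict

# Toy illustration of real-time UI adaptation via an epsilon-greedy bandit.
# A production system would use richer behavioral features and anonymized profiles.
UI_VARIANTS = ["compact_layout", "visual_layout", "text_heavy_layout"]
EPSILON = 0.1  # fraction of traffic reserved for exploration

clicks = defaultdict(int)       # positive interactions per variant
impressions = defaultdict(int)  # times each variant was shown

def choose_variant() -> str:
    """Pick a UI variant: explore occasionally, otherwise exploit the best click rate."""
    if random.random() < EPSILON or not any(impressions.values()):
        return random.choice(UI_VARIANTS)
    return max(UI_VARIANTS, key=lambda v: clicks[v] / max(impressions[v], 1))

def record_interaction(variant: str, clicked: bool) -> None:
    """Update counts from an (anonymized) user interaction."""
    impressions[variant] += 1
    if clicked:
        clicks[variant] += 1

# Simulated feedback loop with made-up click probabilities per variant
true_rates = {"compact_layout": 0.05, "visual_layout": 0.12, "text_heavy_layout": 0.08}
for _ in range(1000):
    v = choose_variant()
    record_interaction(v, clicked=random.random() < true_rates[v])

print({v: round(clicks[v] / max(impressions[v], 1), 3) for v in UI_VARIANTS})
```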

The Samsung Galaxy S25 series is receiving a significant AI upgrade in the form of Gemini Live with Visual AI. This update, powered by Google Gemini, enables S25 users to have real-time visual and conversational interactions with their AI assistant. Visual AI allows users to grant Gemini Live access to their camera and screen sharing, enabling the AI to identify and provide information about what the user is seeing. This feature aims to make everyday tasks easier through real-time conversations about the user's surroundings.

Google is rolling out Gemini Live Video and screensharing features, initially available on the Google Pixel 9, Samsung Galaxy S25, and for Gemini Advanced subscribers. Gemini Live Video allows users to point their phone's camera at objects in the real world and ask questions to learn more about them, similar to Apple's Visual Intelligence. Screensharing enables users to share their phone screen with Gemini and ask questions about the content displayed on websites or apps. Other Android phones are expected to receive these features later this month, but they will be exclusive to Gemini Advanced subscribers, requiring a $20 monthly subscription.

Recommended read:
References :
  • Data Science Central: How AI-driven personalization is transforming user interface design
  • www.techradar.com: Your Samsung Galaxy S25 just got a huge free Gemini upgrade that gives your AI assistant eyes
  • www.tomsguide.com: Google Gemini Live brings AI-powered vision to Galaxy S25 and Pixel 9 — here's how it works

@simonwillison.net //
Google has broadened access to its advanced AI model, Gemini 2.5 Pro, showcasing impressive capabilities and competitive pricing designed to challenge rival models like OpenAI's GPT-4o and Anthropic's Claude 3.7 Sonnet. Google's latest flagship model is currently recognized as a top performer, excelling in Optical Character Recognition (OCR), audio transcription, and long-context coding tasks. Alphabet CEO Sundar Pichai highlighted Gemini 2.5 Pro as Google's "most intelligent model + now our most in demand." Demand has increased by over 80 percent this month alone across both Google AI Studio and the Gemini API.

Google's expansion includes a tiered pricing structure for the Gemini 2.5 Pro API, offering a more affordable option than competitors. Prompts with fewer than 200,000 tokens are priced at $1.25 per million tokens for input and $10 per million for output, while larger prompts cost $2.50 and $15 per million tokens, respectively. Prompt caching is not yet available, but its future implementation could lower costs further. Grounding with Google Search includes 500 free queries per day on the free tier and 1,500 free queries per day on the paid tier, with additional queries billed at $35 per 1,000.
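
As a worked example of this tiering, the sketch below uses only the per-token prices quoted above; real invoices also count invisible reasoning tokens as output, per the Simon Willison reference below:

```python
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate Gemini 2.5 Pro API cost in USD from the prices quoted above.

    Prompts under 200,000 tokens: $1.25/M input, $10/M output.
    Larger prompts:               $2.50/M input, $15/M output.
    """
    if input_tokens < 200_000:
        input_rate, output_rate = 1.25, 10.00
    else:
        input_rate, output_rate = 2.50, 15.00
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

print(gemini_25_pro_cost(2, 623))          # tiny prompt, long "thinking" answer: well under a cent
print(gemini_25_pro_cost(300_000, 5_000))  # a 300k-token prompt crosses into the higher tier: $0.825
```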

The AI research group EpochAI reported that Gemini 2.5 Pro scored 84% on the GPQA Diamond benchmark, surpassing the roughly 70% typically achieved by human experts. The benchmark consists of challenging multiple-choice questions in biology, chemistry, and physics, and the independent result corroborates Google's own reported numbers. The model is now available as a paid offering alongside a free tier; data from the free tier may be used to improve Google's products, while paid-tier data is not. Paid rate limits vary by billing tier, ranging from 150 requests per minute on the lowest tier to 2,000 per minute on the highest. Google will retire the Gemini 2.0 Pro preview entirely in favor of 2.5.

Recommended read:
References :
  • Data Phoenix: Google Unveils Gemini 2.5: Its Most Intelligent AI Model Yet
  • AI News | VentureBeat: Gemini 2.5 Pro is now available without limits and for cheaper than Claude, GPT-4o
  • Simon Willison's Weblog: Google's Gemini 2.5 Pro is currently the top model for OCR, audio transcription and long-context coding, and can now be paid for: the gemini-2.5-pro-preview-03-25 ID costs $1.25/$10 per million input/output tokens for prompts under 200,000 tokens and $2.50/$15 above (up to the 1,048,576-token maximum), cheaper than GPT-4o ($2.50/$10) and Claude 3.7 Sonnet ($3/$15); invisible reasoning tokens are billed as output, the free gemini-2.5-pro-exp-03-25 ID remains available, and the llm-gemini plugin (llm install -U llm-gemini; llm -m gemini-2.5-pro-preview-03-25 hi) now supports the paid model.
  • THE DECODER: Google has opened broader access to Gemini 2.5 Pro, its latest AI flagship model, which demonstrates impressive performance in scientific testing while introducing competitive pricing.
  • Bernard Marr: Google's latest AI model, Gemini 2.5 Pro, is poised to streamline complex mathematical and coding operations.
  • The Cognitive Revolution: In this illuminating episode of The Cognitive Revolution, host Nathan Labenz speaks with Jack Rae, principal research scientist at Google DeepMind and technical lead on Google's thinking and inference time scaling work.
  • bsky.app: Gemini 2.5 Pro pricing was announced today - it's cheaper than both GPT-4o and Claude 3.7 Sonnet. I've updated my llm-gemini plugin to add support for the new paid model.
  • Last Week in AI: Google unveils a next-gen AI reasoning model, OpenAI rolls out image generation powered by GPT-4o to ChatGPT, Tencent’s Hunyuan T1 AI reasoning model rivals DeepSeek in performance and price

Maximilian Schreiner@THE DECODER //
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.

Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.
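
As a rough sketch of how a developer might try the model through the Gemini API, here is a minimal example using the google-generativeai Python SDK and the experimental model ID quoted elsewhere in this digest; the API key and prompt are placeholders, and the exact SDK surface may differ by version:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: an API key from Google AI Studio

# Experimental model ID referenced in this digest; swap in the paid preview ID if needed.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

# Text prompt; the same call also accepts multimodal parts, e.g. images:
#   model.generate_content(["What does this chart show?", PIL.Image.open("chart.png")])
response = model.generate_content(
    "Plan, step by step, a small web app that visualizes benchmark scores, then write the HTML."
)
print(response.text)
print(response.usage_metadata)  # prompt/output token counts (reasoning tokens count as output)
```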

Recommended read:
References :
  • SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
  • The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
  • AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
  • Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
  • www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
  • THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
  • intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it is the best on several reasoning and coding benchmarks.
  • The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
  • Ars OpenForum: Google says the new Gemini 2.5 Pro model is its "smartest" AI yet
  • The Official Google Blog: Gemini 2.5: Our most intelligent AI model
  • www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
  • bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
  • Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
  • bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.
  • Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
  • www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
  • www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
  • Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
  • TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
  • Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
  • AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
  • Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
  • Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
  • Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
  • www.producthunt.com: Google's most intelligent AI model
  • Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
  • AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
  • thezvi.wordpress.com: Gemini 2.5 Pro Experimental is America’s next top large language model. That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of …
  • www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
  • www.infoworld.com: Google introduces Gemini 2.5 reasoning models
  • Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
  • www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
  • AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
  • Communications of the ACM: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • The Next Web: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • www.tomsguide.com: Gemini 2.5 Pro is now free to all users in surprise move
  • Composio: Google just launched Gemini 2.5 Pro on March 26th, claiming to be the best in coding, reasoning, and overall performance.
  • Composio: Google's Gemini 2.5 Pro, released on March 26th, is being hailed for its enhanced reasoning, coding, and multimodal capabilities.
  • Analytics India Magazine: Gemini 2.5 Pro is better than the Claude 3.7 Sonnet for coding in the Aider Polyglot leaderboard.
  • www.zdnet.com: Gemini's latest model outperforms OpenAI's o3 mini and Anthropic's Claude 3.7 Sonnet on the latest benchmarks. Here's how to try it.
  • www.marketingaiinstitute.com: [The AI Show Episode 142]: ChatGPT’s New Image Generator, Studio Ghibli Craze and Backlash, Gemini 2.5, OpenAI Academy, 4o Updates, Vibe Marketing & xAI Acquires X
  • www.tomsguide.com: Gemini 2.5 is free, but can it beat DeepSeek?
  • www.tomsguide.com: Google Gemini could soon help your kids with their homework — here’s what we know
  • PCWorld: Google’s latest Gemini 2.5 Pro AI model is now free for all users
  • www.techradar.com: Google just made Gemini 2.5 Pro Experimental free for everyone, and that's awesome.
  • Last Week in AI: #205 - Gemini 2.5, ChatGPT Image Gen, Thoughts of LLMs

Evelyn Blake@The Tech Basic //
Google has begun rolling out real-time interaction features to its AI assistant, Gemini, enabling live video and screen sharing. These enhancements, powered by Project Astra, allow users to engage more intuitively with their devices, marking a significant advancement in AI-assisted technology. These features are available to Google One AI Premium subscribers.

The new live video feature allows users to utilize their smartphone cameras to engage in real-time visual interactions with Gemini, enabling the AI to answer questions about what it observes. Gemini can analyze a user’s phone screen or camera feed in real-time and instantly answer questions. The screen-sharing feature enables the AI to analyze and provide insights on the displayed content, useful for navigating complex applications or troubleshooting issues. Google plans to expand access to more users soon.

Recommended read:
References :
  • The Tech Basic: Google has started releasing new AI tools for Gemini that let the assistant analyze your phone screen or camera feed in real time.
  • gHacks Technology News: Google has begun rolling out new features for its AI assistant, Gemini, enabling real-time interaction through live video and screen sharing.
  • The Verge: Google is rolling out Gemini’s real-time AI video features

Evelyn Blake@The Tech Basic //
Google has started rolling out new AI tools for Gemini, allowing the assistant to analyze your phone screen or camera feed in real time. These features are powered by Project Astra and are available to Google One AI Premium subscribers. The update transforms Gemini into a visual helper, enabling users to point their camera at an object and receive descriptions or suggestions from the AI.

These features are part of Google's Project Astra initiative, which aims to enhance AI's ability to understand and interact with the real world in real-time. Gemini can now analyze your screen in real-time through a "Share screen with Live" button and analyze your phone's camera. Early adopters have tested the screen-reading tool, and Google plans to expand access to more users soon. With Gemini's live video and screen sharing functionalities, Google is positioning itself ahead in the competitive landscape of AI assistants.

Recommended read:
References :
  • The Tech Basic: Google has started releasing new AI tools for Gemini that let the assistant analyze your phone screen or camera feed in real time.
  • gHacks Technology News: Google rolls out Project Astra-powered features in Gemini AI
  • www.techradar.com: Gemini can now see your screen and judge your tabs

Matthias Bastian@THE DECODER //
Google has announced significant upgrades to its Gemini app, focusing on enhanced functionality, personalization, and accessibility. A key update is the rollout of the upgraded 2.0 Flash Thinking Experimental model, now supporting file uploads and boasting a 1 million token context window for processing large-scale information. This model aims to improve reasoning and response efficiency by breaking down prompts into actionable steps. The Deep Research feature, powered by Flash Thinking, allows users to create detailed multi-page reports with real-time insights into its reasoning process and is now available globally in over 45 languages, accessible for free or with expanded access for Gemini Advanced users.

Another major addition is the experimental "Personalization" feature, which integrates Gemini with Google apps like Search to deliver tailored responses based on user activity. Gemini is also strengthening its integration with Google apps such as Calendar, Notes, Tasks, and Photos, enabling users to handle complex multi-app requests in a single prompt. Google is also putting Gemini 2.0 AI into robots through its DeepMind team, which has developed two new Gemini models specifically designed for robotics. The first, Gemini Robotics, is an advanced vision-language-action (VLA) model that responds to prompts with physical motion. The second, Gemini Robotics-ER, is a vision-language model with advanced spatial understanding, enabling robots to navigate changing environments. Google is partnering with robotics companies to further develop humanoid robots.

Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year. The classic Google Assistant will no longer be accessible on most mobile devices, marking the end of an era. The shift represents Google's pivot toward generative AI, believing that Gemini's advanced AI capabilities will deliver a more powerful and versatile experience. Gemini will also come to tablets, cars, and connected devices like headphones and watches. The company also introduced Gemini Embedding, a novel embedding model initialized from the powerful Gemini Large Language Model, aiming to enhance embedding quality across diverse tasks.
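
To give a sense of how an embedding model like Gemini Embedding is typically consumed, here is a minimal, hedged sketch using the google-generativeai Python SDK; the model ID below is a placeholder, since the announcement does not name one, and the API key is an assumption:

```python
import numpy as np
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: an API key from Google AI Studio

# Placeholder ID: substitute whichever Gemini Embedding model ID Google publishes.
MODEL_ID = "models/gemini-embedding-placeholder"

def embed(text: str) -> np.ndarray:
    """Return an embedding vector for a piece of text."""
    result = genai.embed_content(model=MODEL_ID, content=text)
    return np.array(result["embedding"])

a = embed("Gemini now integrates with Calendar, Notes, and Tasks.")
b = embed("Google's assistant can handle multi-app requests in one prompt.")

# Cosine similarity as a quick relatedness check between the two sentences
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
```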

Recommended read:
References :
  • The Official Google Blog: Over the coming months, we’ll be upgrading users on mobile devices from Google Assistant to Gemini.
  • Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
  • Search Engine Journal: Google Gemini's integration of Search history blurs the line between traditional Search and AI assistants
  • Maginative: Google will replace its long-standing Google Assistant with Gemini on mobile devices later this year, marking the end of an era for the company's original voice assistant.
  • MarkTechPost: Google AI Introduces Gemini Embedding: A Novel Embedding Model Initialized from the Powerful Gemini Large Language Model
  • www.tomsguide.com: Google is taking Gemini to the next level and giving users more with major upgrades aimed to make Gemini even more personal, plus many of the upgrades are free.
  • PCMag Middle East ai: RIP Google Assistant? Gemini AI Poised to Replace It This Year
  • The Tech Basic: Android’s New AI Era: Gemini Replaces Google Assistant This Year
  • Search Engine Land: Google to replace Google Assistant with Gemini
  • www.tomsguide.com: Google Assistant is losing features to make way for Gemini — here's what's just been axed
  • The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
  • Analytics Vidhya: Google's Gemini models are undergoing significant updates, now featuring faster models, longer context lengths, and integrated AI agents.
  • Google DeepMind Blog: Gemini breaks new ground: a faster model, longer context and AI agents

Matthias Bastian@THE DECODER //
Google is enhancing its Gemini AI assistant with the ability to access users' Google Search history to deliver more personalized and relevant responses. This opt-in feature allows Gemini to analyze a user's search patterns and incorporate that information into its responses. The update is powered by the experimental Gemini 2.0 Flash Thinking model, which the company launched in late 2024.

This new capability, known as personalization, requires explicit user permission. Google is emphasizing transparency by allowing users to turn the feature on or off at any time, and Gemini will clearly indicate which data sources inform its personalized answers. To test the new feature, Google suggests users ask about vacation spots, YouTube content ideas, or potential new hobbies. The system then draws on individual search histories to make tailored suggestions.

Recommended read:
References :
  • Android Faithful: Google's AI tool Gemini gets a boost by working with deeper insight about you through personalization and app connections.
  • Google DeepMind Blog: Experiment with Gemini 2.0 Flash native image generation
  • THE DECODER: Google adds native image generation to Gemini language models
  • THE DECODER: Google's Gemini AI assistant can now tap into users' search histories to provide more personalized responses, marking a significant expansion of the chatbot's capabilities.
  • TestingCatalog: Discover the latest updates to Google's Gemini app, featuring the new 2.0 Flash Thinking model, enhanced personalization, and deeper integration with Google apps.
  • The Official Google Blog: Gemini gets personal, with tailored help from your Google apps
  • Search Engine Journal: Google Search History Can Now Power Gemini AI Answers
  • www.zdnet.com: Gemini might soon have access to your Google Search history - if you let it
  • The Official Google Blog: The Assistant experience on mobile is upgrading to Gemini
  • www.zdnet.com: Google launches Gemini with Personalization, beating Apple to personal AI
  • Maginative: Google to Replace Google Assistant with Gemini on Android Phones
  • www.tomsguide.com: Google is giving away Gemini's best paid features for free — here's the tools you can try now
  • MacSparky: This article reports on Google's integration of Gemini AI into its search engine and discusses the implications for users and creators.
  • Search Engine Land: This change will roll out to most devices except Android 9 or earlier (and some other devices).
  • www.zdnet.com: Gemini's new features are now available for free, extending beyond its previous paid subscriber model.
  • www.techradar.com: Discusses how Google is giving Gemini a superpower by allowing it to access your Search history, raising excitement and concerns.
  • PCMag Middle East ai: This article discusses Google's plan to replace Google Assistant with Gemini AI, highlighting the timeline for the transition and requirements for the devices.
  • The Tech Basic: This article announces Google’s plan to replace Google Assistant with Gemini, focusing on the company’s focus on advancing AI and integrating Gemini into its mobile product ecosystem.
  • Verdaily: Google Announces New Update for its AI Wizard, Gemini: Improves User Experience
  • Windows Copilot News: Google is prepping Gemini to take action inside of apps
  • www.techradar.com: Worried about DeepSeek? Well, Google Gemini collects even more of your personal data
  • Maginative: Gemini App Gets a Major Upgrade: Canvas Mode, Audio Overviews, and More
  • TestingCatalog: Google launches Canvas and Audio Overview for all Gemini users
  • Android Faithful: Google Gemini Gets A Powerful Collaborative Upgrade: Canvas and Audio Overviews Now Available

@Google DeepMind Blog //
Google is pushing the boundaries of AI and robotics with its Gemini AI models. Gemini Robotics, an advanced vision-language-action model, now enables robots to perform physical tasks with improved generalization, adaptability, and dexterity. This model interprets and acts on text, voice, and image data, showcasing Google's advancements in integrating AI for practical applications. Furthermore, the development of Gemini Robotics-ER, which incorporates embodied reasoning capabilities, signifies another step toward smarter, more adaptable robots.

Google's approach to robotics emphasizes safety, employing both physical and semantic safety systems. Separately, Google is inviting filmmakers and creators to experiment with its Veo video generation model to help improve its design and development. Veo builds on years of generative video model work, including the Generative Query Network (GQN), DVD-GAN, Imagen-Video, Phenaki, WALT, VideoPoet and Lumiere, combining architecture, scaling laws and other novel techniques to improve quality and output resolution.

Recommended read:
References :
  • Google DeepMind Blog: Gemini Robotics brings AI into the physical world
  • Maginative: Google DeepMind Unveils Gemini Robotics Models to Bridge AI and Physical World
  • IEEE Spectrum: With Gemini Robotics, Google Aims for Smarter Robots
  • The Official Google Blog: Take a closer look at our new Gemini models for robotics.
  • THE DECODER: Google Deepmind unveils new AI models for robotic control
  • www.tomsguide.com: Google is putting its Gemini 2.0 AI into robots — here's how it's going
  • Verdict: Google DeepMind unveils Gemini AI models for robotics
  • MarkTechPost: Google DeepMind’s Gemini Robotics: Unleashing Embodied AI with Zero-Shot Control and Enhanced Spatial Reasoning
  • LearnAI: Introducing Gemini Robotics, Google DeepMind's Gemini 2.0-based model designed for robotics, extending multimodal reasoning across text, images, audio and video beyond the digital realm (research published 12 March 2025, Carolina Parada).
  • OODAloop: Google DeepMind unveils new AI models for robotic control.
  • www.producthunt.com: Gemini Robotics
  • Last Week in AI: Last Week in AI #303 - Gemini Robotics, Gemma 3, CSM-1B
  • Windows Copilot News: Google is prepping Gemini to take action inside of apps
  • Last Week in AI: Discusses Gemini Robotics in the context of general AI agents and robotics.
  • www.infoq.com: Google DeepMind unveils Gemini Robotics, an advanced AI model for enhancing robotics through vision, language, and action.
  • AI & Machine Learning: This article discusses how generative AI is poised to revolutionize multiplayer games, offering personalized experiences through dynamic narratives and environments. The article specifically mentions Google's Gemini AI model and its potential to enhance gameplay.
  • Gradient Flow: This podcast episode discusses various advancements in AI, including Google's Gemini Robotics and Gemma 3, as well as the evolving regulatory landscape across different countries.
  • Insight Partners: This article highlights Andrew Ng's keynote at ScaleUp:AI '24, where he discusses the exciting trends in AI agents and applications, mentioning Google's Gemini AI assistant and its role in driving innovation.
  • www.tomsguide.com: You can now use Google Gemini without an account — here's how to get started

@Dataconomy //
Google has enhanced the iOS experience by integrating Gemini AI with new lock screen widgets and control center access. iPhone users can now interact with Gemini directly from their lock screen, gaining quick access to Gemini Live and other tools without needing to unlock their devices. This update simplifies AI interactions on Apple's mobile platform, making it more accessible and convenient for users.

The new Gemini app widget allows instant access to the AI's voice chat feature, Gemini Live, by simply adding the widget to the lock screen and tapping it. Beyond voice chats, the update introduces three additional widgets: Camera Upload, allowing users to snap photos and send them to Gemini for analysis; Reminders & Calendar, for quickly setting events or tasks; and Text Chat, enabling immediate typed conversations. These widgets aim to streamline basic AI interactions, reducing the need to unlock the device.

Recommended read:
References :
  • The Tech Basic: iPhone users no longer need to unlock their devices to chat with Google’s AI. The tech giant just released an update letting you access Gemini Live and other tools directly from your lock screen.
  • PCMag Middle East ai: Gemini Will Soon Be Able to Answer Questions About What's on Your Screen
  • TechCrunch: You can now talk to Google Gemini from your iPhone’s lock screen
  • Digital Information World: New Update Makes Google Gemini Accessible Through the iPhone’s Lock Screen
  • MacStories: Gemini for iOS Gets Lock Screen Widgets, Control Center Integration, Basic Shortcuts Actions
  • 9to5Mac: Google announces ‘AI Mode’ as a new way to use Search, testing starts today
  • TestingCatalog: Google unveils AI Mode in Search Labs, powered by Gemini 2.0
  • www.tomsguide.com: Google launches 'AI Mode' for search — here's how to try it now
  • TestingCatalog: Google plans to release new Gemini models on March 12
  • Maginative: Details about Google introducing Gemini Embedding for enhanced AI.
  • Windows Copilot News: Google is building smart home controls into Gemini
  • Google Workspace Updates: Blog post about Gemini in the side panel of Workspace apps.

Sherri Wang@digitimes.com //
Samsung has officially unveiled its Galaxy S25 series, marking a significant shift in smartphone design. The new lineup emphasizes the integration of generative AI, aiming to provide a more intuitive and streamlined user experience. Key features include human-like AI agents and a unified AI platform, designed to simplify user tasks across various applications. The S25 Ultra, leading the series, features advanced camera technology, including a 200MP sensor, and AI-powered video capabilities. The S25 series also aims to seamlessly integrate across applications, allowing users to effortlessly move between tasks with AI-driven support, such as automatically integrating search results with calendars and reminders.

The Galaxy S25 series also focuses on transparency and user safety through the implementation of Content Credentials, which makes AI-generated images and content easy to identify. The standard will be extended to video, audio, and documents to help users better understand the digital content they encounter. The move comes at a time when concerns about misinformation are growing, and it sets a new standard for digital content authenticity. The S25 series is currently available for preorder, with shipments scheduled to begin on February 7.

Recommended read:
References :
  • www.digitimes.com: GenAI smartphones 2.0: How Samsung integrates AI agents into mobile experience
  • www.digitimes.com: Samsung Galaxy S25 series debuts with new AI features and exclusive Snapdragon 8 Elite chipset
  • TechCrunch: Samsung Unpacked: Samsung’s Galaxy S25 will support Content Credentials to identify AI-generated images
  • TweakTown News: Samsung teases new Galaxy S25 Edge, new ultra-slim smartphone to compete with iPhone 17 Air
  • Dataconomy: What Samsung’s Galaxy S25 AI platform means for you
  • TechCrunch: Samsung Unpacked 2025: Samsung’s AI-focused Galaxy S25 Ultra ships February 7 for $1,300
  • www.engadget.com: Samsung unveils the $1.3K Galaxy S25 Ultra with a 6.9" QHD+ AMOLED screen, up from 6.8", 50MP sensor for the ultra-wide camera, a small S Pen downgrade, more
  • TechCrunch: Samsung Unpacked: Samsung’s Galaxy S25 arrives with a better Google Gemini
  • SiliconANGLE: Samsung debuts new Galaxy S25 smartphone series with Google’s Gemini AI assistant
  • www.androidpolice.com: Google says Galaxy S25 supports the Gemini Nano AI model and Samsung's TalkBack accessibility app is the first non-Google app to use the model
  • Techmeme: Samsung unveils the $1.3K Galaxy S25 Ultra with a 6.9" QHD+ AMOLED screen, up from 6.8", 50MP sensor for the ultra-wide camera, a small S Pen downgrade, more
  • IT-Online: Galaxy S25 puts AI centre stage
  • TweakTown News: Samsung unveils new Galaxy S25 series powered by Qualcomm Snapdragon 8 Elite, 12GB RAM, and AI
  • Techstrong.ai: Samsung’s New Smartphones Signal a Shift to AI Assistant Software Wars
  • techstrong.ai: Samsung Aims for the Top Spot with Its Latest Lineup of AI Smartphones
  • Techstrong.ai: Samsung Electronics this week announced the Galaxy S25 – a faster, fancier line of AI smartphones – powered by the expanded Galaxy AI software, the latest Snapdragon 8 Elite and Google's Gemini.
  • www.cityam.com: Smartphone dominance: Samsung races Apple with new AI launch
  • City AM: Smartphone dominance: Samsung races Apple with new AI launch

@www.analyticsvidhya.com //
Google has announced the integration of its Gemini AI model into Google Workspace, making it available for all Business and Enterprise plan users. This move eliminates the need for add-ons and brings AI-powered tools to businesses of all sizes. The new features are now embedded across Workspace applications like Gmail, Docs, Sheets, and Meet. This allows users to leverage AI for tasks such as drafting documents, summarizing emails, and automatically generating meeting notes. Users can also interact with Gemini Advanced, which can assist with research, coding and data analysis.

The update was rolled out for Workspace Business customers on Thursday, with Enterprise customers gaining access later this month. In an effort to make AI accessible to every business, Google has simplified pricing, effectively bundling AI into Workspace plans. For example, a customer on the Business Standard plan with the Gemini Business add-on, previously paying $32 per user, will now be charged only $14 per user. The pricing changes take effect immediately for new customers, with existing customers transitioning on March 17th. Google has emphasized its commitment to data security and privacy, stating that user data, prompts, and generated responses will not be used to train Gemini models outside of the customer's domain without permission.

Recommended read:
References :