@learn.aisingapore.org
//
Google DeepMind has unveiled Gemma 3n, a groundbreaking compact and highly efficient multimodal AI model designed for real-time, on-device use. This innovation addresses the rising demand for faster, smarter, and more private AI experiences directly on mobile devices such as phones, tablets, and laptops. By embedding intelligence locally, developers can unlock near-instant responsiveness, reduce memory demands, and enhance user privacy. Gemma 3n is optimized for Android and Chrome platforms, targeting performance across these mobile environments and serving as the foundation for the next version of Gemini Nano.
Gemma 3n leverages a novel Google DeepMind innovation called Per-Layer Embeddings (PLE) that significantly reduces RAM usage. The technique lets the 5B- and 8B-parameter variants run with dynamic memory footprints of just 2GB and 3GB respectively, making it possible to run larger models on mobile devices, or stream them from the cloud, with memory overhead comparable to much smaller models. The model also incorporates advanced activation quantization and key-value cache (KVC) sharing to improve on-device performance and efficiency; Google reports responses roughly 1.5x faster on mobile, with significantly better quality, compared to previous models.
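To make the PLE idea concrete, here is a toy Python sketch of the general pattern it implies: keep only the core weights resident in RAM and pull each layer's embedding parameters in on demand. This illustrates the concept only, not Gemma 3n's actual implementation; the file names, shapes, and stand-in layer math are all invented.

```python
# Toy illustration of the *idea* behind per-layer embeddings (PLE):
# stream each layer's extra parameters in just-in-time so peak RAM
# tracks the core model, not the full parameter count.
import numpy as np

N_LAYERS, HIDDEN = 4, 8

def save_fake_ple_tables():
    """Write one small embedding table per layer to disk."""
    for i in range(N_LAYERS):
        np.save(f"ple_layer_{i}.npy",
                np.random.randn(HIDDEN, HIDDEN).astype(np.float32))

def forward(x):
    """Fake forward pass that memory-maps each layer's table only
    while that layer executes, instead of holding all of them in RAM."""
    for i in range(N_LAYERS):
        # mmap_mode="r" leaves the table on disk; pages stream in as touched.
        table = np.load(f"ple_layer_{i}.npy", mmap_mode="r")
        x = np.tanh(x @ np.asarray(table))  # stand-in for real layer math
    return x

save_fake_ple_tables()
print(forward(np.ones(HIDDEN, dtype=np.float32)))
```

The point of the pattern is that peak resident memory is the core model plus one layer's table at a time, rather than the sum of all tables.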
In addition to Gemma 3n, Google is also experimenting with Gemini Diffusion, an innovative system that generates text using diffusion techniques rather than traditional word-by-word prediction. This approach, inspired by image generation, refines noise in multiple passes to create full sections of text. DeepMind says this leads to more consistent and logically connected output, making it particularly effective for tasks requiring precision, coherence, and iteration, such as code generation and text editing. Gemini Diffusion achieves speeds of up to 2,000 tokens per second on programming tasks, demonstrating its potential as a fast and competitive alternative to autoregressive models.
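For intuition, the sketch below mimics the diffusion-style loop in miniature: the whole sequence starts as noise (masked tokens) and is refined over a few parallel passes rather than decoded left to right. The "denoiser" here simply copies from a target string, so this shows only the control flow, nothing of Gemini Diffusion's actual architecture.

```python
# Diffusion-style text generation in caricature: refine all positions
# jointly over several passes instead of predicting one token at a time.
import random

TARGET = list("print('hello, world')")
MASK = "_"

def denoise_step(seq, k):
    """Reveal up to k more positions per pass -- a stand-in for a learned
    denoiser that re-predicts every position jointly each step."""
    hidden = [i for i, c in enumerate(seq) if c == MASK]
    for i in random.sample(hidden, min(k, len(hidden))):
        seq[i] = TARGET[i]
    return seq

seq = [MASK] * len(TARGET)
step = 0
while MASK in seq:
    seq = denoise_step(seq, k=5)
    step += 1
    print(f"pass {step}: {''.join(seq)}")
```

Because each pass touches many positions at once, far fewer passes than tokens are needed, which is the intuition behind the throughput figures quoted above.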
Recommended read:
References :
- Google DeepMind Blog: Announcing Gemma 3n preview: Powerful, efficient, mobile-first AI
- THE DECODER: Gemini Diffusion could be Google's most important I/O news that slipped under the radar
- www.marktechpost.com: Google DeepMind Releases Gemma 3n: A Compact, High-Efficiency Multimodal AI Model for Real-Time On-Device Use
- LearnAI: Following the exciting launches of Gemma 3 and Gemma 3 QAT, our family of state-of-the-art open models capable of running on a single cloud or desktop accelerator, we're pushing our vision for accessible AI even further.
@developer.nvidia.com
//
NVIDIA is enhancing AI capabilities on RTX AI PCs with the introduction of a new plug-in builder for G-Assist. This innovative tool allows users to customize and expand G-Assist's functionality by integrating it with various Large Language Models (LLMs) and software applications. The plug-in builder is designed to enable users to generate AI-assisted functionalities through text and voice commands, effectively transforming G-Assist from a gaming-centric AI into a versatile tool adaptable to diverse applications, both gaming-related and otherwise.
The G-Assist plug-in builder facilitates the creation of custom commands and connections to external tools through APIs, allowing different software and services to communicate with each other. Developers define plug-ins in JSON and implement their logic in Python, then integrate the resulting tools into G-Assist. NVIDIA has provided a GitHub repository with instructions and documentation for building and customizing these plug-ins, and users can even submit their creations for potential inclusion in the repository to share new capabilities with others. Examples of plug-in capabilities include seeking advice from AI assistants like Gemini on gaming strategies and using Twitch plug-ins to monitor streamer status via voice commands.
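As a rough illustration of that JSON-plus-Python split, the hypothetical sketch below pairs an invented manifest with a stubbed handler. The manifest fields and the check_twitch_live helper are made up for illustration; consult NVIDIA's GitHub repository for the real plug-in schema.

```python
# Hypothetical G-Assist-style plug-in: a JSON manifest declaring a
# command, plus a Python handler a plug-in host could dispatch to.
import json

# Invented manifest -- the real schema lives in NVIDIA's repository.
MANIFEST = json.loads("""
{
  "name": "twitch-status",
  "description": "Check whether a streamer is live",
  "functions": [
    {"name": "check_twitch_live", "params": {"streamer": "string"}}
  ]
}
""")

def check_twitch_live(streamer: str) -> str:
    # A real handler would call the Twitch Helix API with an OAuth
    # token; stubbed here so the sketch runs without credentials.
    live = streamer.lower() == "examplestreamer"
    return f"{streamer} is {'LIVE' if live else 'offline'}"

# Dispatch a parsed command the way a plug-in host might.
fn_name = MANIFEST["functions"][0]["name"]
print(globals()[fn_name](streamer="ExampleStreamer"))
```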
Furthermore, NVIDIA is advancing AI research and application across industries, demonstrated by their participation in the International Conference on Learning Representations (ICLR). NVIDIA Research presented over 70 papers at ICLR, showcasing developments in areas such as autonomous vehicles, healthcare, and multimodal content creation. Notably, researchers from University College London (UCL) are leveraging NVIDIA NIM microservices to benchmark agentic capabilities of AI models in gaming environments, highlighting the role of NIM in simplifying and accelerating the evaluation of AI reasoning in complex tasks. NIM microservices enable efficient deployment and scaling of AI models, supporting various platforms and workflows, making them a versatile solution for diverse research applications.
Recommended read:
References :
- blogs.nvidia.com: NVIDIA Research at ICLR — Pioneering the Next Wave of Multimodal Generative AI
- www.tomshardware.com: Nvidia introduces G-Assist plug-in builder, allowing its AI to integrate with LLMs and software
Chris McKay@Maginative
//
OpenAI has released its latest AI models, o3 and o4-mini, designed to enhance reasoning and tool use within ChatGPT. These models aim to provide users with smarter and faster AI experiences by leveraging web search, Python programming, visual analysis, and image generation. The models are designed to solve complex problems and perform tasks more efficiently, positioning OpenAI competitively in the rapidly evolving AI landscape. Greg Brockman from OpenAI noted the models "feel incredibly smart" and have the potential to positively impact daily life and solve challenging problems.
The o3 model stands out due to its ability to use tools independently, which enables more practical applications. The model determines when and how to utilize tools such as web search, file analysis, and image generation, thus reducing the need for users to specify tool usage with each query. The o3 model sets new standards for reasoning, particularly in coding, mathematics, and visual perception, and has achieved state-of-the-art performance on several competition benchmarks. The model excels in programming, business, consulting, and creative ideation.
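The control flow described here, where the model decides per query whether to answer directly or route through a tool first, can be caricatured in a few lines of Python. The keyword "policy" below is a stand-in: o3's actual tool selection is learned during training, and both tool functions are stubs.

```python
# Toy sketch of agentic tool routing: decide per query whether to
# search, compute, or answer from the model alone.
def web_search(query):
    return f"[stubbed search results for: {query}]"

def run_python(expr):
    return str(eval(expr))  # toy only -- never eval untrusted input

TOOLS = {"web_search": web_search, "python": run_python}

def agent(query: str) -> str:
    q = query.lower()
    # Stand-in "policy": a reasoning model emits a tool call when it
    # judges the query needs fresh data or exact computation.
    if any(w in q for w in ("latest", "today", "news")):
        return f"Answer grounded in {TOOLS['web_search'](query)}"
    if q.replace(" ", "").replace("*", "").replace("+", "").isdigit():
        return TOOLS["python"](query)  # arithmetic: compute, don't guess
    return "Answer from model weights alone."

print(agent("What is the latest o3 benchmark news?"))
print(agent("12 * 34"))
print(agent("Explain chain-of-thought reasoning."))
```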
Usage limits for these models vary for Plus users: o3 allows 50 queries per week, o4-mini 150 queries per day, and o4-mini-high 50 queries per day, alongside 10 Deep Research queries per month. The o3 model is available to ChatGPT Pro and Team subscribers, while the o4-mini models are also available across ChatGPT Plus. OpenAI says o3 is also effective at generating and critically evaluating novel hypotheses, especially in biology, mathematics, and engineering contexts.
Recommended read:
References :
- Simon Willison's Weblog: OpenAI are really emphasizing tool use with these: For the first time, our reasoning models can agentically use and combine every tool within ChatGPT—this includes searching the web, analyzing uploaded files and other data with Python, reasoning deeply about visual inputs, and even generating images. Critically, these models are trained to reason about when and how to use tools to produce detailed and thoughtful answers in the right output formats, typically in under a minute, to solve more complex problems.
- the-decoder.com: OpenAI’s new o3 and o4-mini models reason with images and tools
- venturebeat.com: OpenAI launches o3 and o4-mini, AI models that ‘think with images’ and use tools autonomously
- www.analyticsvidhya.com: o3 and o4-mini: OpenAI’s Most Advanced Reasoning Models
- www.tomsguide.com: OpenAI's o3 and o4-mini models
- Maginative: OpenAI’s latest models—o3 and o4-mini—introduce agentic reasoning, full tool integration, and multimodal thinking, setting a new bar for AI performance in both speed and sophistication.
- www.zdnet.com: These new models are the first to independently use all ChatGPT tools.
- The Tech Basic: OpenAI recently released its new AI models, o3 and o4-mini, to the public. Smart tools employ pictures to address problems through pictures, including sketch interpretation and photo restoration.
- thetechbasic.com: OpenAI’s new AI Can “See” and Solve Problems with Pictures
- www.marktechpost.com: OpenAI Introduces o3 and o4-mini: Progressing Towards Agentic AI with Enhanced Multimodal Reasoning
- analyticsindiamag.com: Access to o3 and o4-mini is rolling out today for ChatGPT Plus, Pro, and Team users.
- THE DECODER: OpenAI is expanding its o-series with two new language models featuring improved tool usage and strong performance on complex tasks.
- gHacks Technology News: OpenAI released its latest models, o3 and o4-mini, to enhance the performance and speed of ChatGPT in reasoning tasks.
- www.ghacks.net: OpenAI Launches o3 and o4-Mini models to improve ChatGPT's reasoning abilities
- Data Phoenix: OpenAI releases new reasoning models o3 and o4-mini amid intense competition. OpenAI has launched o3 and o4-mini, which combine sophisticated reasoning capabilities with comprehensive tool integration.
- Shelly Palmer: OpenAI Quietly Reshapes the Landscape with o3 and o4-mini. OpenAI just rolled out a major update to ChatGPT, quietly releasing three new models (o3, o4-mini, and o4-mini-high) that offer the most advanced reasoning capabilities the company has ever shipped.
- THE DECODER: Safety assessments show that OpenAI's o3 is probably the company's riskiest AI model to date
- BleepingComputer: OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits
- TestingCatalog: OpenAI’s o3 and o4‑mini bring smarter tools and faster reasoning to ChatGPT
- simonwillison.net: Introducing OpenAI o3 and o4-mini
- bdtechtalks.com: What to know about o3 and o4-mini, OpenAI’s new reasoning models
- thezvi.wordpress.com: OpenAI has finally introduced us to the full o3 along with o4-mini. Greg Brockman (OpenAI): Just released o3 and o4-mini! These models feel incredibly smart. We’ve heard from top scientists that they produce useful novel ideas. Excited to see their …
- thezvi.wordpress.com: OpenAI has upgraded its entire suite of models. By all reports, they are back in the game for more than images. GPT-4.1 and especially GPT-4.1-mini are their new API non-reasoning models.
- felloai.com: OpenAI has just launched a brand-new series of GPT models—GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano—that promise major advances in coding, instruction following, and the ability to handle incredibly long contexts.
- Interconnects: OpenAI's o3: Over-optimization is back and weirder than ever
- www.ishir.com: OpenAI has released o3 and o4-mini, adding significant reasoning capabilities to its existing models. These advancements will likely transform the way users interact with AI-powered tools, making them more effective and versatile in tackling complex problems.
- www.bigdatawire.com: OpenAI released the models o3 and o4-mini that offer advanced reasoning capabilities, integrated with tool use, like web searches and code execution.
- Drew Breunig: OpenAI's o3 and o4-mini models offer enhanced reasoning capabilities in mathematical and coding tasks.
- www.techradar.com: ChatGPT model matchup - I pitted OpenAI's o3, o4-mini, GPT-4o, and GPT-4.5 AI models against each other and the results surprised me
- www.techrepublic.com: OpenAI’s o3 and o4-mini models are available now to ChatGPT Plus, Pro, and Team users. Enterprise and education users will get access next week.
- Last Week in AI: OpenAI’s new GPT-4.1 AI models focus on coding, OpenAI launches a pair of AI reasoning models, o3 and o4-mini, Google’s newest Gemini AI model focuses on efficiency, and more!
- techcrunch.com: OpenAI’s new reasoning AI models hallucinate more.
- computational-intelligence.blogspot.com: OpenAI's new reasoning models, o3 and o4-mini, are a step up in certain capabilities compared to prior models, but their accuracy is being questioned due to increased instances of hallucinations.
- www.unite.ai: unite.ai article discussing OpenAI's o3 and o4-mini new possibilities through multimodal reasoning and integrated toolsets.
- Unite.AI: On April 16, 2025, OpenAI released upgraded versions of its advanced reasoning models.
- Digital Information World: OpenAI’s Latest o3 and o4-mini AI Models Disappoint Due to More Hallucinations than Older Models
- techcrunch.com: TechCrunch reports on OpenAI's GPT-4.1 models focusing on coding.
- Analytics Vidhya: o3 vs o4-mini vs Gemini 2.5 pro: The Ultimate Reasoning Battle
- THE DECODER: OpenAI's o3 achieves near-perfect performance on long context benchmark.
- the-decoder.com: OpenAI's o3 achieves near-perfect performance on long context benchmark
- www.analyticsvidhya.com: AI models keep getting smarter, but which one truly reasons under pressure? In this blog, we put o3, o4-mini, and Gemini 2.5 Pro through a series of intense challenges: physics puzzles, math problems, coding tasks, and real-world IQ tests.
- Simon Willison's Weblog: This post explores the use of OpenAI's o3 and o4-mini models for conversational AI, highlighting their ability to use tools in their reasoning process. It also discusses the concept of
- Simon Willison's Weblog: The benchmark score on OpenAI's internal PersonQA benchmark (as far as I can tell no further details of that evaluation have been shared) going from 0.16 for o1 to 0.33 for o3 is interesting, but I don't know if it's interesting enough to produce dozens of headlines along the lines of "OpenAI's o3 and o4-mini hallucinate way higher than previous models"
- techstrong.ai: Techstrong.ai reports OpenAI o3, o4 Reasoning Models Have Some Kinks.
- www.marktechpost.com: OpenAI Releases a Practical Guide to Identifying and Scaling AI Use Cases in Enterprise Workflows
- Towards AI: OpenAI's o3 and o4-mini models have demonstrated promising improvements in reasoning tasks, particularly their use of tools in complex thought processes and enhanced reasoning capabilities.
- Analytics Vidhya: In this article, we explore how OpenAI's o3 reasoning model stands out in tasks demanding analytical thinking and multi-step problem solving, showcasing its capability in accessing and processing information through tools.
- pub.towardsai.net: TAI#149: OpenAI’s Agentic o3; New Open Weights Inference Optimized Models (DeepMind Gemma, Nvidia…
- composio.dev: OpenAI o3 vs. Gemini 2.5 Pro vs. o4-mini
- Composio: OpenAI o3 and o4-mini are out. They are two state-of-the-art reasoning models. They’re expensive, multimodal, and super efficient at tool use.
@Google DeepMind Blog
//
Google is integrating its Veo 2 video-generating AI model into Gemini Advanced, allowing subscribers to create short, cinematic videos from text prompts. The new feature, launched on April 15, 2025, enables Gemini Advanced users to generate 8-second, 720p videos in a 16:9 aspect ratio, suitable for sharing on platforms like TikTok and YouTube. These videos can be downloaded as MP4 files and include Google's SynthID watermark, ensuring transparency regarding AI-generated content. Currently, this offering is exclusively for Google One AI Premium subscribers and does not extend to Google Workspace business and educational plans.
Veo 2 is also being integrated into Whisk, an experimental tool within Google Labs. This integration includes a new feature called "Whisk Animate" that transforms uploaded images into animated video clips, also utilizing the Veo 2 model. Similar to Gemini, the video output in Whisk is limited to eight seconds and is accessible only to Premium subscribers. The integration of Veo 2 into Gemini Advanced and Whisk represents Google's efforts to compete with other AI video generation platforms.
Google's Veo 2 is designed to turn detailed text prompts into cinematic-quality videos with lifelike motion, natural physics, and visually rich scenes. The system is able to interpret detailed text prompts and turn them into fully animated clips with lifelike elements and a strong visual narrative. To ensure responsible use and transparency, Google employs its proprietary SynthID technology, which embeds an invisible watermark into each video frame. The company also implements red-teaming and additional review processes to prevent the creation of content that violates its content policies. The new video generation features are being rolled out globally and support all languages currently available in Gemini.
Recommended read:
References :
- Google DeepMind Blog: Generate videos in Gemini and Whisk with Veo 2
- PCMag Middle East ai: With Veo 2, videos are now free to produce for those on Advanced plans. The Whisk Animate tool also allows you to make images into 8-second videos using the same technology.
- TestingCatalog: Gemini Advanced subscribers can now generate videos with Veo 2
- THE DECODER: Google adds AI video generation to Gemini app and Whisk experiment
- Analytics Vidhya: 3 Ways to Access Google Veo 2
- www.tomsguide.com: I just tried Google's newest AI video generation features — and I'm blown away
- www.analyticsvidhya.com: 3 Ways to Access Google Veo 2
- LearnAI: Starting today, Gemini Advanced users can generate and share videos using our state-of-the-art video model, Veo 2. In Gemini, you can now translate text-based prompts into dynamic videos. Google Labs is also making Veo 2 available through Whisk, a generative AI experiment that allows you to create new images using both text and image prompts,...
- www.tomsguide.com: Google rolls out Google Photos extension for Gemini — here’s what it can do
- eWEEK: Gemini Advanced users can now create and share high-resolution videos with its newly released Veo 2.
- Data Phoenix: Google introduces Veo 2 for video generation in Gemini and Whisk
@Google DeepMind Blog
//
Google is expanding its AI video generation capabilities by integrating Veo 2, its most advanced generative video model, into the Gemini app and the experimental Whisk platform. This new functionality allows users to create short, high-resolution videos directly from text prompts, opening up new avenues for creative expression and content creation. Veo 2 is designed to produce realistic motion, natural physics, and visually rich scenes, making it a powerful tool for generating cinematic-quality content.
Currently, access to Veo 2 is primarily available to Google One AI Premium subscribers, who can generate eight-second, 720p videos in MP4 format within Gemini Advanced. The Whisk platform also incorporates Veo 2 through its "Whisk Animate" feature, enabling users to transform uploaded images into animated video clips. Google emphasizes that more detailed and descriptive text prompts generally yield better results, allowing users to fine-tune their creations and explore a wide range of styles, from realistic nature scenes to stylized and surreal sequences.
To ensure responsible AI development, Google is implementing several safeguards. All AI-generated videos created with Veo 2 will feature an invisible watermark embedded using SynthID technology, helping to identify them as AI-generated. Additionally, Google is employing red-teaming and review processes to prevent the creation of content that violates its policies. These new video generation features are being rolled out globally and support all languages currently available in Gemini, although standard Gemini users do not have access at this time.
Recommended read:
References :
- The Official Google Blog: Video showcasing how you can generate videos in Gemini
- chromeunboxed.com: Google has announced a significant upgrade to its AI video generation capabilities, integrating the powerful Veo 2 model into both Gemini Advanced and Whisk.
- Google DeepMind Blog: Transform text-based prompts into high-resolution eight-second videos in Gemini Advanced and use Whisk Animate to turn images into eight-second animated clips.
- PCMag Middle East ai: A new model called DolphinGemma can analyze sounds and put together sequences, accelerating decades-long research projects. Google is collaborating with researchers to learn how to decode dolphin vocalizations "in the quest for interspecies communication."
- www.tomsguide.com: I just tried Google's newest AI video generation features — and I'm blown away
- blog.google: Google's DolphinGemma AI model aims to decode dolphin communication, potentially leading to interspecies communication.
- PCMag Middle East ai: Google's Gemini Advanced now offers free 8-second video clip generation with Veo 2, and image-to-video animation with Whisk Animate.
- www.analyticsvidhya.com: Google's new Veo 2 model lets you create cinematic-quality videos from detailed text prompts.
- www.artificialintelligence-news.com: Google's AI model, DolphinGemma, is designed to interpret and generate dolphin sounds, potentially paving the way for interspecies communication.
- THE DECODER: Google adds AI video generation to Gemini app and Whisk experiment
- TestingCatalog: Perplexity adds Gemini 2.5 Pro and voice mode to web platform
- LearnAI: Try generating video in Gemini, powered by Veo 2
- TestingCatalog: Gemini Advanced subscribers can now generate videos with Veo 2
- Analytics Vidhya: Designed to turn detailed text prompts into cinematic-quality videos, Google Veo 2 creates lifelike motion, natural physics, and visually rich scenes across a range of styles. Currently, Google Veo 2 is available only to users in the United States, aged 18 and […]
- Analytics India Magazine: Google Rolls Out Video AI Model for Gemini Users, Developers
- shellypalmer.com: Google’s Veo is Almost Here
- www.tomsguide.com: Google rolls out Google Photos extension for Gemini — here’s what it can do
- venturebeat.com: VentureBeat reports on Google’s Gemini 2.5 Flash introduces adjustable ‘thinking budgets’ that cut AI costs by 600% when turned down
- eWEEK: Google’s AI Video Generator Veo 2 Delivers Cinematic Results
- TestingCatalog: Google launches Gemini 2.5 Flash model with hybrid reasoning
- the-decoder.com: Google is rolling out new AI-powered video generation features in its Gemini app and the experimental tool Whisk.
- Glenn Gabe: Smart move by Google. They are offering Google One AI Premium for FREE to college students through the spring of 2026 Gives you access to 2 TB of storage and incredible AI models, like Gemini 2.5 Pro and Veo 2, via these products: *Gemini Advanced, including Deep Research, Gemini Live, Canvas, and video generation with Veo 2 *NotebookLM Plus, including five times more Audio Overviews, notebooks and more *Gemini in Google Docs, Sheets and Slides
- bsky.app: Gemini 2.5 Pro and Flash now have the ability to return image segmentation masks on command, as base64 encoded PNGs embedded in JSON strings I vibe coded an interactive tool for exploring this new capability - it costs a fraction of a cent per image https://simonwillison.net/2025/Apr/18/gemini-image-segmentation/
- Google DeepMind Blog: Introducing Gemini 2.5 Flash
- www.marketingaiinstitute.com: Google Cloud just wrapped its Next ‘25 event in Las Vegas, , spanning everything from advanced AI models to new ways of connecting your favorite tools with Google’s agentic ecosystem.
- aigptjournal.com: Google Veo 2: The Future of Effortless AI Video Creation for Everyone
- Last Week in AI: LWiAI Podcast #207 - GPT 4.1, Gemini 2.5 Flash, Ironwood, Claude Max
- learn.aisingapore.org: Introducing Gemini 2.5 Flash
- Data Phoenix: Google introduces Veo 2 for video generation in Gemini and Whisk
@www.thecanadianpressnews.ca
//
Meta Platforms, the parent company of Facebook and Instagram, has announced it will resume using publicly available content from European users to train its artificial intelligence models. This decision comes after a pause last year following privacy concerns raised by activists. Meta plans to use public posts, comments, and interactions with Meta AI from adult users in the European Union to enhance its generative AI models. The company says this data is crucial for developing AI that understands the nuances of European languages, dialects, colloquialisms, humor, and local knowledge.
Meta emphasizes that it will not use private messages or data from users under 18 for AI training. To address privacy concerns, Meta will notify EU users through in-app and email notifications, providing them with a way to opt out of having their data used. These notifications will include a link to a form allowing users to object to the use of their data, and Meta has committed to honoring all previously and newly submitted objection forms. The company states its AI is designed to cater to diverse perspectives and to acknowledge the distinctive attributes of various European communities.
Meta claims its approach aligns with industry practices, noting that companies like Google and OpenAI have already utilized European user data for AI training, and defends its actions as necessary to develop AI services that are relevant and beneficial to European users. Meta also highlights that a panel of EU privacy regulators “affirmed” that its original approach met legal obligations. Groups like NOYB had previously complained and urged regulators to intervene, advocating for an opt-in system where users actively consent to the use of their data for AI training.
Recommended read:
References :
- cyberinsider.com: Meta has announced it will soon begin using public data from adult users in the European Union — including posts, comments, and AI interactions — to train its generative AI models, raising concerns about the boundaries of consent and user awareness across its major platforms.
- discuss.privacyguides.net: Meta to start training its AI models on public content in the EU after Est. reading time: 3 minutes If you are an EU resident with an Instagram or Facebook account, you should know that Meta will start training its AI models on your posted content.
- Malwarebytes: Meta users in Europe will have their public posts swept up and ingested for AI training, the company announced this week.
- : Meta says it will start using publicly available content from European users to train its artificial intelligence models, resuming work put on hold last year after activists raised concerns about data privacy.
- bsky.app: Meta announced today that it will soon start training its artificial intelligence models using content shared by European adult users on its Facebook and Instagram social media platforms. https://www.bleepingcomputer.com/news/technology/meta-to-resume-ai-training-on-content-shared-by-europeans/
- BleepingComputer: Meta to resume AI training on content shared by Europeans
- oodaloop.com: Meta says it will resume AI training with public content from European users
- BleepingComputer: Meta announced today that it will soon start training its artificial intelligence models using content shared by European adult users on its Facebook and Instagram social media platforms.
- techxplore.com: Social media company Meta said Monday that it will start using publicly available content from European users to train its artificial intelligence models, resuming work put on hold last year after activists raised concerns about data privacy.
- finance.yahoo.com: Meta says it will resume AI training with public content from European users
- www.theverge.com: Reports on Meta resuming AI training with public content from European users.
- The Hacker News: Meta Resumes E.U. AI Training Using Public User Data After Regulator Approval
- www.socialmediatoday.com: Meta Begins Training its AI Tools on EU User Data
- Meta Newsroom: Today, we’re announcing our plans to train AI at Meta using public content —like public posts and comments— shared by adults on our products in the EU.
- Synced: Meta’s Novel Architectures Spark Debate on the Future of Large Language Models
- securityaffairs.com: Meta will use public EU user data to train its AI models
- about.fb.com: Today, we’re announcing our plans to train AI at Meta using public content —like public posts and comments— shared by adults on our products in the EU. People’s interactions with Meta AI – like questions and queries – will also be used to train and improve our models.
- www.bitdegree.org: Meta Cleared to Train AI with Public Posts in the EU
- MEDIANAMA: Meta to begin using EU users’ data to train AI models
- www.medianama.com: Meta to begin using EU users’ data to train AI models
- The Register - Software: Meta to feed Europe's public posts into AI brains again
- www.artificialintelligence-news.com: Meta will train AI models using EU user data
- AI News: Meta will train AI models using EU user data
- techxmedia.com: Meta announced it will use public posts and comments from adult EU users to train its AI models, ensuring compliance with EU regulations.
- Digital Information World: Despite all the controversy that arose, tech giant Meta is now preparing to train its AI systems on data belonging to Facebook and Instagram users in the EU.
- TechCrunch: Meta will start training its AI models on public content in the EU
Maximilian Schreiner@THE DECODER
//
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.
Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.
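A worked miniature of what chain-of-thought reasoning buys: the same question posed directly versus decomposed into intermediate steps. The prompt text is purely illustrative; Gemini 2.5 Pro performs this decomposition internally rather than needing it spelled out.

```python
# Chain-of-thought in miniature: decompose, solve each step, combine.
direct = "Q: A train leaves at 3:40 and arrives at 5:15. How long is the trip?"
cot = direct + "\nThink step by step before answering."

# A reasoning model effectively expands the second prompt into
# intermediate steps like these before committing to an answer:
steps = [
    "3:40 -> 4:40 is 60 minutes",
    "4:40 -> 5:15 is 35 minutes",
    "60 + 35 = 95 minutes",
]
print("\n".join(steps))
print("Answer: 1 h 35 min")
```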
Recommended read:
References :
- SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
- The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
- AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
- Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
- www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
- Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
- THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
- intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it is the best on several reasoning and coding benchmarks
- The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
- Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
- The Official Google Blog: Gemini 2.5: Our most intelligent AI model
- www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
- bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
- Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
- bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.
- Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
- www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
- www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
- Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
- TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
- Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
- AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
- Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
- Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
- Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
- Shelly Palmer: Google's Gemini 2.5: AI That Thinks Before It Speaks
- www.producthunt.com: Google's most intelligent AI model
- Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
- AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
- thezvi.wordpress.com: Gemini 2.5 Pro Experimental is America’s next top large language model. That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of …
- www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
- www.infoworld.com: Google introduces Gemini 2.5 reasoning models
- Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
- www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
- AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
- Communications of the ACM: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
- The Next Web: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
- www.tomsguide.com: Gemini 2.5 Pro is now free to all users in surprise move
- Composio: Google just launched Gemini 2.5 Pro on March 26th, claiming to be the best in coding, reasoning and overall everything.
- Composio: Google's Gemini 2.5 Pro, released on March 26th, is being hailed for its enhanced reasoning, coding, and multimodal capabilities.
- Analytics India Magazine: Gemini 2.5 Pro is better than the Claude 3.7 Sonnet for coding in the Aider Polyglot leaderboard.
- www.zdnet.com: Gemini's latest model outperforms OpenAI's o3 mini and Anthropic's Claude 3.7 Sonnet on the latest benchmarks. Here's how to try it.
- www.marketingaiinstitute.com: [The AI Show Episode 142]: ChatGPT’s New Image Generator, Studio Ghibli Craze and Backlash, Gemini 2.5, OpenAI Academy, 4o Updates, Vibe Marketing & xAI Acquires X
- www.tomsguide.com: Gemini 2.5 is free, but can it beat DeepSeek?
- www.tomsguide.com: Google Gemini could soon help your kids with their homework — here’s what we know
- PCWorld: Google’s latest Gemini 2.5 Pro AI model is now free for all users
- www.techradar.com: Google just made Gemini 2.5 Pro Experimental free for everyone, and that's awesome.
- Last Week in AI: #205 - Gemini 2.5, ChatGPT Image Gen, Thoughts of LLMs
mpesce@Windows Copilot News
//
Google is advancing its AI capabilities on multiple fronts, emphasizing both security and performance. The company is integrating Google Cloud Champion Innovators into the Google Developer Experts (GDE) program, creating a unified community of over 1,400 members. This consolidation aims to enhance collaboration, streamline resources, and amplify the impact of passionate experts, providing a stronger voice for developers within Google and the broader industry.
Google is also pushing forward with its Gemini AI model, planning to deploy Gemini 2.0 across its products. Researchers from Google and UC Berkeley have found that a simple test-time scaling approach, based on sampling-based search, can significantly boost the reasoning abilities of large language models (LLMs). This method draws multiple random samples and uses self-verification to pick the best one, potentially outperforming more complex and specialized training methods; a minimal sketch follows.
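Here is a minimal sketch of that sampling-plus-verification loop, with a noisy arithmetic oracle standing in for the LLM in both the generator and verifier roles. All names are illustrative, not taken from the paper's code; in the real method the same model both proposes candidate answers and critiques them.

```python
# Sampling-based search: draw several candidates, keep one that
# passes self-verification, fall back to majority vote otherwise.
import random

def noisy_model(x, y):
    """Stand-in generator: usually right, sometimes off by one."""
    return x * y + random.choice([0, 0, 0, -1, 1])

def self_verify(x, y, answer):
    """Stand-in verifier: re-derive and compare. A real LLM verifier
    critiques the candidate's reasoning rather than recomputing exactly."""
    return answer == x * y

def sample_and_verify(x, y, n_samples=8):
    candidates = [noisy_model(x, y) for _ in range(n_samples)]
    verified = [c for c in candidates if self_verify(x, y, c)]
    # Fall back to a majority vote if verification rejects everything.
    return verified[0] if verified else max(set(candidates), key=candidates.count)

print(sample_and_verify(12, 34))  # -> 408 with high probability
```

The appeal of the approach is that it spends extra compute at inference time instead of requiring specialized training, which matches the paper's framing of test-time scaling.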
Recommended read:
References :
- AI News | VentureBeat: Less is more: UC Berkeley and Google unlock LLM potential through simple sampling
- Windows Copilot News: Google launched Gemini 2.0, its new AI model for practically everything
- Security & Identity: This article discusses Mastering secure AI on Google Cloud, a practical guide for enterprises
@Google DeepMind Blog
//
Google is preparing to unveil significant AI advancements, with speculation pointing towards enhancements to its Gemini model. Rumors suggest a potential update to Gemini 2.0 Pro, possibly named "Nebula," which has been observed performing well on specific prompts. This new model is expected to incorporate advanced reasoning capabilities, adding a new layer of sophistication to Google's AI offerings.
Google's strategy involves integrating AI into many facets of its services, as evidenced by the official rollout of its Data Science Agent to most Colab users for free. Gemini 2.0 is designed to be applied universally across Google's products. It will enhance AI Overviews in Google Search, which now serve one billion users, by making them more nuanced and complex. Additionally, live video and screen sharing are being rolled out to Gemini Live, expanding the model's features.
Recommended read:
References :
- Google DeepMind Blog: Google DeepMind introduced Gemini 1.5, a new model family boasting enhanced speed and efficiency for tasks such as real-time assistants and collaborations.
- www.tomsguide.com: Google unveiled Gemini 1.5, a new model family with enhanced capabilities, particularly in speed and context length.
- Maginative: Gemini App Gets a Major Upgrade: Canvas Mode, Audio Overviews, and More
- TestingCatalog: Discover Google Gemini's new Canvas and Audio Overview features, enhancing productivity in content creation and coding. Available globally for Gemini subscribers.
- Google Workspace Updates: Try Canvas, a new way to collaborate with the Gemini app
- www.techrepublic.com: Google boosts Gemini with Canvas and Audio Overview, offering real-time editing and podcast-style audio insights to power creative projects.
- AI & Machine Learning: Google's Gemini 1.5 models exhibited strong performance in chatbot capabilities, alongside generative AI innovations.
- AI Rabbit Blog: A news article describing how to use Google's Gemini AI to extract travel information from YouTube videos and generate routes and points of interest.
- Google DeepMind Blog: Today, we’re announcing Gemini 2.0, our most capable multimodal AI model yet.
- Windows Copilot News: This article discusses Google launching Gemini 2.0, its new AI model for practically everything.
- Windows Copilot News: Gemini AI can now summarize what’s in your Google Drive folders
- gHacks Technology News: Google has begun rolling out new features for its AI assistant, Gemini, enabling real-time interaction through live video and screen sharing.
- Google Workspace Updates: Get started with Gemini in Google Drive quickly with new “nudges”
- Analytics India Magazine: Google is Rolling Out Live Video and Screen Sharing to Gemini Live
- LearnAI: Google’s Data Science Agent: Can It Really Do Your Job?
- TestingCatalog: Evidence mounts for Google to reveal a new Gemini model with agentic use case this week
- NextBigFuture.com: Google Gemini 2.5 Pro is the Top AI Model
- AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
- Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
- www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
- www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
- The Official Google Blog: Google has released Gemini 2.5, our most intelligent AI model.
- MarkTechPost: Google AI Released Gemini 2.5 Pro Experimental: An Advanced AI Model that Excels in Reasoning, Coding, and Multimodal Capabilities
- Analytics India Magazine: Google's Gemini 2.5 Pro has demonstrated exceptional performance and capabilities in a wide range of tasks, positioning itself as a frontrunner in the AI landscape.
- Dataconomy: Google DeepMind unveiled Gemini 2.5 on March 25, 2025, calling it their most intelligent AI model yet.
- The Tech Basic: Google’s New AI Models “Think” Before Answering, Outperform Rivals
- The Verge: Google says its new ‘reasoning’ Gemini AI models are the best ones yet
- SiliconANGLE: Google introduces Gemini 2.5 Pro with chain-of-thought reasoning built-in.
- www.techradar.com: Google just announced Gemini 2.5 and it's the best AI reasoning model we've seen yet.
- Google DeepMind Blog: Gemini 2.5 is our most intelligent AI model, now with thinking built in.
- Shelly Palmer: Google unveiled Gemini 2.5 yesterday, marking their most significant advancement in AI reasoning models to date. The new family of AI models pauses to "think" before answering questions – a capability that puts Google in feature parity with OpenAI's "o" series, Deepseek's R series, Anthropic, xAI, and other reasoning models.
- THE DECODER: Gemini 2.5 Pro: Google has finally caught up
- TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
- intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it is the best on several reasoning and coding benchmarks
- www.techrepublic.com: Google’s Gemini 2.5 Pro is Better at Coding, Math & Science Than Your Favourite AI Model
- Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
- Maginative: The Gemini 2.5 Pro model, released recently by Google, has shown exceptional reasoning skills in various benchmarks.
- Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
- www.bitdegree.org: On March 25, Google Gemini 2.5 Pro, the newest version of its artificial intelligence (AI) model, a few months after Gemini 2.0.
- www.computerworld.com: Google is beating the drum for Gemini 2.5, a new AI model that reportedly offers better performance than similar reasoning models from competitors such as OpenAI, Anthropic and Deepseek.
- bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.