News from the AI & ML world

DeeperML - #artificialintelligence

@www.aiwire.net //
The Quantum Economic Development Consortium (QED-C) has released a report detailing the potential synergies between Quantum Computing (QC) and Artificial Intelligence (AI). The report, based on a workshop, highlights how these two technologies can work together to solve problems currently beyond the reach of classical computing. AI could be used to accelerate circuit design, application development, and error correction in QC. Conversely, QC offers the potential to enhance AI models by efficiently solving complex optimization and probabilistic tasks, which are infeasible for classical systems.

A hybrid approach, integrating the strengths of classical AI methods with QC algorithms, is expected to substantially reduce algorithmic complexity and improve the efficiency of computational processes and resource allocation. The report identifies key areas where this integration can yield significant benefits, including chemistry, materials science, logistics, energy, and environmental modeling. The applications could range from predicting high-impact weather events to improving the modeling of chemical reactions for pharmaceutical advancements.
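As a concrete, highly simplified illustration of the hybrid pattern the report describes, the sketch below simulates the classical half of a variational loop: a classical optimizer tunes a parameter whose cost would, on real hardware, be measured by a quantum circuit. The "circuit" here is an invented stand-in function, and all names and numbers are illustrative; nothing in this toy comes from the QED-C report.

```python
import math

# Toy sketch of a hybrid quantum-classical loop: a classical optimizer
# tunes parameters of a (here, simulated) quantum cost evaluation.
# The "circuit" and all constants are invented for illustration.

def quantum_cost(theta: float) -> float:
    """Stand-in for an expectation value measured on quantum hardware."""
    return 1.0 - math.cos(theta - 0.7) ** 2   # minimum at theta = 0.7

def hybrid_optimize(theta: float = 0.0, lr: float = 0.2,
                    steps: int = 200, eps: float = 1e-4) -> float:
    """Classical finite-difference gradient descent over quantum evaluations."""
    for _ in range(steps):
        grad = (quantum_cost(theta + eps) - quantum_cost(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

theta = hybrid_optimize()   # converges near the cost minimum at 0.7
```

The division of labor is the point: the quantum device only evaluates the hard-to-simulate cost, while the classical side handles optimization and bookkeeping.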

The report also acknowledges the necessity of cross-industry collaboration, expanded academic research, and increased federal support to advance QC + AI development. Celia Merzbacher, Executive Director of QED-C, emphasized the importance of collaboration between industry, academia, and governments to maximize the potential of these technologies. A House Science Committee hearing is scheduled to assess the progress of the National Quantum Initiative, underscoring the growing importance of quantum technologies in the U.S.

Recommended read:
References :
  • AIwire: QED-C Workshop Identifies Quantum AI Targets
  • www.aiwire.net: QED-C Workshop Identifies Quantum AI Targets
  • thequantuminsider.com: QED-C Report Identifies Use Cases at the Intersection of Quantum Computing and Artificial Intelligence

@www.microsoft.com //
Microsoft is actively exploring the potential of artificial intelligence to revolutionize fusion energy research. This initiative aims to accelerate the development of a clean, scalable, and virtually limitless energy source. The first Microsoft Research Fusion Summit recently convened global experts to discuss and explore how AI can play a pivotal role in unlocking the secrets of fusion power. This summit fostered collaborations with leading institutions and researchers, with the ultimate goal of expediting progress toward practical fusion energy generation.

The summit showcased ongoing efforts to apply AI in various aspects of fusion research. Experts from the DIII-D National Fusion Program, North America's largest fusion facility, demonstrated how AI is already being used to advance reactor design and operations. These applications include using AI for active plasma control to prevent disruptive instabilities, implementing AI-controlled trajectories to avoid tearing modes, and utilizing machine learning-derived density limits for safer, high-density operations.

Microsoft believes that AI can significantly shorten the timeline for realizing nuclear fusion as a viable energy source. This advancement, in turn, could provide the immense power required to fuel the growing demands of AI itself. Ashley Llorens, Corporate Vice President and Managing Director of Microsoft Research Accelerator, envisions a self-reinforcing system where AI drives sustainability, including the development of fusion energy. The challenge now lies in harnessing the combined power of AI and high-performance computing, along with international collaboration, to model and optimize future fusion reactor designs.

Recommended read:
References :
  • The Register - Software: Microsoft believes AI can hasten development of nuclear fusion as a practical energy source, which could in turn accelerate answers to the question of how to power AI.
  • www.microsoft.com: The first Microsoft Research Fusion Summit brought together global experts to explore how AI can help unlock the potential of fusion energy. Discover how collaborations with leading institutions can help speed progress toward clean, scalable energy.

Evan Ackerman@IEEE Spectrum //
Amazon is enhancing its warehouse operations with the introduction of Vulcan, a new robot equipped with a sense of touch. This advancement is aimed at improving the efficiency of picking and handling packages within its fulfillment centers. The Vulcan robot, armed with gripping pincers, built-in conveyor belts, and a pointed probe, is designed to handle 75% of the package types encountered in Amazon's warehouses. This new capability represents a "fundamental leap forward in robotics," according to Aaron Parness, Amazon’s director of applied science, as it enables the robot to "feel" the objects it's interacting with, a feature previously unattainable for Amazon's robots.

Vulcan's sense of touch allows it to navigate the challenges of picking items from cluttered bins, mastering what some call 'bin etiquette'. Unlike older robots, which Parness describes as "numb and dumb" because of a lack of sensors, Vulcan can measure grip strength and gently push surrounding objects out of the way. This ensures that it remains below the damage threshold when handling items, a critical improvement for retrieving items from the small fabric pods Amazon uses to store inventory in fulfillment centers. These pods contain up to 10 items within compartments that are only about one foot square, posing a challenge for robots without the finesse to remove a single object without damaging others.

Amazon claims that Vulcan's introduction is made possible through key advancements in robotics, engineering, and physical artificial intelligence. While the company did not specify the exact number of jobs Vulcan may create or displace, it emphasized that its robotics systems have historically led to the creation of new job categories focused on training, operating, and maintaining the robots. Vulcan, with its enhanced capabilities, is poised to significantly impact Amazon's ability to manage the 400 million SKUs at a typical fulfillment center, promising increased efficiency and reduced risk of damage to items.

Recommended read:
References :
  • IEEE Spectrum: At an event in Dortmund, Germany today, Amazon announced a new robotic system called Vulcan, which the company is calling “its first robotic system with a genuine sense of touch—designed to transform how robots interact with the physical world.”
  • techstrong.ai: Amazon’s Vulcan Has the ‘Touch’ to Handle Most Packages
  • IEEE Spectrum: Amazon’s Vulcan Robots Are Mastering Picking Packages
  • www.eweek.com: Amazon’s Vulcan Robot with Sense of Touch: ‘Fundamental Leap Forward in Robotics’
  • analyticsindiamag.com: Amazon Unveils Vulcan, Its First Robot With a Sense of Touch

Evan Ackerman@IEEE Spectrum //
Amazon has unveiled Vulcan, an AI-powered robot with a sense of touch, designed for use in its fulfillment centers. This groundbreaking robot represents a "fundamental leap forward in robotics," according to Amazon's director of applied science, Aaron Parness. Vulcan is equipped with sensors that allow it to "feel" the objects it is handling, enabling capabilities previously unattainable for Amazon robots. This sense of touch allows Vulcan to manipulate objects with greater dexterity and avoid damaging them or other items nearby.

Vulcan operates using "end of arm tooling" that includes force feedback sensors. These sensors enable the robot to understand how hard it is pushing or holding an object, ensuring it remains below the damage threshold. Amazon says that Vulcan can easily manipulate objects to make room for whatever it’s stowing, because it knows when it makes contact and how much force it’s applying. Vulcan helps to bridge the gap between humans and robots, bringing greater dexterity to the devices.
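To make the force-feedback idea concrete, here is a toy control loop in the spirit of what the article describes: tighten gradually, watch the measured force, and stop either when the grip is secure or before crossing a damage threshold. All numbers, the sensor callback, and the function names are invented for illustration; Amazon has not published Vulcan's control code.

```python
# Toy illustration of force-feedback gripping. Thresholds, step size,
# and the sensor interface are all hypothetical.

DAMAGE_THRESHOLD_N = 10.0   # hypothetical maximum safe force, in newtons
FORCE_STEP_N = 0.5          # how much to tighten per control cycle

def close_gripper(read_force, min_secure_force: float = 3.0):
    """Tighten until the item is held securely, never exceeding the threshold."""
    applied = 0.0
    while applied < DAMAGE_THRESHOLD_N - FORCE_STEP_N:
        applied += FORCE_STEP_N
        measured = read_force(applied)    # force sensed at the fingertips
        if measured >= min_secure_force:  # contact made and grip is secure
            return ("gripped", measured)
    return ("abort", applied)             # back off rather than risk damage

# Simulated item that pushes back once the gripper contacts it at 2.0 N:
status, force = close_gripper(lambda f: max(0.0, f - 2.0) * 2)
```

The key property, mirroring the article's description, is that the loop reacts to measured contact force rather than commanding a fixed position, so it can stop below the damage threshold for objects of unknown stiffness.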

The introduction of Vulcan addresses a significant challenge in Amazon's fulfillment centers, which handle a vast number of stock-keeping units (SKUs). While robots already play a crucial role in completing 75% of Amazon orders, Vulcan fills a capability gap left by previous generations of robots. According to Amazon, one business per second is adopting AI, and Vulcan demonstrates the potential for AI and robotics to revolutionize warehouse operations. Amazon did not specify how many jobs Vulcan may create or displace.

Recommended read:
References :
  • betanews.com: Amazon unveils Vulcan, a package sorting, AI-powered robot with a sense of touch
  • IEEE Spectrum: Amazon’s Vulcan Robots Now Stow Items Faster Than Humans
  • www.linkedin.com: Amazon’s Vulcan Robots Are Mastering Picking Packages
  • BetaNews: Amazon has unveiled Vulcan, a package sorting, AI-powered robot with a sense of touch
  • techstrong.ai: Amazon’s Vulcan Has the ‘Touch’ to Handle Most Packages
  • eWEEK: Amazon’s Vulcan Robot with Sense of Touch: ‘Fundamental Leap Forward in Robotics’
  • www.eweek.com: Amazon’s Vulcan Robot with Sense of Touch: ‘Fundamental Leap Forward in Robotics’
  • IEEE Spectrum: Amazon’s Vulcan Robots Are Mastering Picking Packages
  • Dataconomy: This Amazon robot has a sense of feel
  • The Register: Amazon touts Vulcan – its first robot with a sense of 'touch'

@www.microsoft.com //
Microsoft Research is exploring the transformative potential of AI as "Tools for Thought," aiming to redefine AI's role in supporting human cognition. At the upcoming CHI 2025 conference, researchers will present four new research papers and co-host a workshop on this intersection of AI and human thinking. The work includes a study of how AI is changing the way we think and work, along with three prototype systems designed to support different cognitive tasks.

As AI tools become increasingly capable, Microsoft has unveiled new AI agents designed to enhance productivity across domains. The "Researcher" agent tackles complex research tasks by analyzing work data, emails, meetings, files, chats, and web information to deliver expertise on demand. Meanwhile, the "Analyst" agent functions as a virtual data scientist, processing raw data from multiple spreadsheets to forecast demand or visualize customer purchasing patterns. Microsoft says the agents unveiled over the past few weeks can help people every day with tasks such as research and cybersecurity.

Johnson & Johnson has reportedly found that only a small share of its AI use cases, between 10% and 15%, deliver about 80% of the value. After encouraging employees to experiment with AI and tracking the results of nearly 900 use cases over about three years, the company is now focusing resources on the highest-value projects. These include a generative AI copilot for sales representatives and an internal chatbot that answers employee questions; other tools in development target drug discovery and identifying and mitigating supply chain risks.

@learn.aisingapore.org //
MIT researchers have achieved a breakthrough in artificial intelligence, specifically aimed at enhancing the accuracy of AI-generated code. This advancement focuses on guiding large language models (LLMs) to produce outputs that strictly adhere to the rules and structures of various programming languages, preventing common errors that can cause system crashes. The new technique, developed by MIT and collaborators, ensures that the AI's focus remains on generating valid and accurate code by quickly discarding less promising outputs. This approach not only improves code quality but also significantly boosts computational efficiency.

This efficiency gain allows smaller LLMs to outperform larger models at producing accurate, well-structured outputs across diverse real-world scenarios, including molecular biology and robotics. The technique avoids the pitfalls of existing approaches, which either distort the model's intended meaning or are too time-consuming for complex tasks, offering instead a more efficient way to steer an LLM toward text that adheres to a given structure, such as a programming language, while remaining error-free.

The implications of this research extend beyond academic circles, potentially revolutionizing programming assistants, AI-driven data analysis, and scientific discovery tools. By enabling non-experts to control AI-generated content, such as business professionals creating complex SQL queries using natural language prompts, this architecture could democratize access to advanced programming and data manipulation. The findings will be presented at the International Conference on Learning Representations.
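The core idea, generating only continuations that can still parse, can be illustrated with a toy token filter. This is not the MIT team's actual algorithm; it is a generic sketch of grammar-constrained decoding, using an invented "balanced arithmetic expression" grammar and made-up candidate tokens.

```python
import re

# Toy "grammar": arithmetic expressions over digits with balanced parentheses.
# A real system would use the target language's full grammar and an
# incremental parser; everything here is invented for illustration.

def is_valid_prefix(text: str) -> bool:
    """Return True if text could still be extended into a valid expression."""
    if not re.fullmatch(r"[0-9+\-*/() ]*", text):
        return False
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:        # closed more than opened: unrecoverable
                return False
    return True

def constrained_step(prefix: str, candidates: list[str]) -> list[str]:
    """Keep only candidate next tokens that preserve prefix validity."""
    return [tok for tok in candidates if is_valid_prefix(prefix + tok)]

# The model proposes tokens; continuations that can never parse are
# discarded early, so no compute is wasted ranking doomed outputs.
print(constrained_step("(1 + 2", [")", "(", "]", "3"]))  # [')', '(', '3']
```

Pruning invalid continuations at each step, rather than generating a full output and checking it afterward, is what makes this style of guidance cheap enough to let small models compete with larger ones on structured tasks.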

Recommended read:
References :
  • LearnAI: Making AI-generated code more accurate in any language | MIT News Programmers can now use large language models (LLMs) to generate computer code more quickly. However, this only makes programmers’ lives easier if that code follows the rules of the programming language and doesn’t cause a computer to crash.
  • news.mit.edu: A new technique automatically guides an LLM toward outputs that adhere to the rules of whatever programming language or other format is being used.
  • learn.aisingapore.org: Making AI-generated code more accurate in any language | MIT News
  • techxplore.com: Making AI-generated code more accurate in any language

Ryan Daws@AI News //
OpenAI has secured a massive $40 billion funding round, led by SoftBank, catapulting its valuation to an unprecedented $300 billion. The investment ties OpenAI with TikTok parent ByteDance Ltd. as the world's second-most valuable private company, trailing only Elon Musk's SpaceX Corp. The deal marks one of the largest capital infusions in the tech industry and underscores the escalating significance of AI.

The fresh infusion of capital is expected to fuel several key initiatives at OpenAI. The funding will support expanded research and development, and upgrades to computational infrastructure. This includes the upcoming release of a new open-weight language model with enhanced reasoning capabilities. OpenAI said the funding round would allow the company to “push the frontiers of AI research even further” and “pave the way” towards AGI, or artificial general intelligence.

Recommended read:
References :
  • Fello AI: OpenAI has closed a $40 billion funding round, boosting its valuation to $300 billion. The deal, led by SoftBank, is one of the largest capital infusions in the tech industry and marks a significant milestone for the company.
  • SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B. The bumper funding round was led by SoftBank Group Corp. and saw participation from existing backers of OpenAI, including Microsoft Corp., Coatue Management, Thrive Capital and Altimeter Capital.
  • AI News | VentureBeat: In a move that surprised the tech industry Monday, OpenAI said it has secured a monumental $40 billion funding round led by SoftBank, catapulting its valuation to an unprecedented $300 billion -- making it the largest private equity investment on record.
  • www.theguardian.com: OpenAI said it had raised $40bn in a funding round that valued the ChatGPT maker at $300bn – the biggest capital-raising session ever for a startup.

Michael Nuñez@AI News | VentureBeat //
OpenAI, the company behind ChatGPT, has announced a significant strategic shift by planning to release its first open-weight AI model since 2019. This move comes amidst mounting economic pressures from competitors like DeepSeek and Meta, whose open-source models are increasingly gaining traction. CEO Sam Altman revealed the plans on X, stating that the new model will have reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's cloud-based subscription model.

This decision marks a notable change for OpenAI, which has historically defended closed, proprietary models. The company is now gathering developer feedback to make the new model as useful as possible, with events planned in San Francisco, Europe, and Asia-Pacific. According to OpenAI, as models improve, startups and developers increasingly want more tunable latency and on-premises deployments with full data control.

The shift comes alongside a monumental $40 billion funding round led by SoftBank, which has catapulted OpenAI's valuation to $300 billion. SoftBank will initially invest $10 billion, with the remaining $30 billion contingent on OpenAI transitioning to a for-profit structure by the end of the year. This funding will help OpenAI continue building AI systems that drive scientific discovery, enable personalized education, enhance human creativity, and pave the way toward artificial general intelligence. The release of the open-weight model is expected to help OpenAI compete with the growing number of efficient open-source alternatives and counter the criticisms that have come from remaining a closed model.

Recommended read:
References :
  • SiliconANGLE: OpenAI to launch its first ‘open-weights’ model since 2019
  • techxplore.com: In shift, OpenAI announces open AI model
  • venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
  • THE DECODER: OpenAI plans to release open-weight reasoning LLM without usage restrictions
  • www.theguardian.com: OpenAI raises up to $40bn in record-breaking deal with SoftBank
  • Fello AI: OpenAI Secures Historic $40 Billion Funding Round
  • THE DECODER: SoftBank and OpenAI announced a major partnership on Monday that includes billions in annual spending and a new joint venture focused on the Japanese market.
  • www.techrepublic.com: Developers Wanted: OpenAI Seeks Feedback About Open Model That Will Be Revealed ‘In the Coming Months’
  • SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B
  • techstrong.ai: OpenAI has secured up to $40 billion in a record new funding round led by SoftBank Group that would give the artificial intelligence (AI) pioneer a whopping $300 billion valuation as it ramps up AI research, infrastructure and tools.
  • WIRED: OpenAI is preparing to launch its first open-source language model in recent years.
  • Charlie Fink: OpenAI Raises $40 Billion, Runway AI Video $380 Million, Amazon, Oracle, TikTok Suitors Await Decision

Ryan Daws@AI News //
OpenAI is set to release its first open-weight language model since 2019, marking a strategic shift for the company. This move comes amidst growing competition in the AI landscape, with rivals like DeepSeek and Meta already offering open-source alternatives. Sam Altman, OpenAI's CEO, announced the upcoming model will feature reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's traditional cloud-based approach.

This decision follows OpenAI securing a $40 billion funding round, although reports suggest a potential breakdown of $30 billion from SoftBank and $10 billion from Microsoft and venture capital funds. Despite the fresh funding, OpenAI also faces scrutiny over its training data. A recent study by the AI Disclosures Project suggests that OpenAI's GPT-4o model demonstrates "strong recognition" of copyrighted data, potentially accessed without consent. This raises ethical questions about the sources used to train OpenAI's large language models.

Recommended read:
References :
  • Fello AI: OpenAI Secures Historic $40 Billion Funding Round
  • AI News | VentureBeat: $40B into the furnace: As OpenAI adds a million users an hour, the race for enterprise AI dominance hits a new gear
  • InnovationAus.com: OpenAI has closed a significant $40 billion funding round, led by SoftBank Group, pushing its valuation to $300 billion.
  • Maginative: OpenAI Secures Record $40 Billion in Funding, Reaching $300 Billion Valuation
  • www.theguardian.com: OpenAI said it had raised $40bn in a funding round that valued the ChatGPT maker at $300bn – the biggest capital-raising session ever for a startup.
  • The Verge: OpenAI just raised another $40 billion round led by SoftBank
  • SiliconANGLE: OpenAI bags $40B in funding, increasing its post-money valuation to $300B
  • techxplore.com: OpenAI says it raised $40 bn at valuation of $300 bn
  • THE DECODER: OpenAI nears completion of multi-billion dollar funding round
  • Kyle Wiggers: OpenAI raises $40B at $300B post-money valuation
  • THE DECODER: Softbank leads OpenAI's $40 billion funding round
  • Verdict: OpenAI has secured a $40 billion funding round, marking the biggest capital raising ever for a startup, with a $300 billion valuation. The deal is led by SoftBank and backed by leading investors.
  • Crunchbase News: OpenAI secured $40 billion in funding in a record-breaking round led by SoftBank, valuing the company at $300 billion.
  • bsky.app: OpenAI has raised $40 billion at a $300 billion valuation. For context, Boeing has a $128 billion market cap, Disney has a $178 billion market cap, and Chevron has a $295 billion market cap.
  • Pivot to AI: OpenAI signs its $40 billion deal with SoftBank! Or maybe $30 billion, probably
  • TechInformed: OpenAI has raised more than $40 billion in a fundraise with Japanese telco SoftBank and other investors, valuing the ChatGPT company at more than $300bn.
  • www.techrepublic.com: OpenAI Secures $40B in Historic Funding Round — But There’s a $10B Catch
  • techstrong.ai: OpenAI has secured up to $40 billion in a record new funding round led by SoftBank Group that would give the artificial intelligence (AI) pioneer a whopping $300 billion valuation as it ramps up AI research, infrastructure and tools.
  • SiliconANGLE: OpenAI to launch its first ‘open-weights’ model since 2019
  • venturebeat.com: OpenAI to release open-source model as AI economics force strategic shift
  • AI News: Study claims OpenAI trains AI models on copyrighted data
  • Charlie Fink: OpenAI raises $40 billion, Runway’s $380 million raise and its stunning Gen-4 AI model, Anthropic warns AI may lie, plus vibe filmmaking with DeepMind.
  • thezvi.wordpress.com: We Are Going to Need A Bigger Compute Budget — OpenAI has dropped its rule about not mimicking existing art styles.


Maximilian Schreiner@THE DECODER //
Google has unveiled Gemini 2.5 Pro, its latest and "most intelligent" AI model to date, showcasing significant advancements in reasoning, coding proficiency, and multimodal functionalities. According to Google, these improvements come from combining a significantly enhanced base model with improved post-training techniques. The model is designed to analyze complex information, incorporate contextual nuances, and draw logical conclusions with unprecedented accuracy. Gemini 2.5 Pro is now available for Gemini Advanced users and on Google's AI Studio.

Google emphasizes the model's "thinking" capabilities, achieved through chain-of-thought reasoning, which allows it to break down complex tasks into multiple steps and reason through them before responding. This new model can handle multimodal input from text, audio, images, videos, and large datasets. Additionally, Gemini 2.5 Pro exhibits strong performance in coding tasks, surpassing Gemini 2.0 in specific benchmarks and excelling at creating visually compelling web apps and agentic code applications. The model also achieved 18.8% on Humanity’s Last Exam, demonstrating its ability to handle complex knowledge-based questions.

Recommended read:
References :
  • SiliconANGLE: Google LLC said today it’s updating its flagship Gemini artificial intelligence model family by introducing an experimental Gemini 2.5 Pro version.
  • The Tech Basic: Google's New AI Models “Think” Before Answering, Outperform Rivals
  • AI News | VentureBeat: Google releases ‘most intelligent model to date,’ Gemini 2.5 Pro
  • Analytics Vidhya: We Tried the Google 2.5 Pro Experimental Model and It’s Mind-Blowing!
  • www.tomsguide.com: Google unveils Gemini 2.5 — claims AI breakthrough with enhanced reasoning and multimodal power
  • Google DeepMind Blog: Gemini 2.5: Our most intelligent AI model
  • THE DECODER: Google Deepmind has introduced Gemini 2.5 Pro, which the company describes as its most capable AI model to date.
  • intelligence-artificielle.developpez.com: Google DeepMind has launched Gemini 2.5 Pro, an AI model that reasons before responding, claiming it leads on several reasoning and coding benchmarks
  • The Tech Portal: Google unveils Gemini 2.5, its most intelligent AI model yet with ‘built-in thinking’
  • Ars OpenForum: Google says the new Gemini 2.5 Pro model is its “smartest” AI yet
  • The Official Google Blog: Gemini 2.5: Our most intelligent AI model
  • www.techradar.com: I pitted Gemini 2.5 Pro against ChatGPT o3-mini to find out which AI reasoning model is best
  • bsky.app: Google's AI comeback is official. Gemini 2.5 Pro Experimental leads in benchmarks for coding, math, science, writing, instruction following, and more, ahead of OpenAI's o3-mini, OpenAI's GPT-4.5, Anthropic's Claude 3.7, xAI's Grok 3, and DeepSeek's R1. The narrative has finally shifted.
  • Shelly Palmer: Google’s Gemini 2.5: AI That Thinks Before It Speaks
  • bdtechtalks.com: Gemini 2.5 Pro is a new reasoning model that excels in long-context tasks and benchmarks, revitalizing Google’s AI strategy against competitors like OpenAI.
  • Interconnects: The end of a busy spring of model improvements and what's next for the presumed leader in AI abilities.
  • www.techradar.com: Gemini 2.5 is now available for Advanced users and it seriously improves Google’s AI reasoning
  • www.zdnet.com: Google releases 'most intelligent' experimental Gemini 2.5 Pro - here's how to try it
  • Unite.AI: Gemini 2.5 Pro is Here—And it Changes the AI Game (Again)
  • TestingCatalog: Gemini 2.5 Pro sets new AI benchmark and launches on AI Studio and Gemini
  • Analytics Vidhya: Google DeepMind's latest AI model, Gemini 2.5 Pro, has reached the #1 position on the Arena leaderboard.
  • AI News: Gemini 2.5: Google cooks up its ‘most intelligent’ AI model to date
  • Fello AI: Google’s Gemini 2.5 Shocks the World: Crushing AI Benchmark Like No Other AI Model!
  • Analytics India Magazine: Google Unveils Gemini 2.5, Crushes OpenAI GPT-4.5, DeepSeek R1, & Claude 3.7 Sonnet
  • Practical Technology: Practical Tech covers the launch of Google's Gemini 2.5 Pro and its new AI benchmark achievements.
  • www.producthunt.com: Google's most intelligent AI model
  • Windows Copilot News: Google reveals AI ‘reasoning’ model that ‘explicitly shows its thoughts’
  • AI News | VentureBeat: Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet
  • thezvi.wordpress.com: Gemini 2.5 Pro Experimental is America’s next top large language model. That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of …
  • www.computerworld.com: Gemini 2.5 can, among other things, analyze information, draw logical conclusions, take context into account, and make informed decisions.
  • www.infoworld.com: Google introduces Gemini 2.5 reasoning models
  • Maginative: Google's Gemini 2.5 Pro leads AI benchmarks with enhanced reasoning capabilities, positioning it ahead of competing models from OpenAI and others.
  • www.infoq.com: Google's Gemini 2.5 Pro is a powerful new AI model that's quickly becoming a favorite among developers and researchers. It's capable of advanced reasoning and excels in complex tasks.
  • AI News | VentureBeat: Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI
  • Communications of the ACM: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • The Next Web: Google has released Gemini 2.5 Pro, an updated AI model focused on enhanced reasoning, code generation, and multimodal processing.
  • www.tomsguide.com: Gemini 2.5 Pro is now free to all users in surprise move
  • Composio: Google just launched Gemini 2.5 Pro on March 26th, claiming to be the best in coding, reasoning, and overall performance.
  • Composio: Google's Gemini 2.5 Pro, released on March 26th, is being hailed for its enhanced reasoning, coding, and multimodal capabilities.
  • Analytics India Magazine: Gemini 2.5 Pro is better than the Claude 3.7 Sonnet for coding in the Aider Polyglot leaderboard.
  • www.zdnet.com: Gemini's latest model outperforms OpenAI's o3 mini and Anthropic's Claude 3.7 Sonnet on the latest benchmarks. Here's how to try it.
  • www.marketingaiinstitute.com: [The AI Show Episode 142]: ChatGPT’s New Image Generator, Studio Ghibli Craze and Backlash, Gemini 2.5, OpenAI Academy, 4o Updates, Vibe Marketing & xAI Acquires X
  • www.tomsguide.com: Gemini 2.5 is free, but can it beat DeepSeek?
  • www.tomsguide.com: Google Gemini could soon help your kids with their homework — here’s what we know
  • PCWorld: Google’s latest Gemini 2.5 Pro AI model is now free for all users
  • www.techradar.com: Google just made Gemini 2.5 Pro Experimental free for everyone, and that's awesome.
  • Last Week in AI: #205 - Gemini 2.5, ChatGPT Image Gen, Thoughts of LLMs

Tris Warkentin@The Official Google Blog //
Google AI has released Gemma 3, a new family of open AI models designed for efficient, on-device applications. Gemma 3 models are built with technology similar to that of Gemini 2.0 and are intended to run efficiently on a single GPU or TPU. The models are available in four sizes: 1B, 4B, 12B, and 27B parameters, with both pre-trained and instruction-tuned variants, allowing users to select the model that best fits their hardware and specific application needs.

Gemma 3 offers practical advantages in efficiency and portability. For example, the 27B version has demonstrated robust performance in evaluations while still being capable of running on a single GPU. The 4B, 12B, and 27B models can process both text and images, and the models support more than 140 languages. With a context window of 128,000 tokens, they are well suited to tasks that require processing large amounts of information. Google has built safety protocols into Gemma 3, including ShieldGemma 2, a safety checker for images.
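Because the family spans 1B to 27B parameters, a practical first step is matching a variant to the accelerator you have. The sketch below illustrates that selection; the VRAM thresholds are rough assumptions for reduced-precision inference, not official Google guidance, while the model identifiers follow the published Hugging Face naming for the instruction-tuned checkpoints.

```python
# Illustrative sketch: pick the largest Gemma 3 instruction-tuned variant
# that fits in available accelerator memory. The GB thresholds are rough
# assumptions for quantized/reduced-precision inference, not official numbers.

GEMMA3_VARIANTS = [
    # (parameters in billions, assumed VRAM needed in GB, Hugging Face model id)
    (27, 24, "google/gemma-3-27b-it"),
    (12, 12, "google/gemma-3-12b-it"),
    (4, 6, "google/gemma-3-4b-it"),
    (1, 2, "google/gemma-3-1b-it"),
]

def pick_gemma3(vram_gb: float) -> str:
    """Return the largest variant whose assumed footprint fits in vram_gb."""
    for _params, needed_gb, model_id in GEMMA3_VARIANTS:
        if vram_gb >= needed_gb:
            return model_id
    raise ValueError(f"No Gemma 3 variant fits in {vram_gb} GB of VRAM")

print(pick_gemma3(24))  # a single 24 GB GPU can host the 27B variant
print(pick_gemma3(8))   # an 8 GB card falls back to the 4B variant
```

The chosen model id can then be passed to a standard loader (for example, a Hugging Face Transformers pipeline) on the target device.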

Recommended read:
References :
  • MarkTechPost: Google AI Releases Gemma 3: Lightweight Multimodal Open Models for Efficient and On‑Device AI
  • The Official Google Blog: Introducing Gemma 3: The most capable model you can run on a single GPU or TPU
  • AI News | VentureBeat: Google unveils open source Gemma 3 model with 128k context window
  • AI News: Details on the launch of Gemma 3 open AI models by Google.
  • The Verge: Google calls Gemma 3 the most powerful AI model you can run on one GPU
  • Maginative: Google DeepMind’s Gemma 3 Brings Multimodal AI, 128K Context Window, and More
  • TestingCatalog: Gemma 3 sets new benchmarks for open compact models with top score on LMarena
  • AI & Machine Learning: Announcing Gemma 3 on Vertex AI
  • Analytics Vidhya: Gemma 3 vs DeepSeek-R1: Is Google’s New 27B Model a Tough Competition to the 671B Giant?
  • AI & Machine Learning: How to deploy serverless AI with Gemma 3 on Cloud Run
  • The Tech Portal: Google rolls outs Gemma 3, its latest collection of lightweight AI models
  • eWEEK: Google’s Gemma 3: Does the ‘World’s Best Single-Accelerator Model’ Outperform DeepSeek-V3?
  • The Tech Basic: Gemma 3 by Google: Multilingual AI with Image and Video Analysis
  • Analytics Vidhya: Google’s Gemma 3: Features, Benchmarks, Performance and Implementation
  • www.infoworld.com: Google unveils Gemma 3 multi-modal AI models
  • www.zdnet.com: Google claims Gemma 3 reaches 98% of DeepSeek's accuracy - using only one GPU
  • AIwire: Google unveiled Gemma 3, an open-source, multimodal model family that comes in four sizes and can handle more information and instructions thanks to a larger context window.
  • Ars OpenForum: Google’s new Gemma 3 AI model is optimized to run on a single GPU
  • THE DECODER: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.
  • Gradient Flow: Gemma 3: What You Need To Know
  • Interconnects: Gemma 3, OLMo 2 32B, and the growing potential of open-source AI
  • OODAloop: Gemma 3, Google's newest lightweight, open-source AI model, is designed for multimodal tasks and efficient deployment on various devices.
  • NVIDIA Technical Blog: Google has released lightweight, multimodal, multilingual models called Gemma 3. The models are designed to run efficiently on phones and laptops.
  • LessWrong: Google DeepMind has unveiled Gemma 3, a new generation of open AI models designed to deliver high performance with a relatively small footprint, making them suitable for running on individual GPUs or TPUs.