@Google DeepMind Blog
//
Google DeepMind is intensifying its focus on AI governance and security as it ventures further into artificial general intelligence (AGI). The company splits potential threats into four categories, and one proposed mitigation is a "monitor" AI that oversees more capable models. This proactive approach includes prioritizing technical safety, conducting thorough risk assessments, and fostering collaboration within the broader AI community to navigate the development of AGI responsibly.
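The "monitor" idea can be pictured as a second, simpler model that reviews a primary model's output before it is released. The sketch below only illustrates that pattern under assumed interfaces; the callables are toy stand-ins, not DeepMind's actual system.

```python
# Minimal sketch of the "monitor" pattern: a second model reviews the primary
# model's output and can withhold it. Both callables are toy placeholders
# (assumptions for illustration), not DeepMind's system.
from typing import Callable

def monitored_generate(
    primary: Callable[[str], str],
    monitor: Callable[[str, str], bool],
    prompt: str,
) -> str:
    """Generate with `primary`; release the answer only if `monitor` approves."""
    candidate = primary(prompt)
    approved = monitor(prompt, candidate)  # monitor sees both prompt and output
    return candidate if approved else "[response withheld by monitor]"

# Toy stand-ins so the sketch runs end to end.
primary_model = lambda p: f"Draft answer to: {p}"
safety_monitor = lambda p, out: "dangerous" not in out.lower()

print(monitored_generate(primary_model, safety_monitor, "Summarise the four AGI risk categories"))
```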
DeepMind's reported clampdown on sharing research will stifle AI innovation, warns Anita Schjøll Abildgaard, CEO of Iris.ai, a Norwegian startup developing an AI-powered engine for science and one of Europe's leading companies in the space. Concerns are rising within the AI community that the new restrictions will hinder technological advances, and Abildgaard argues the drawbacks will far outweigh the benefits. Recommended read:
References :
Ryan Daws@AI News
//
OpenAI is set to release its first open-weight language model since 2019, marking a strategic shift for the company. This move comes amidst growing competition in the AI landscape, with rivals like DeepSeek and Meta already offering open-source alternatives. Sam Altman, OpenAI's CEO, announced the upcoming model will feature reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's traditional cloud-based approach.
This decision follows OpenAI securing a $40 billion funding round, with reports suggesting a breakdown of $30 billion from SoftBank and $10 billion from Microsoft and venture capital funds. Despite the fresh funding, OpenAI also faces scrutiny over its training data. A recent study by the AI Disclosures Project suggests that OpenAI's GPT-4o model demonstrates "strong recognition" of copyrighted data, potentially accessed without consent. This raises ethical questions about the sources used to train OpenAI's large language models. Recommended read:
References :
Michael Nuñez@AI News | VentureBeat
//
OpenAI, the company behind ChatGPT, has announced a significant strategic shift by planning to release its first open-weight AI model since 2019. This move comes amidst mounting economic pressures from competitors like DeepSeek and Meta, whose open-source models are increasingly gaining traction. CEO Sam Altman revealed the plans on X, stating that the new model will have reasoning capabilities and allow developers to run it on their own hardware, departing from OpenAI's cloud-based subscription model.
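For readers unfamiliar with what "running an open-weight model on your own hardware" involves, the sketch below shows the typical local-inference workflow with the Hugging Face transformers library. The model identifier is a hypothetical placeholder, since OpenAI has not yet published the model or its name.

```python
# Hedged sketch of local inference with an open-weight model via Hugging Face
# transformers. The repository id below is a placeholder, not a real release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/open-weight-model-placeholder"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain chain-of-thought reasoning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```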
This decision marks a notable change for OpenAI, which has historically defended closed, proprietary models. The company is now looking to gather developer feedback to make the new model as useful as possible, and is planning events in San Francisco, Europe, and Asia-Pacific. According to OpenAI, as models improve, startups and developers increasingly want more tunable latency and on-premises deployments with full data control. The shift comes alongside a monumental $40 billion funding round led by SoftBank, which has catapulted OpenAI's valuation to $300 billion. SoftBank will initially invest $10 billion, with the remaining $30 billion contingent on OpenAI transitioning to a for-profit structure by the end of the year. This funding will help OpenAI continue building AI systems that drive scientific discovery, enable personalized education, enhance human creativity, and pave the way toward artificial general intelligence. The release of the open-weight model is expected to help OpenAI compete with the growing number of efficient open-source alternatives and counter criticism of its remaining a closed-model company. Recommended read:
References :
Ellie Ramirez-Camara@Data Phoenix
//
The ARC Prize Foundation has launched ARC-AGI-2, a new AI benchmark designed to challenge current foundation models and track progress towards artificial general intelligence (AGI). Building on the original ARC benchmark, ARC-AGI-2 blocks brute force techniques and introduces new tasks intended for next-generation AI systems. The goal is to evaluate real progress toward AGI by requiring models to reason abstractly, generalize from few examples, and apply knowledge in new contexts, tasks that are simple for humans but difficult for machines.
The Foundation has also announced the ARC Prize 2025, a competition running from March 26 to November 3, with a grand prize of $700,000 for a solution achieving an 85% score on the ARC-AGI-2 benchmark's private evaluation dataset. Early testing results show that even OpenAI's top models experienced a significant performance drop, with o3 falling from 75% to approximately 4% on ARC-AGI-2. This highlights how the new benchmark significantly raises the bar for AI tests, measuring general fluid intelligence rather than memorized skills. Recommended read:
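As a rough illustration of how an ARC-style score like the 85% target is computed, the sketch below assumes the standard ARC convention: a task counts as solved only if the predicted output grid matches the target exactly, and the headline score is the fraction of tasks solved. Details of the official ARC-AGI-2 harness (attempt limits, private-set handling) are not reproduced here.

```python
# Illustrative ARC-style scoring: exact grid match, no partial credit.
# This is a simplified assumption, not the ARC Prize Foundation's harness.
from typing import List

Grid = List[List[int]]

def task_solved(prediction: Grid, target: Grid) -> bool:
    return prediction == target  # every cell must match

def benchmark_score(predictions: List[Grid], targets: List[Grid]) -> float:
    solved = sum(task_solved(p, t) for p, t in zip(predictions, targets))
    return solved / len(targets)

# Toy example: one of two tasks solved -> 50%.
preds = [[[1, 0], [0, 1]], [[2, 2], [2, 2]]]
targs = [[[1, 0], [0, 1]], [[2, 2], [0, 2]]]
print(f"{benchmark_score(preds, targs):.0%}")
```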
References :
Chris McKay@Maginative
//
Google co-founder Sergey Brin is pushing the company's AI team to work 60-hour weeks in an effort to accelerate the development of artificial general intelligence (AGI). Brin believes that the intense competition in the AGI race, particularly from companies like OpenAI, Meta, and Anthropic, requires a "turbocharge" of Google's efforts.
Brin's call for increased working hours and a return to the office comes as Google faces pressure to maintain its leadership position in the rapidly evolving AI landscape. In an internal memo, he stated that 60 hours a week is the "sweet spot of productivity," urging engineers to dedicate more time to building AI models. This push highlights the growing importance of AGI within the tech industry and the pressure Google faces to stay ahead. Recommended read:
References :
@oodaloop.com
//
Alibaba's CEO, Eddie Wu, has announced that the company's primary objective is now the pursuit of Artificial General Intelligence (AGI). This strategic shift was communicated to investors during an earnings call where Wu emphasized that Alibaba aims to develop models that "extend the boundaries of intelligence." The decision highlights the increasing influence of AI within the technology sector, particularly in the context of Alibaba's traditional e-commerce business, which includes services like AliExpress and Taobao.
This focus on AGI, a type of AI that matches or surpasses human cognitive capabilities, signifies Alibaba's commitment to innovation in the rapidly evolving AI landscape. Wu believes that if AGI is achieved, the AI industry could potentially become the world's largest. The company has already been active in AI development with its Qwen large language models, with the latest version unveiled in January. Alibaba's revenue has seen an 8% year-over-year increase, marking progress in its AI-driven strategies. Recommended read:
References :
@www.artificialintelligence-news.com
//
References: AI News (www.artificialintelligence-news.com)
DeepSeek, a Chinese AI startup focused on artificial general intelligence (AGI), has announced plans to open-source five repositories next week. This move, according to the company, is a commitment to transparency and community-driven innovation. The repositories are said to include the fundamental building blocks of DeepSeek’s online service. The company believes that by sharing its tools, it can contribute to broader AI research, fostering collaboration and accelerating progress in the field.
This announcement comes amid growing scrutiny and controversy surrounding DeepSeek. The company has faced allegations of data misuse and geopolitical entanglements, drawing comparisons to the challenges faced by TikTok. US lawmakers are reportedly pushing for a ban on DeepSeek due to concerns about data transfers to a banned state-owned company, and Microsoft and OpenAI have launched a probe into a potential system breach allegedly linked to DeepSeek. Despite these challenges, DeepSeek's commitment to open-source technology is viewed by some as a strategic move to address criticism and demonstrate its intentions. Recommended read:
References :
Carl Franzen@AI News | VentureBeat
//
OpenAI CEO Sam Altman has recently shared his predictions about the advancements expected in the field of AI, specifically regarding GPT-5 and the next two years of AI development. During a panel discussion, Altman stated that the progress from February 2025 to February 2027 will be even more impressive than the advancements of the last two years. He expressed strong confidence in AI's potential to accelerate scientific discovery, predicting AI systems will compress 10 years of scientific progress into a single year, potentially leading to breakthroughs in climate change and disease treatment.
Altman also explicitly referenced GPT-5 and its capabilities, posting on X that OpenAI is working toward a "magic unified intelligence": a single reasoning engine rather than multiple AI models, with no more choosing between GPT-4, GPT-4o, o3-mini, or any other variant. One model to rule them all. In a recent blog post, Altman outlined three observations about the economics of AI and warned that AI could lead to economic inequality. He suggested exploring ideas like giving everyone a "compute budget" to use AI, or relentlessly driving down the cost of intelligence. Recommended read:
References :
@lemonde.fr
//
References: www.lemonde.fr
OpenAI CEO Sam Altman recently highlighted the economics of artificial intelligence, particularly Artificial General Intelligence (AGI), emphasizing its potential to benefit all of humanity. He noted that a model's intelligence correlates with the resources invested in it, including training compute, data, and inference compute, and that continued gains can be achieved with increased spending in these areas, following predictable scaling laws.
Altman also pointed out how rapidly the cost of using AI is falling, estimating a 10x decrease every 12 months, a price drop that drives increased adoption. He emphasized that linearly increasing AI intelligence yields super-exponential socioeconomic value. Acknowledging that AI might increase inequality, he advocates solutions such as providing everyone with a "compute budget" and relentlessly driving down the cost of intelligence. Recommended read:
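As a back-of-the-envelope illustration of the cost trend Altman describes, the snippet below projects prices under a constant 10x-per-12-months decline. The starting price is an arbitrary example, not a quoted figure.

```python
# Toy projection of AI usage cost under a 10x decline every 12 months:
# cost(t) = cost_now / 10**(t / 12), with t in months. Figures are illustrative.
def projected_cost(cost_now: float, months: float, decline_per_year: float = 10.0) -> float:
    return cost_now / (decline_per_year ** (months / 12))

start = 10.00  # hypothetical dollars per million tokens today
for m in (0, 12, 24, 36):
    print(f"month {m:2d}: ${projected_cost(start, m):.4f} per million tokens")
# -> $10.0000, $1.0000, $0.1000, $0.0100
```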
References :
Alex Kantrowitz@Big Technology
//
Google DeepMind CEO Demis Hassabis has stated that Artificial General Intelligence (AGI) is likely still 3-5 years away. In a recent interview, Hassabis discussed the progress in AI and noted that while current models are capable, they still lack several attributes for AGI. These include robust reasoning, hierarchical planning, and long-term memory. He also highlighted the need for consistent performance across all cognitive tasks, pointing out that current systems are inconsistent, strong in some areas but weak in others. A key benchmark he identified for AGI was the ability to generate its own hypotheses and conjectures about science.
Google is actively exploring various applications of AI. One area of focus is integrating Google's AI assistants into smart glasses, and the company has also been making advancements with its Gemini AI model. The recently launched Samsung Galaxy S25 now supports the multimodal Gemini Nano AI, which powers image descriptions in its TalkBack accessibility app. This marks the first time a non-Google app is using this AI model. Google also continues to invest in other AI companies, recently adding another $1 billion to Anthropic, bringing its total investment to $3 billion. Recommended read:
References :
Brian Wang@NextBigFuture.com
//
References: e-Discovery Team, the-decoder.com
Sam Altman, CEO of OpenAI, has adjusted his expectations for the timeline of Artificial General Intelligence (AGI). In his 2025 essay "Reflections," Altman suggests that AI takeoff, the point at which AI systems rapidly improve and potentially achieve AGI, is more likely to occur within a few years, rather than months or a decade. This indicates a shift from his earlier perspective, expressing increased confidence in the potential for rapid advancements in AI technology.
He also attempted to temper expectations about OpenAI's next moves. This comes amid a wave of vague AI hype, with Altman noting on Twitter that the hype is out of control and that there remain many unknowns and many things that are still not understood. Recommended read:
References :