@www.marktechpost.com
//
Moonshot AI has unveiled Kimi K2, a groundbreaking open-source AI model designed to challenge proprietary systems from industry leaders like OpenAI and Anthropic. The trillion-parameter Mixture-of-Experts (MoE) model emphasizes long context, sophisticated code generation, advanced reasoning, and agentic behavior, meaning it can autonomously perform complex, multi-step tasks. Rather than simply responding to prompts, Kimi K2 is designed to actively execute actions, using tools and writing code with minimal human intervention.
Kimi K2 has demonstrated strong performance on key benchmarks, particularly in coding and software engineering tasks. On SWE-bench Verified, a challenging benchmark for software development, Kimi K2 achieved 65.8% accuracy, surpassing many existing open-source models and rivaling some proprietary ones. On LiveCodeBench, a benchmark designed to simulate realistic coding scenarios, it attained 53.7% accuracy, outperforming GPT-4.1 and DeepSeek-V3. The model's strengths extend to mathematical reasoning, where it scored 97.4% on MATH-500, exceeding GPT-4.1's 92.4%. These results position Kimi K2 as a powerful, accessible alternative for developers and researchers.

The release also marks a significant step toward making advanced AI more open and accessible. Moonshot AI is offering two versions of the model: Kimi-K2-Base, for researchers and developers seeking customization, and Kimi-K2-Instruct, optimized for chat and agentic applications. The company highlights that Kimi K2 was trained on over 15.5 trillion tokens and uses a custom MuonClip optimizer to keep training stable at this unprecedented scale. By releasing the weights openly, Moonshot AI allows the AI community to leverage and build upon the model, fostering innovation in AI-powered solutions.
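For teams that want to experiment with the open weights, the Instruct variant can be queried like any other chat model once it is served. Below is a minimal sketch assuming Kimi-K2-Instruct is hosted behind an OpenAI-compatible endpoint (for example, via a serving framework such as vLLM or SGLang); the base URL, API key, and model identifier are placeholders, not official Moonshot AI values.

```python
# Minimal sketch: querying Kimi-K2-Instruct via an OpenAI-compatible endpoint.
# Assumptions: the open weights are served by an OpenAI-compatible server
# (e.g., vLLM or SGLang); base_url, api_key, and the model name below are
# placeholders for your own deployment, not official values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder: your serving endpoint
    api_key="EMPTY",                      # local servers often ignore the key
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a singly linked list."},
    ],
    temperature=0.6,
)

print(response.choices[0].message.content)
```

The agentic use cases described above would layer tool definitions on top of the same chat interface; the call shown here covers only plain text generation.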
Brian Wang@NextBigFuture.com
//
xAI has unveiled its latest artificial intelligence model, Grok 4, and leaked benchmarks suggest significant advances. Reports indicate Grok 4 scored 45% on Humanity's Last Exam when reasoning is applied, a substantial leap that suggests the model could surpass current industry leaders. The result underscores the rapidly intensifying competition within the AI sector and has generated considerable excitement among AI enthusiasts and researchers awaiting the official release and further performance evaluations.
The release of Grok 4 follows recent controversies surrounding earlier versions of the chatbot, which exhibited problematic behavior, including the dissemination of antisemitic remarks and conspiracy theories. Elon Musk's xAI has apologized for these incidents, stating that a recent code update contributed to the offensive outputs. The company has committed to addressing the issues, including making system prompts public to ensure greater transparency and prevent future misconduct. Despite these past challenges, the focus now shifts to Grok 4's promised capabilities and its potential to set new standards in AI performance.

Alongside the base Grok 4 model, xAI has introduced Grok 4 Heavy, a multi-agent system reportedly capable of scoring 50% on Humanity's Last Exam. The company has also announced new subscription plans, including a $300-per-month option for the "SuperGrok Heavy" tier. These tiered offerings suggest a strategy of catering to different user needs, from general consumers to power users and developers. xAI also plans new connectors for platforms such as Notion, Slack, and Gmail, aiming to broaden Grok's utility and integrate it more seamlessly into users' workflows.
Mark Tyson@tomshardware.com
//
OpenAI has launched o3-pro for ChatGPT, marking a significant advancement in both performance and cost-efficiency for its reasoning models. The new model is accessible through the OpenAI API and the Pro plan, priced at $200 per month. Alongside the launch, OpenAI has cut the price of its previous o3 model by 80%. This strategic move aims to provide users with more powerful and affordable AI capabilities, challenging competitors in the AI model market and expanding the boundaries of reasoning.
o3-pro is positioned to offer stronger raw reasoning, but early reviews suggest mixed results against competing models such as Claude 4 Opus and Gemini 2.5 Pro. Some tests indicate that Claude 4 Opus currently excels at prompt following, output quality, and understanding user intent, while Gemini 2.5 Pro is considered the most economical option with a superior price-to-performance ratio. Initial assessments suggest that o3-pro may not be worth the higher cost unless the user's primary interest lies in research applications.

The launch coincides with other strategic moves by OpenAI, including consolidating its public-sector AI products, such as ChatGPT Gov, under the "OpenAI for Government" banner. OpenAI has also secured a $200 million contract with the U.S. Department of Defense to explore AI applications in administration and security. At the same time, the company is navigating challenges such as the planned deprecation of GPT-4.5 Preview in the API, which has frustrated developers who relied on the model in their applications and workflows.
@www.marktechpost.com
//
Google has unveiled a new AI model designed to forecast tropical cyclones with improved accuracy. Developed through a collaboration between Google Research and DeepMind, the model is accessible via a newly launched website called Weather Lab. The AI aims to predict both the path and intensity of cyclones days in advance, overcoming limitations present in traditional physics-based weather prediction models. Google claims its algorithm achieves "state-of-the-art accuracy" in forecasting cyclone track and intensity, as well as details like formation, size, and shape.
The AI model was trained on two extensive datasets: one describing the characteristics of nearly 5,000 cyclones from the past 45 years, and another containing millions of weather observations. Internal testing demonstrated the algorithm's ability to accurately predict the paths of recent cyclones, in some cases up to a week in advance. The model can generate 50 possible scenarios, extending forecast capabilities up to 15 days. The U.S. National Hurricane Center has already adopted the system, using these experimental AI predictions alongside traditional forecasting models in its operational workflow.

The ability to forecast up to 15 days in advance marks a significant improvement over current models, which typically provide 3-5 day forecasts. On Weather Lab, the model is available alongside two years' worth of historical forecasts, as well as data from traditional physics-based weather prediction algorithms. According to Google, this could help weather agencies and emergency services better anticipate a cyclone's path and intensity.
Michael Nuñez@AI News | VentureBeat
//
Google has recently rolled out its latest Gemini 2.5 Flash and Pro models on Vertex AI, bringing advanced AI capabilities to enterprises. The release includes the general availability of Gemini 2.5 Flash and Pro, along with a new Flash-Lite model available for testing. These updates aim to provide organizations with the tools needed to build sophisticated and efficient AI solutions.
The Gemini 2.5 Flash model is designed for speed and efficiency, making it suitable for tasks such as large-scale summarization, responsive chat applications, and data extraction. Gemini 2.5 Pro handles complex reasoning, advanced code generation, and multimodal understanding, while the new Flash-Lite model offers cost-efficient performance for high-volume tasks. These models are now production-ready within Vertex AI, offering the stability and scalability needed for mission-critical applications.

Google CEO Sundar Pichai has highlighted the improved performance of the Gemini 2.5 Pro update, particularly in coding, reasoning, science, and math, and noted that the update incorporates feedback to improve the style and structure of responses. Google is also offering Supervised Fine-Tuning (SFT) for Gemini 2.5 Flash, enabling enterprises to tailor the model to their own data and needs, and an updated Live API with native audio is in public preview, designed to streamline the development of complex, real-time audio AI systems.
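As an illustration of how an enterprise might call these models, the sketch below uses the google-genai Python SDK against Vertex AI with Gemini 2.5 Flash. The project ID, region, and prompt are placeholders for your own Google Cloud setup, and the snippet covers only basic text generation, not the SFT or Live API features mentioned above.

```python
# Minimal sketch: calling Gemini 2.5 Flash on Vertex AI with the google-genai SDK.
# Assumptions: the google-genai package is installed, application-default
# credentials are configured, and the project/location values are placeholders.
from google import genai

client = genai.Client(
    vertexai=True,
    project="my-gcp-project",  # placeholder Google Cloud project ID
    location="us-central1",    # placeholder region
)

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the trade-offs between Gemini 2.5 Flash and Pro in three bullet points.",
)

print(response.text)
```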
Sana Hassan@MarkTechPost
//
References: siliconangle.com, Maginative
Google has recently unveiled significant advancements in artificial intelligence, showcasing its continued leadership in the tech sector. One notable development is an AI model designed for forecasting tropical cyclones. This model, developed through a collaboration between Google Research and DeepMind, is available via the newly launched Weather Lab website. It can predict the path and intensity of hurricanes up to 15 days in advance. The AI system learns from decades of historical storm data, reconstructing past weather conditions from millions of observations and utilizing a specialized database containing key information about storm tracks and intensity.
Weather Lab marks the first time the National Hurricane Center will use experimental AI predictions in its official forecasting workflow. The announcement comes at an opportune time, as forecasters predict an above-average Atlantic hurricane season in 2025. The AI model can generate 50 different hurricane scenarios, offering a more comprehensive prediction range than current models, which typically provide forecasts for only 3-5 days, and it has achieved a roughly 1.5-day improvement in prediction accuracy, equivalent to about a decade's worth of traditional forecasting progress.

Google is also experiencing rapid growth in AI usage. Google DeepMind noted that Google's AI usage grew 50 times in one year, reaching 500 trillion tokens per month. Logan Kilpatrick of Google DeepMind discussed Google's transformation from a "sleeping giant" into an AI powerhouse, citing superior compute infrastructure, advanced models like Gemini 2.5 Pro, and a deep talent pool in AI research.
Mark Tyson@tomshardware.com
//
OpenAI has recently launched its newest reasoning model, o3-pro, making it available to ChatGPT Pro and Team subscribers, as well as through OpenAI’s API. Enterprise and Edu subscribers will gain access the following week. The company touts o3-pro as a significant upgrade, emphasizing its enhanced capabilities in mathematics, science, and coding, and its improved ability to utilize external tools.
OpenAI has also slashed the price of o3 by 80% and priced o3-pro 87% lower than its predecessor, positioning the models as more accessible options for developers seeking advanced reasoning capabilities. The price adjustment comes as AI providers compete more aggressively on both performance and affordability. Experts note that evaluations consistently prefer o3-pro over the standard o3 model across all categories, especially in science, programming, and business tasks.

o3-pro uses the same underlying architecture as o3 but is tuned to be more reliable, especially on complex tasks, with better long-range reasoning. The model supports tools such as web browsing, code execution, vision analysis, and memory. While the added complexity can lead to slower response times, OpenAI argues the tradeoff is worthwhile for the most challenging questions, "where reliability matters more than speed, and waiting a few minutes is worth the tradeoff."
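For developers curious about the API side, the sketch below shows one way to call o3-pro with the official OpenAI Python SDK via the Responses API. It assumes the openai package is installed and an OPENAI_API_KEY is set in the environment; the prompt is purely illustrative, and responses may take noticeably longer than with faster models, as noted above.

```python
# Minimal sketch: calling o3-pro through the OpenAI Responses API.
# Assumptions: the openai Python package is installed and OPENAI_API_KEY is set;
# the prompt is purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",
    input="Prove that the sum of the first n odd numbers is n^2, step by step.",
)

print(response.output_text)
```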
Mark Tyson@tomshardware.com
//
OpenAI has launched o3-pro, a new and improved version of its AI model designed to provide more reliable and thoughtful responses, especially for complex tasks. Replacing the o1-pro model, o3-pro is accessible to Pro and Team users within ChatGPT and through the API, marking OpenAI's ongoing effort to refine its AI technology. The focus of this upgrade is to enhance the model’s reasoning capabilities and maintain consistency in generating responses, directly addressing shortcomings found in previous models.
The o3-pro model is designed to handle tasks requiring deep analytical thinking and advanced reasoning. While built on the same transformer architecture and deep learning techniques as other OpenAI models, o3-pro distinguishes itself with an improved ability to understand context. Some users have noted that o3-pro feels like o3, only modestly better, in exchange for being slower.

Comparisons with other leading models such as Claude 4 Opus and Gemini 2.5 Pro reveal interesting trade-offs. Claude 4 Opus has been praised for prompt following and understanding user intentions, while Gemini 2.5 Pro stands out for its price-to-performance ratio. Early user experiences suggest o3-pro may not always be worth the expense given its slower speed, except for research purposes. Some users report that o3-pro hallucinates modestly less, though this is still debated.