News from the AI & ML world

DeeperML - #reasoningmodel

@bdtechtalks.com //
Alibaba has recently launched QwQ-32B, a new reasoning model that performs on par with DeepSeek's R1. This is a notable achievement for the field, particularly for smaller models: the Qwen team showed that reinforcement learning on a strong base model can unlock reasoning capabilities that bring a compact model up to par with giant ones.

QwQ-32B not only matches but in places surpasses DeepSeek-R1 and OpenAI's o1-mini across key industry benchmarks, including AIME24, LiveBench, and BFCL. This is significant because QwQ-32B does it with roughly 5% of DeepSeek-R1's parameter count (32B vs. 671B), which translates into far lower inference costs without compromising quality or capability. Groq serves Qwen QwQ 32B on GroqCloud™ at roughly 400 tokens per second, letting developers build fast; the model is proving highly competitive on reasoning benchmarks and has become one of the most widely used open-source models.
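For readers who want to try it, here is a minimal sketch of streaming QwQ-32B from GroqCloud with Groq's official Python SDK. The model id qwen-qwq-32b is assumed from Groq's catalog at the time of writing; confirm it, and the SDK version, against the current GroqCloud docs.

```python
# Minimal sketch: stream QwQ-32B from GroqCloud (pip install groq).
# Assumes GROQ_API_KEY is set and "qwen-qwq-32b" is the catalog id.
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

stream = client.chat.completions.create(
    model="qwen-qwq-32b",  # assumed model id; check the GroqCloud catalog
    messages=[{"role": "user", "content": "How many primes are below 100?"}],
    stream=True,  # tokens arrive incrementally, showing the ~400 T/s pace
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```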

QwQ-32B was explicitly designed for tool use and for adapting its reasoning based on environmental feedback, a huge win for AI agents that need to reason, plan, and adapt based on context; it outperforms R1 and o1-mini on the Berkeley Function Calling Leaderboard. These capabilities underline the same point: scaled RL on a strong base model lets a 32B model hold its own against giants.
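Below is a hedged sketch of what that tool-use loop looks like over an OpenAI-compatible chat API (GroqCloud again, for continuity). The get_weather tool, its schema, and the stubbed result are hypothetical illustrations, not part of any official Qwen or Groq example.

```python
# Hypothetical tool-use loop with QwQ-32B over an OpenAI-compatible API.
import json
from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Is it raining in Hangzhou?"}]
resp = client.chat.completions.create(
    model="qwen-qwq-32b", messages=messages, tools=tools
)

msg = resp.choices[0].message
if msg.tool_calls:  # the model may also answer directly, without a tool
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    messages.append(msg)
    messages.append({  # feed a (stubbed) tool result back to the model
        "role": "tool",
        "tool_call_id": call.id,
        "content": json.dumps({"city": args["city"], "rain": False}),
    })
    resp = client.chat.completions.create(model="qwen-qwq-32b", messages=messages)
print(resp.choices[0].message.content)
```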

Recommended read:
References:
  • Last Week in AI: LWiAI Podcast #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors
  • Groq: A Guide to Reasoning with Qwen QwQ 32B
  • Last Week in AI: #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors
  • Sebastian Raschka, PhD: This article explores recent research advances in reasoning-optimized LLMs that have emerged since the release of DeepSeek R1, with a particular focus on inference-time compute scaling.
  • Analytics Vidhya: China is rapidly advancing in AI, releasing models like DeepSeek and Qwen to rival global giants.
  • Last Week in AI: Alibaba’s New QwQ 32B Model is as Good as DeepSeek-R1
  • Maginative: Despite having far fewer parameters, Qwen’s new QwQ-32B model outperforms DeepSeek-R1 and OpenAI’s o1-mini in mathematical benchmarks and scientific reasoning, showcasing the power of reinforcement learning.

@bdtechtalks.com //
Alibaba's Qwen team has unveiled QwQ-32B, a 32-billion-parameter reasoning model that rivals much larger AI models in problem-solving ability, highlighting the potential of reinforcement learning (RL) to lift AI performance. QwQ-32B excels at mathematics, coding, and scientific reasoning, outperforming models like DeepSeek-R1 (671B parameters) and OpenAI's o1-mini despite its far smaller size. Its effectiveness comes from a multi-stage RL training approach, demonstrating that smaller models trained with scaled reinforcement learning can match or surpass giant ones.

QwQ-32B is not only competitive in performance but also practical to deploy. It is available as an open-weight model under an Apache 2.0 license, letting businesses customize and deploy it with minimal restrictions. It also needs far less computational power, running on a single high-end GPU rather than the multi-GPU setups required by larger models like DeepSeek-R1. This combination of performance, accessibility, and efficiency makes QwQ-32B a valuable resource for the AI community and for enterprises seeking advanced reasoning capabilities.
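The single-GPU claim is easy to sanity-check with back-of-envelope weight-memory math (ignoring activations and KV cache, which add overhead on top):

```python
# Rough weight-memory estimate for a 32B-parameter model at different
# precisions. Activations and KV cache are ignored, so real usage is higher.
params = 32e9

for name, bytes_per_weight in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = params * bytes_per_weight / 1e9
    print(f"{name:>9}: ~{gb:.0f} GB of weights")

# fp16/bf16: ~64 GB -> an 80 GB-class GPU (A100/H100)
# int8     : ~32 GB -> a 40-48 GB card
# int4     : ~16 GB -> fits a single 24 GB consumer GPU
```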

Recommended read:
References:
  • Groq: A Guide to Reasoning with Qwen QwQ 32B
  • Analytics Vidhya: Qwen’s QwQ-32B: Small Model with Huge Potential
  • Maginative: Alibaba's Latest AI Model, QwQ-32B, Beats Larger Rivals in Math and Reasoning
  • bdtechtalks.com: Alibaba’s QwQ-32B reasoning model matches DeepSeek-R1, outperforms OpenAI o1-mini
  • Last Week in AI: LWiAI Podcast #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors

Ryan Daws@AI News //
Alibaba's Qwen team has launched QwQ-32B, a 32-billion-parameter AI model designed to rival the performance of much larger models like DeepSeek-R1, which has 671 billion parameters. The release highlights the effectiveness of scaling reinforcement learning (RL) on robust foundation models: through continued RL scaling, QwQ-32B shows significant improvements in areas like mathematical reasoning and coding proficiency.

The Qwen team successfully integrated agent capabilities into the reasoning model, allowing it to think critically, use tools, and adapt its reasoning based on environmental feedback. The model has been evaluated across a range of benchmarks, including AIME24, LiveCodeBench, LiveBench, IFEval, and BFCL, which assess mathematical reasoning, coding proficiency, and general problem-solving. QwQ-32B is available as an open-weight model on Hugging Face and ModelScope under an Apache 2.0 license, permitting both commercial and research use.
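Since the weights are open, running it locally takes only a few lines with transformers. A minimal sketch, assuming the Hugging Face repo id is Qwen/QwQ-32B (as published at release) and that enough GPU memory is available:

```python
# Minimal sketch: load and query the open-weight model with transformers.
# Assumes the repo id "Qwen/QwQ-32B"; confirm it on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/QwQ-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires accelerate; shards across GPUs if needed
)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```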

Recommended read:
References:
  • AI News | VentureBeat: Alibaba's new open source model QwQ-32B matches DeepSeek-R1 with way smaller compute requirements
  • Analytics Vidhya: In the world of large language models (LLMs) there is an assumption that larger models inherently perform better. Qwen has recently introduced its latest model, QwQ-32B, positioning it as a direct competitor to the massive DeepSeek-R1 despite having significantly fewer parameters.
  • AI News: The Qwen team at Alibaba has unveiled QwQ-32B, a 32 billion parameter AI model that demonstrates performance rivalling the much larger DeepSeek-R1. This breakthrough highlights the potential of scaling Reinforcement Learning (RL) on robust foundation models.
  • www.infoworld.com: Alibaba Cloud on Thursday launched QwQ-32B, a compact reasoning model built on its latest large language model (LLM), Qwen2.5-32b, one it says delivers performance comparable to other large cutting edge models, including Chinese rival DeepSeek and OpenAI’s o1, with only 32 billion parameters.
  • THE DECODER: Alibaba's latest AI model demonstrates how reinforcement learning can create efficient systems that match the capabilities of much larger models.
  • bdtechtalks.com: Alibaba’s QwQ-32B reasoning model matches DeepSeek-R1, outperforms OpenAI o1-mini
  • Last Week in AI: Alibaba’s New QwQ 32B Model is as Good as DeepSeek-R1
  • Last Week in AI: LWiAI Podcast #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors
  • Last Week in AI: #202 - Qwen-32B, Anthropic's $3.5 billion, LLM Cognitive Behaviors

@www.theverge.com //
OpenAI launched its latest model, o3-mini, last Friday, marking the debut of the o3 family. There are two specialized variants: o3-mini-high, which prioritizes in-depth answers and reasoning, and o3-mini-low, which favors faster responses. Benchmarks show o3-mini offering performance comparable to its predecessor, o1, while being roughly 15 times cheaper and five times faster, arguably making it the best cost-to-performance model currently available, especially with DeepSeek R1 remaining largely unavailable outside China.

o3-mini stands out as the only OpenAI model that shows a chain of thought, albeit a summarized version of the actual reasoning process, and it is the company's first reasoning model with function-calling support. It outscores o1 on FrontierMath, Codeforces, and GPQA. OpenAI CEO Sam Altman suggests that merging larger pre-trained models with reasoning capabilities could unlock new scientific knowledge, overcoming the limits of scaling through pre-training alone.
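The high/low variants are exposed in the API through the reasoning_effort parameter. A minimal sketch against the Chat Completions API; the parameter name matches OpenAI's documentation at launch, but verify it against the current API reference:

```python
# Minimal sketch: pick an o3-mini reasoning depth via reasoning_effort.
# Assumes OPENAI_API_KEY is set (pip install openai).
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # "low" trades reasoning depth for speed
    messages=[{"role": "user", "content": "Prove there are infinitely many primes."}],
)
print(resp.choices[0].message.content)
```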

Recommended read:
References:
  • techcrunch.com: OpenAI launches o3-mini, its latest reasoning model that it says is largely on par with o1 and o1-mini in capability, but runs faster and costs less (Kyle Wiggers/TechCrunch)
  • www.theverge.com: O3-mini will be available to all ChatGPT users, the first time free users of the chatbot will be able to try out OpenAI's reasoning models (Tom Warren/The Verge)
  • www.techmeme.com: O3-mini will be available to all ChatGPT users, the first time free users of the chatbot will be able to try out OpenAI's reasoning models (Tom Warren/The Verge)
  • Techmeme: OpenAI's o3-mini is out today. As with other o-series models it's a slightly difficult one to evaluate …
  • community.openai.com: OpenAI has launched o3-mini, a new reasoning model that offers improved performance over o1 while being significantly cheaper. This model supports various features and is available in the API, Assistants API, and Batch API.
  • simonwillison.net: OpenAI's o3-mini is out today and offers a lower cost and faster performance than GPT-4o and o1, costing $1.10 per 1M input tokens and $4.40 per 1M output tokens.
  • Composio: OpenAI launched its latest model, the o3-mini, last Friday.
  • AI News | VentureBeat: OpenAI is now showing more details of the reasoning process of o3-mini, its latest reasoning model.
  • Analytics Vidhya: OpenAI’s o3-mini is not even a week old and it’s already a favorite amongst ChatGPT users.
  • composio.dev: A blog post comparing OpenAI's o3-mini model to DeepSeek R1.

@www.theverge.com //
OpenAI has recently launched its o3-mini model, the first in the o3 family, showcasing advances in both speed and reasoning. The model comes in two variants: o3-mini-high, which prioritizes in-depth reasoning, and o3-mini-low, designed for quicker responses. Benchmarks indicate that o3-mini offers performance comparable to its predecessor, o1, at a significantly reduced cost, roughly 15 times cheaper and five times faster. Notably, o3-mini's API pricing even undercuts GPT-4o's, although ChatGPT usage is capped at 150 messages per day for paid users versus GPT-4o's effectively unrestricted access, underscoring its cost-effectiveness.
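The pricing claim is easy to check against the per-million-token figures quoted in the references below ($1.10/$4.40 for o3-mini, $2.50/$10 for GPT-4o, $15/$60 for o1):

```python
# Cross-checking the cost claims with the per-1M-token prices quoted in
# the references below: (input USD, output USD) per million tokens.
prices = {
    "o3-mini": (1.10, 4.40),
    "gpt-4o":  (2.50, 10.00),
    "o1":      (15.00, 60.00),
}

for model, (inp, out) in prices.items():
    print(f"{model:>8}: ${inp:>5.2f} in / ${out:>5.2f} out per 1M tokens")

# o1 vs o3-mini: the same ratio holds on both sides of the ledger.
print(f"input ratio : {prices['o1'][0] / prices['o3-mini'][0]:.1f}x")  # ~13.6x
print(f"output ratio: {prices['o1'][1] / prices['o3-mini'][1]:.1f}x")  # ~13.6x
# ~13.6x cheaper than o1 on list price, in line with the "~15x" above.
```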

OpenAI is also now providing more detailed insight into o3-mini's reasoning process, responding to transparency criticism and to competitive pressure from models like DeepSeek-R1, which fully displays its reasoning tokens. This means revealing summarized versions of the model's chain of thought (CoT), giving users greater clarity on its reasoning logic. OpenAI CEO Sam Altman believes that merging large language model scaling with reasoning capabilities could lead to "new scientific knowledge," hinting at future advances beyond current limits in inventing new algorithms or fields.

Recommended read:
References:
  • techcrunch.com: OpenAI on Friday launched a new AI "reasoning" model, o3-mini, the newest in the company's o family of reasoning models.
  • www.theverge.com: o3-mini should outperform o1 and provide faster, more accurate answers.
  • community.openai.com: Today we’re releasing the latest model in our reasoning series, OpenAI o3-mini, and you can start using it now in the API.
  • Techmeme: OpenAI launches o3-mini, its latest reasoning model that it says is largely on par with o1 and o1-mini in capability, but runs faster and costs less.
  • simonwillison.net: OpenAI's o3-mini costs $1.10 per 1M input tokens and $4.40 per 1M output tokens, cheaper than GPT-4o, which costs $2.50 and $10, and o1, which costs $15 and $60.
  • community.openai.com: This article discusses the release of OpenAI's o3-mini model and its capabilities, including its ability to search the web for data and return what it found.
  • futurism.com: This article discusses the release of OpenAI's o3-mini reasoning model, aiming to improve the performance of large language models (LLMs) by handling complex reasoning tasks. This new model is projected to be an advancement in both performance and cost efficiency.
  • the-decoder.com: This article discusses how OpenAI's o3-mini reasoning model is poised to advance scientific knowledge through the merging of LLM scaling and reasoning capabilities.
  • www.analyticsvidhya.com: This blog post highlights the development and use of OpenAI's reasoning model, focusing on its increased performance and cost-effectiveness compared to previous generations. The emphasis is on its use for handling complex reasoning tasks.
  • AI News | VentureBeat: OpenAI is now showing more details of the reasoning process of o3-mini, its latest reasoning model. The change was announced on OpenAI’s X account and comes as the AI lab is under increased pressure by DeepSeek-R1, a rival open model that fully displays its reasoning tokens.
  • Composio: This article discusses OpenAI's o3-mini model and its performance in reasoning tasks.
  • composio.dev: This article discusses OpenAI's release of the o3-mini model, highlighting its improved speed and efficiency in AI reasoning.
  • THE DECODER: Training larger and larger language models (LLMs) with more and more data hits a wall.
  • Analytics Vidhya: OpenAI’s o3-mini is not even a week old and it’s already a favorite amongst ChatGPT users.
  • slviki.org: OpenAI unveils o3-mini, a faster, more cost-effective reasoning model
  • singularityhub.com: This post talks about improvements in LLMs, focusing on the new o3-mini model from OpenAI.
  • computational-intelligence.blogspot.com: This blog post summarizes various AI-related news stories, including the launch of OpenAI's o3-mini model.
  • www.lemonde.fr: OpenAI's new o3-mini model is designed to be faster and more cost-effective than prior models.