News from the AI & ML world

DeeperML - #qwen3

@felloai.com //
Alibaba has launched Qwen3, a new family of large language models (LLMs), posing a significant challenge to Silicon Valley's AI dominance. Qwen3 is not just an incremental update but a leap forward, demonstrating capabilities that rival leading models from OpenAI, Google, and Meta. This advancement signals China’s growing prowess in AI and its potential to redefine the global tech landscape. Qwen3's strengths lie in reasoning, coding, and multilingual understanding, marking a pivotal moment in China's AI development.

The Qwen3 family includes models of varying sizes to cater to diverse applications. Key features include complex reasoning, mathematical problem-solving, and code generation. The models support 119 languages and are trained on a massive dataset of over 36 trillion tokens. Another innovation is Qwen3’s “hybrid reasoning” approach, enabling models to switch between "fast thinking" for quick responses and "slow thinking" for deeper analysis, enhancing versatility and efficiency. Alibaba has also emphasized the open-source nature of some Qwen3 models, fostering wider adoption and collaborative development in China's AI ecosystem.
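
To make the "fast thinking" / "slow thinking" switch concrete, the sketch below shows how the mode is typically toggled through Hugging Face transformers. The Qwen/Qwen3-0.6B checkpoint name and the enable_thinking chat-template flag follow Alibaba's published model cards; treat them as assumptions to verify against the official documentation rather than details confirmed by the coverage above.

```python
# Minimal sketch: toggling Qwen3's hybrid reasoning via Hugging Face transformers.
# The checkpoint name and the enable_thinking flag are assumptions taken from the
# public Qwen3 model cards; verify before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # smallest member of the family, per the announcement
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 30?"}]

# enable_thinking=True  -> "slow thinking": the model emits a reasoning trace first.
# enable_thinking=False -> "fast thinking": a direct answer with no trace.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```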

Alibaba also introduced ZeroSearch, a method that uses reinforcement learning and simulated documents to teach LLMs retrieval without real-time search. It addresses the challenge of LLMs relying on static datasets, which can become outdated. By training the models to retrieve and incorporate external information, ZeroSearch aims to improve the reliability of LLMs in real-world applications like news, research, and product reviews. This method mitigates the high costs associated with large-scale interactions with live APIs, making it more accessible for academic research and commercial deployment.
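
The coverage describes ZeroSearch's training loop only at a high level, but its shape can be sketched roughly as follows. Every interface below (the policy and simulator methods, the reward function, the noise_level curriculum knob) is a hypothetical placeholder chosen for illustration, not code from the paper.

```python
# Hedged sketch of the reported ZeroSearch idea: during reinforcement learning
# rollouts, search queries are answered by a fine-tuned "simulation" LLM that
# fabricates documents of controllable quality, so no live search API is called.

def zerosearch_episode(policy, simulator, reward_fn, question, reference, noise_level):
    """Run one simplified training episode and update the policy."""
    # The policy model decides what to search for.
    query = policy.generate_search_query(question)

    # Instead of a real search engine, the simulator LLM writes pseudo-documents;
    # noise_level controls how many are useful vs. distracting, a dial the reported
    # curriculum gradually turns up during training.
    documents = simulator.simulate_documents(query, noise_level=noise_level)

    # The policy answers using the simulated retrievals.
    answer = policy.generate_answer(question, documents)

    # Outcome-based reward, e.g. exact match or F1 against a reference answer.
    reward = reward_fn(answer, reference)

    # A standard policy-gradient style update would go here; the coverage mentions
    # reinforcement learning without committing to a specific algorithm.
    policy.update(question, documents, answer, reward)
    return reward
```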



References :
  • felloai.com: Reports Alibaba’s Qwen3 AI is Here to Challenge Silicon Valley
  • techcrunch.com: Alibaba unveils Qwen 3, a family of hybrid AI reasoning models.
  • www.marktechpost.com: ZeroSearch from Alibaba Uses Reinforcement Learning and Simulated Documents to Teach LLMs Retrieval Without Real-Time Search
  • THE DECODER: Report on Alibaba's "Web Dev" tool in Qwen which generates full front-end code from just a prompt.
  • Towards AI: Qwen-3 Fine Tuning Made Easy: Create Custom AI Models with Python and Unsloth
  • www.techradar.com: Alibaba says AI-generated search results could not only reduce reliance on Google's APIs, but cut costs by up to 88%.
  • Fello AI: Just when you thought Silicon Valley had the AI game locked down, Alibaba has unleashed Qwen3, a new generation of AI models so powerful they’re making US tech giants sweat.
Alexey Shabanov@TestingCatalog //
Alibaba's Qwen team has launched Qwen3, a new family of open-source large language models (LLMs) designed to compete with leading AI systems. The Qwen3 series includes eight models ranging from 0.6B to 235B parameters, with the larger models employing a Mixture-of-Experts (MoE) architecture for enhanced performance. This comprehensive suite offers options for developers with varied computational resources and application requirements. All the models are released under the Apache 2.0 license, making them suitable for commercial use.

The Qwen3 models boast improved agentic capabilities for tool use and support for 119 languages. The models also feature a unique "hybrid thinking mode" that allows users to dynamically adjust the balance between deep reasoning and faster responses. This is particularly valuable for developers, as it lets them spend computational resources in proportion to task complexity. Training drew on a dataset of 36 trillion tokens and, like DeepSeek R1, was optimized for reasoning.

Benchmarks indicate that Qwen3 rivals top competitors like DeepSeek R1 and Gemini Pro in areas like coding, mathematics, and general knowledge. Notably, the smaller Qwen3-30B-A3B MoE model achieves performance comparable to the Qwen3-32B dense model while activating significantly fewer parameters. The models are available on platforms like Hugging Face, ModelScope, and Kaggle, with support for deployment through frameworks like SGLang and vLLM and for local execution via tools like Ollama and llama.cpp.
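
For the server-side route, a minimal offline-inference sketch with vLLM might look like the following. The checkpoint ID mirrors the Qwen3-30B-A3B naming used in the announcement, and the sampling settings are illustrative assumptions rather than recommended values.

```python
# Minimal sketch of offline inference with vLLM, one of the serving frameworks
# mentioned above. Model ID and sampling parameters are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen3-30B-A3B")  # MoE model: only a few billion parameters active per token
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

outputs = llm.generate(
    ["Write a Python function that returns the n-th Fibonacci number."],
    params,
)
print(outputs[0].outputs[0].text)
```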



References :
  • pub.towardsai.net: TAI #150: Qwen3 Impresses as a Robust Open-Source Contender
  • gradientflow.com: Qwen 3: What You Need to Know (model architecture and capabilities, the available lineup, and why the "Hybrid Thinking Modes" are valuable for developers)
  • TestingCatalog: Reporting on Alibaba Cloud debuting 235B-parameter Qwen 3 to challenge US model dominance
  • www.analyticsvidhya.com: Qwen3 Models: How to Access, Performance, Features, and Applications
  • RunPod Blog: Qwen3 Released: How Does It Stack Up?
  • bdtechtalks.com: Alibaba’s Qwen3: Open-weight LLMs with hybrid thinking
  • AI News | VentureBeat: Alibaba launches open source Qwen3 model that surpasses OpenAI o1 and DeepSeek R1
  • the-decoder.com: Qwen3 series from Alibaba debuts with benchmark results matching top competitors
Alexey Shabanov@TestingCatalog //
Alibaba Cloud has unveiled Qwen 3, a new generation of large language models (LLMs) headlined by a 235-billion-parameter flagship, poised to challenge the dominance of US-based models. This open-weight family includes both dense and Mixture-of-Experts (MoE) architectures, offering developers a range of choices to suit their application needs and hardware constraints. The flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, and general knowledge, positioning it as one of the most powerful publicly available models.

Qwen 3 introduces a unique "thinking mode" that can be toggled for step-by-step reasoning or rapid direct answers. This hybrid reasoning approach, similar to OpenAI's "o" series, allows users to engage a more intensive process for complex queries in fields like science, math, and engineering. The models are trained on a massive dataset of 36 trillion tokens spanning 119 languages, twice the corpus of Qwen 2.5 and enriched with synthetic math and code data. This extensive training equips Qwen 3 with enhanced reasoning, multilingual proficiency, and computational efficiency.
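
Beyond the global toggle, Alibaba's release notes also describe a per-turn "soft switch" in which /think or /no_think appended to a message overrides the default mode for that turn. The snippet below only renders the chat template so the effect can be inspected; the tag names and the enable_thinking flag are assumptions drawn from those notes rather than from the coverage above.

```python
# Sketch of the reported per-turn soft switch for Qwen3's thinking mode.
# Checkpoint name, the /no_think tag, and the enable_thinking flag are assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")  # illustrative checkpoint

messages = [
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences. /no_think"},
]

# With thinking enabled globally, the trailing /no_think requests a rapid direct
# answer for this turn; replacing it with /think requests step-by-step reasoning.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
print(prompt)  # inspect the rendered template before sending it to a model
```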

The release of Qwen 3 includes two MoE models and six dense variants, all licensed under Apache-2.0 and downloadable from platforms like Hugging Face, ModelScope, and Kaggle. Deployment guidance points to vLLM and SGLang for servers and to Ollama or llama.cpp for local setups, signaling support for both cloud and edge developers. Community feedback has been positive, with analysts noting that earlier Qwen announcements briefly lifted Alibaba shares, underscoring the strategic weight the company places on open models.
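
For the local route, a minimal llama.cpp sketch via the llama-cpp-python bindings could look like this. The GGUF path is a hypothetical local file (quantized Qwen3 weights converted for llama.cpp), not an official artifact, and the generation settings are illustrative.

```python
# Minimal local-inference sketch using llama-cpp-python, one option for the
# "local setups" mentioned above. The model_path is a hypothetical local file.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/qwen3-8b-q4_k_m.gguf",  # hypothetical quantized checkpoint
    n_ctx=8192,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Mixture-of-Experts in one paragraph."}],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```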



References :
  • Gradient Flow: Qwen 3: What You Need to Know
  • AI News | VentureBeat: Alibaba launches open source Qwen3 model that surpasses OpenAI o1 and DeepSeek R1
  • TestingCatalog: Alibaba Cloud debuts 235B-parameter Qwen 3 to challenge US model dominance
  • MarkTechPost: Alibaba Qwen Team Just Released Qwen3
  • Analytics Vidhya: Qwen3 Models: How to Access, Performance, Features, and Applications
  • THE DECODER: Qwen3 series from Alibaba debuts with benchmark results matching top competitors
  • www.tomsguide.com: Alibaba is launching its own AI reasoning models to compete with DeepSeek
  • pub.towardsai.net: TAI #150: Qwen3 Impresses as a Robust Open-Source Contender
  • Pandaily: The Mind Behind Qwen3: An Inclusive Interview with Alibaba's Zhou Jingren
  • bdtechtalks.com: Alibaba's Qwen3 open-weight LLMs combine direct response and chain-of-thought reasoning in a single architecture, and compete with leading models.
  • RunPod Blog: Qwen3 Released: How Does It Stack Up?
  • www.computerworld.com: The Qwen3 models, which feature a new hybrid reasoning approach, underscore Alibaba's commitment to open-source AI development.
  • Last Week in AI: OpenAI undoes its glaze-heavy ChatGPT update; Alibaba unveils Qwen 3, a family of ‘hybrid’ AI reasoning models; Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost
Classification:
  • HashTags: #LLM #opensource #Qwen3
  • Company: Alibaba Cloud
  • Target: US AI Models
  • Product: Qwen 3
  • Feature: thinking mode
  • Type: AI
  • Severity: Informative