@felloai.com
//
Alibaba has launched Qwen3, a new family of large language models (LLMs) that poses a significant challenge to Silicon Valley's AI dominance. Qwen3 is not an incremental update but a leap forward, demonstrating capabilities that rival leading models from OpenAI, Google, and Meta. Its strengths in reasoning, coding, and multilingual understanding signal China's growing prowess in AI and its potential to reshape the global tech landscape.
The Qwen3 family includes models of varying sizes to suit diverse applications. Key features include complex reasoning, mathematical problem-solving, and code generation. The models support 119 languages and were trained on a dataset of over 36 trillion tokens. Another innovation is Qwen3's "hybrid reasoning" approach, which lets a model switch between "fast thinking" for quick responses and "slow thinking" for deeper analysis, improving both versatility and efficiency. Alibaba has also emphasized the open-source nature of some Qwen3 models, encouraging wider adoption and collaborative development across China's AI ecosystem.

Alongside Qwen3, Alibaba introduced ZeroSearch, a training method that uses reinforcement learning and simulated documents to teach LLMs how to retrieve information without calling a real-time search engine. It addresses the problem of LLMs relying on static training data, which can become outdated. By training models to retrieve and incorporate external information, ZeroSearch aims to improve the reliability of LLMs in real-world applications such as news, research, and product reviews, while avoiding the high cost of large-scale interaction with live search APIs, which makes the approach more accessible for academic research and commercial deployment.
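To make the ZeroSearch idea concrete, here is a heavily simplified Python sketch of the substitution it describes: during training rollouts, a simulator model stands in for a live search API and generates documents for the policy model to condition on, and the reward is computed from answer quality alone. The function names (simulate_documents, answer_with_context), the prompt formats, and the exact-match reward are illustrative assumptions, not Alibaba's actual implementation.

```python
# Illustrative sketch of the ZeroSearch idea (not Alibaba's code):
# replace calls to a live search API with documents generated by a
# simulator LLM, then reward the policy model on answer correctness.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RolloutResult:
    question: str
    documents: List[str]
    answer: str
    reward: float


def zerosearch_rollout(
    question: str,
    gold_answer: str,
    simulate_documents: Callable[[str], List[str]],     # simulator LLM standing in for a search engine
    answer_with_context: Callable[[str, List[str]], str],  # policy LLM being trained
) -> RolloutResult:
    """One simulated retrieval-augmented rollout."""
    # 1. Instead of querying a real search engine, ask the simulator
    #    to produce plausible (possibly noisy) documents for the query.
    documents = simulate_documents(question)

    # 2. The policy model answers conditioned on the simulated documents.
    answer = answer_with_context(question, documents)

    # 3. Reward is based only on answer quality; exact match here,
    #    whereas a real setup would use a richer scoring function.
    reward = 1.0 if gold_answer.lower() in answer.lower() else 0.0
    return RolloutResult(question, documents, answer, reward)


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any model weights.
    fake_simulator = lambda q: [f"Document about: {q}. Qwen3 was released by Alibaba."]
    fake_policy = lambda q, docs: "Alibaba released Qwen3." if docs else "I don't know."

    result = zerosearch_rollout("Who released Qwen3?", "Alibaba", fake_simulator, fake_policy)
    print(result.reward)  # 1.0 -> this rollout would receive a positive RL update
```

Because the documents come from a simulator rather than a paid search API, rollouts of this kind can be generated at scale cheaply, which is the cost advantage the method claims.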
Alexey Shabanov@TestingCatalog
//
Alibaba's Qwen team has launched Qwen3, a new family of open-source large language models (LLMs) designed to compete with leading AI systems. The Qwen3 series includes eight models ranging from 0.6B to 235B parameters, with the larger models employing a Mixture-of-Experts (MoE) architecture for enhanced performance. This comprehensive suite offers options for developers with varied computational resources and application requirements. All the models are released under the Apache 2.0 license, making them suitable for commercial use.
The Qwen3 models boast improved agentic capabilities for tool use and support for 119 languages. They also feature a "hybrid thinking mode" that lets users dynamically adjust the balance between deep reasoning and faster responses, which is particularly valuable for developers because computational resources can be spent in proportion to task complexity. Training used a dataset of 36 trillion tokens and was optimized for reasoning, similar to the DeepSeek R1 model. Benchmarks indicate that Qwen3 rivals top competitors like DeepSeek R1 and Gemini Pro in areas such as coding, mathematics, and general knowledge. Notably, the smaller Qwen3-30B-A3B MoE model achieves performance comparable to the Qwen3-32B dense model while activating significantly fewer parameters. The models are available on platforms like Hugging Face, ModelScope, and Kaggle, with support for deployment through frameworks like SGLang and vLLM and local execution via tools like Ollama and llama.cpp.
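As a rough illustration of how the hybrid thinking mode is typically exposed, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name and the enable_thinking flag are assumptions based on the published Qwen3 model cards, so verify them against the card of the model you actually download.

```python
# Minimal sketch: loading a small Qwen3 checkpoint and toggling the
# hybrid "thinking" mode. Model ID and the enable_thinking flag are
# assumptions taken from the Qwen3 model cards; check the card you use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-0.6B"  # assumed smallest checkpoint in the family
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23?"}]

# enable_thinking=True  -> "slow thinking": the model emits a reasoning trace before answering.
# enable_thinking=False -> "fast thinking": a direct answer with no reasoning trace.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Toggling the flag per request is what lets a single deployment serve both cheap, latency-sensitive queries and harder reasoning tasks.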
Alexey Shabanov@TestingCatalog
//
Alibaba Cloud has unveiled Qwen 3, a new generation of large language models (LLMs) headlined by a 235-billion-parameter flagship and poised to challenge the dominance of US-based models. This open-weight family includes both dense and Mixture-of-Experts (MoE) architectures, giving developers a range of choices to suit their application needs and hardware constraints. The flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, and general knowledge, positioning it as one of the most powerful publicly available models.
Qwen 3 introduces a "thinking mode" that can be toggled between step-by-step reasoning and rapid direct answers. This hybrid reasoning approach, similar to OpenAI's "o" series, allows users to engage a more intensive process for complex queries in fields like science, math, and engineering. The models are trained on a massive dataset of 36 trillion tokens spanning 119 languages, twice the corpus of Qwen 2.5 and enriched with synthetic math and code data. This extensive training equips Qwen 3 with enhanced reasoning, multilingual proficiency, and computational efficiency.

The release includes two MoE models and six dense variants, all licensed under Apache-2.0 and downloadable from platforms like Hugging Face, ModelScope, and Kaggle. Deployment guidance points to vLLM and SGLang for servers and to Ollama or llama.cpp for local setups, signaling support for both cloud and edge developers. Community feedback has been positive, with analysts noting that earlier Qwen announcements briefly lifted Alibaba shares, underscoring the strategic weight the company places on open models.
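For the server-side deployment path, a common pattern is to serve the model behind an OpenAI-compatible endpoint (both vLLM and Ollama expose one) and query it from any OpenAI client. The sketch below assumes a locally running server; the base URL, port, and model name are placeholders to adjust to your own setup, not values taken from the announcement.

```python
# Minimal sketch: querying a locally served Qwen 3 model through an
# OpenAI-compatible endpoint, as exposed by vLLM or Ollama. The base
# URL, port, and model name are assumptions; match them to however
# you actually launched the server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed vLLM default; Ollama typically listens on 11434
    api_key="not-needed-locally",
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",  # assumed model ID; use a smaller dense variant on modest hardware
    messages=[
        {"role": "user", "content": "Explain step by step why 0.1 + 0.2 != 0.3 in floating point."}
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```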