News from the AI & ML world

DeeperML

Carl Franzen, AI News | VentureBeat
Microsoft has recently launched its Phi-4 reasoning models, marking a significant stride in the realm of small language models (SLMs). This expansion of the Phi series includes three new variants: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning, designed to excel at advanced reasoning tasks such as mathematics and coding. The new models tackle complex problems through structured reasoning and internal reflection, while remaining lightweight enough to run on lower-end hardware, including mobile devices. A rough sketch of what running such a model locally could look like follows below.
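As an illustration of "lightweight enough to run on lower-end hardware," here is a minimal, hypothetical sketch of loading the smallest variant with the Hugging Face transformers library. The repo id microsoft/Phi-4-mini-reasoning, the chat template, and the generation settings are assumptions for illustration, not details confirmed in the article.

```python
# Hypothetical sketch: running a Phi-4 reasoning SLM locally with Hugging Face
# transformers. The repo id and prompt format below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-reasoning"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on modest GPUs
    device_map="auto",
)

messages = [
    {"role": "user", "content": "If 3x + 7 = 22, what is x? Show your reasoning."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Reasoning models emit an internal "thinking" trace before the final answer,
# so allow a generous token budget.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```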

Microsoft asserts that these models demonstrate that smaller AI can achieve impressive results, rivaling much larger models while operating efficiently on devices with limited resources. CEO Satya Nadella says Microsoft's AI model performance is "doubling every 6 months" thanks to gains in pre-training, inference, and system design. The Phi-4-reasoning model contains 14 billion parameters and was trained via supervised fine-tuning on reasoning traces from OpenAI's o3-mini. A more advanced version, Phi-4-reasoning-plus, adds reinforcement learning and generates roughly 1.5 times more tokens than the base model.
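To make that training recipe more concrete, the sketch below shows what supervised fine-tuning a base model on teacher-generated reasoning traces generally looks like, using the TRL library. The dataset file, its column names, and the microsoft/phi-4 checkpoint id are illustrative assumptions, not Microsoft's actual pipeline.

```python
# Hypothetical sketch: supervised fine-tuning on teacher-generated reasoning
# traces (the general technique described in the article). Dataset name and
# columns are invented for illustration; this is not Microsoft's pipeline.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Assume a JSONL file where each row holds a problem plus a stronger model's
# step-by-step solution (a "reasoning trace").
dataset = load_dataset("json", data_files="reasoning_traces.jsonl", split="train")

def to_text(example):
    # Fold the prompt and the teacher's reasoning trace into one training string.
    return {"text": f"Problem: {example['problem']}\n\nSolution:\n{example['teacher_trace']}"}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model="microsoft/phi-4",  # assumed base checkpoint id, for illustration only
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="phi4-reasoning-sft",
        dataset_text_field="text",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        num_train_epochs=1,
    ),
)
trainer.train()
```

The plus variant's extra reinforcement-learning stage and longer outputs would come after a supervised phase like this one.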

These new models leverage distillation, reinforcement learning, and high-quality training data to achieve their performance. In a demonstration, the Phi-4-reasoning model correctly solved a wordplay riddle by recognizing the underlying pattern and applying step-by-step logical reasoning. Despite having just 14 billion parameters, the Phi-4 reasoning models match or outperform significantly larger systems, including the 70B-parameter DeepSeek-R1-Distill-Llama. On the AIME-2025 benchmark, the Phi models also surpass DeepSeek-R1, which has 671 billion parameters.



References :
  • Ken Yeung: Microsoft is doubling down on small language models with new Phi-4 variants that aim to prove a bold idea: small AI can think big.
  • www.windowscentral.com: Microsoft just launched expanded small language models (SLMs) based on its own Phi-4 AI.
  • THE DECODER: Microsoft is expanding its Phi series of compact language models with three new variants designed for advanced reasoning tasks.
  • the-decoder.com: Microsoft's Phi 4 responds to a simple "Hi" with 56 thoughts
  • Data Phoenix: Microsoft launches Phi-4 'reasoning' models to celebrate Phi-3's first anniversary