Top Mathematics discussions

NishMath

Matthias Bastian @ THE DECODER
Microsoft has launched three new additions to its Phi series of compact language models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. These models are designed to excel at complex reasoning tasks, including mathematical problem-solving, algorithmic planning, and coding, demonstrating that smaller AI models can rival much larger systems. They are optimized to work through complex problems via structured reasoning and internal reflection, while remaining efficient enough to run on lower-end hardware, including mobile devices, which makes advanced AI accessible on resource-limited platforms.

Phi-4-reasoning, a 14-billion-parameter model, was trained using supervised fine-tuning on reasoning traces from OpenAI's o3-mini. Phi-4-reasoning-plus builds on this with reinforcement learning and processes more tokens at inference time, yielding higher accuracy, albeit at greater computational cost. Notably, these models outperform larger systems such as the 70B-parameter DeepSeek-R1-Distill-Llama, and even surpass the 671-billion-parameter DeepSeek-R1 on the AIME-2025 benchmark, a qualifier for the U.S. Mathematical Olympiad, highlighting the effectiveness of Microsoft's approach to efficient, high-performing AI.

The Phi-4 reasoning models show strong results in programming, algorithmic problem-solving, and planning tasks, with gains in logical reasoning carrying over to general capabilities such as instruction following and answering questions about long-form content. Microsoft employed a data-centric training strategy, using structured reasoning outputs marked with special tokens to delineate the model's intermediate reasoning steps. The open-weight models have been released with transparent training details and are hosted on Hugging Face, allowing public access, fine-tuning, and use in a wide range of applications under a permissive MIT license.
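The special-token convention described above means a response contains a reasoning trace followed by a final answer. A minimal sketch of how a consumer might separate the two is shown below; the `<think>`/`</think>` delimiters and the `split_reasoning` helper are illustrative assumptions, not necessarily the exact format the Phi-4 models emit.

```python
# Hedged sketch: split a model response into its intermediate reasoning
# trace and its final answer, assuming the reasoning is wrapped in
# <think> ... </think> special tokens (an assumption for illustration).

def split_reasoning(output: str,
                    open_tok: str = "<think>",
                    close_tok: str = "</think>") -> tuple[str, str]:
    """Return (reasoning, answer) extracted from a model response."""
    start = output.find(open_tok)
    end = output.find(close_tok)
    if start == -1 or end == -1 or end < start:
        # No well-formed reasoning block: treat everything as the answer.
        return "", output.strip()
    reasoning = output[start + len(open_tok):end].strip()
    answer = output[end + len(close_tok):].strip()
    return reasoning, answer

# Example usage on a mock response:
resp = "<think>15 * 4 = 60, then 60 + 2 = 62.</think>The answer is 62."
reasoning, answer = split_reasoning(resp)
print(answer)  # The answer is 62.
```

Hiding the trace by default while logging it separately is one common way applications handle this kind of structured output.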



References:
  • THE DECODER: Microsoft is expanding its Phi series of compact language models with three new variants designed for advanced reasoning tasks.
  • Ken Yeung: Microsoft’s New Phi-4 Variants Show Just How Far Small AI Can Go
  • AI News | VentureBeat: Microsoft Research has announced the release of Phi-4-reasoning-plus, an open-weight language model built for tasks requiring deep, structured reasoning.
  • Analytics Vidhya: Microsoft isn’t like OpenAI, Google, and Meta; especially not when it comes to large language models.
  • MarkTechPost: Despite notable advancements in large language models (LLMs), effective performance on reasoning-intensive tasks—such as mathematical problem solving, algorithmic planning, or coding—remains constrained by model size, training methodology, and inference-time capabilities.
  • the-decoder.com: Microsoft's Phi-4-reasoning models outperform larger models and run on your laptop or phone
  • www.tomsguide.com: Microsoft just unveiled new Phi-4 reasoning AI models — here's why they're a big deal
  • www.windowscentral.com: Microsoft just launched expanded small language models (SLMs) based on its own Phi-4 AI.
  • simonwillison.net: This article discusses Microsoft's phi4-reasoning model, which generates 56 sentences of reasoning output in response to a simple prompt.