Microsoft introduces new Phi-4-reasoning, Phi-4-reasoning-plus and Phi-4-mini-reasoning models

Microsoft has introduced a new family of small language models built for reasoning: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.
One year ago, Microsoft introduced small language models with the release of Phi-3 on Azure AI Foundry.
Now, we're introducing Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning—pushing what's possible with AI. Read the blog: https://t.co/Mnt1dPK14V
— Microsoft Azure (@Azure) May 1, 2025
Microsoft’s new Phi-4 models
Phi-4-reasoning
It is a 14-billion-parameter open-weight reasoning model that rivals much larger models on complex reasoning tasks. Trained via supervised fine-tuning of Phi-4 on carefully curated reasoning demonstrations from OpenAI's o3-mini, Phi-4-reasoning generates detailed reasoning chains that make effective use of additional inference-time compute.
Phi-4-reasoning-plus
This model builds on Phi-4-reasoning and is further trained with reinforcement learning to use more inference-time compute (roughly 1.5x the tokens of Phi-4-reasoning) to deliver higher accuracy.
Microsoft reports that both models outperform OpenAI o1-mini and DeepSeek-R1-Distill-Llama-70B on most benchmarks, including mathematical reasoning and Ph.D.-level science questions. They mark a major improvement over Phi-4, surpassing larger models such as DeepSeek-R1-Distill-Llama-70B and approaching DeepSeek-R1 across a range of reasoning and general capabilities, including math, coding, algorithmic problem solving, and planning.
Both models are available on Azure AI Foundry and Hugging Face.
Phi-4-mini-reasoning
It is designed to meet the demand for a compact reasoning model. This transformer-based language model is optimized for mathematical reasoning, providing high-quality, step-by-step problem solving in environments with constrained compute or latency. Fine-tuned on synthetic data generated by the DeepSeek-R1 model, Phi-4-mini-reasoning balances efficiency with advanced reasoning ability.
It is ideal for educational applications, embedded tutoring, and lightweight deployment on edge or mobile systems, and it was trained on over one million diverse math problems spanning difficulty levels from middle school to Ph.D. level. It is available on Azure AI Foundry and Hugging Face.
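Since the models are published on Hugging Face, one typical way to try them locally is through the transformers library. The sketch below is illustrative only: the model ID, chat-template usage, and token budget are assumptions not confirmed by this article, and running the `solve` function requires downloading the model weights.

```python
def build_messages(problem: str) -> list[dict]:
    """Wrap a math problem in the chat-message format transformers expects."""
    return [{"role": "user", "content": problem}]


def solve(problem: str, model_id: str = "microsoft/Phi-4-mini-reasoning") -> str:
    """Generate a step-by-step answer with a local copy of the model.

    The model ID above is an assumption for illustration.
    """
    # Imported lazily so build_messages stays usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_messages(problem), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    # Reasoning models spend extra tokens "thinking", so allow a generous budget.
    output = model.generate(inputs, max_new_tokens=1024)
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

For latency-constrained edge deployments, the same chat-message format would typically be paired with a quantized build of the model rather than the full-precision weights loaded here.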
The Phi-4-reasoning and Phi-4-mini-reasoning models leverage the low-bit optimizations behind Phi Silica and will soon be available to run on Copilot+ PC NPUs.