Phi-4-Reasoning
Version: 1
Microsoft · Last updated April 2025
State-of-the-art open-weight reasoning model.
Capabilities: Reasoning · Large context · Low latency
Phi-4-Reasoning is a state-of-the-art open-weight reasoning model fine-tuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces, together with reinforcement learning. The supervised fine-tuning dataset blends synthetic prompts with high-quality, filtered data from public-domain websites, focused on math, science, and coding skills, as well as alignment data for safety and Responsible AI. The goal of this approach was to ensure that a small, capable model was trained on data focused on high quality and advanced reasoning.
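For developers who want to try the model, here is a minimal inference sketch using the Hugging Face transformers library. The microsoft/Phi-4-reasoning model identifier, the sampling settings, and the <think> tag handling are assumptions based on common conventions for reasoning models; consult the official repository for the recommended configuration.

```python
# Minimal sketch: load the model and run one reasoning query.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning"  # assumed Hugging Face identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user",
             "content": "How many positive integers n < 100 make n^2 + n divisible by 6?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings here are illustrative, not the officially recommended ones.
outputs = model.generate(inputs, max_new_tokens=4096, do_sample=True,
                         temperature=0.8, top_p=0.95)
completion = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Reasoning models typically emit a chain of thought before the final answer;
# if it is wrapped in <think>...</think> tags (an assumption here), the final
# answer is the text after the closing tag.
answer = completion.split("</think>")[-1].strip()
print(answer)
```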

Intended Use

Primary Use Cases

Our model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is suited to general-purpose AI systems and applications (primarily in English) that require:

1. Memory/compute constrained environments.
2. Latency bound scenarios.
3. Reasoning and logic.

Out-of-Scope Use Cases

This model is designed and tested for math reasoning only. Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English. Review the Responsible AI Considerations section below for further guidance when choosing a use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

Data Overview

Training Datasets

Our training data is a mixture of Q&A and chat-format data covering math, science, and coding. The chat prompts are sourced from filtered, high-quality web data and optionally rewritten and processed through a synthetic data generation pipeline. We further include data to improve truthfulness and safety.

Responsible AI Considerations

Like other language models, Phi-4-Reasoning can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
  • Quality of Service: The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. Phi-4-Reasoning is not intended to support multilingual use.
  • Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
  • Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
  • Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
  • Election Information Reliability: The model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region.
  • Limited Scope for Code: Majority of Phi-4-Reasoning training data is based in Python and uses common packages such as typing, math, random, collections, datetime, itertools. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like Azure AI Content Safety that have advanced guardrails is highly recommended. Important areas for consideration include:
  • Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
  • High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
  • Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG); a minimal example is sketched after this list.
  • Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
  • Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
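To illustrate the RAG pattern mentioned above, here is a minimal sketch. The embed function is a hypothetical stand-in for a real embedding model, and grounded_prompt simply shows the retrieve-then-ground structure; it is not a production recipe.

```python
# Toy RAG pipeline: retrieve the most relevant documents for a query, then
# build a prompt that grounds the model's answer in that retrieved context.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding: deterministic random vector per text.
    # Replace with a real embedding model in practice.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed(query)
    return sorted(documents, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def grounded_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so answers come from it, not model memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```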
Evaluation

We evaluated Phi-4-Reasoning using the open-source Eureka evaluation suite and our own internal benchmarks to understand the model's capabilities. More specifically, we evaluate the model on the following.

Reasoning tasks:
  • AIME 2025, 2024, 2023, and 2022: Math olympiad questions.
  • GPQA-Diamond: Complex, graduate-level science questions.
  • OmniMath: Collection of over 4000 olympiad-level math problems with human annotation.
  • LiveCodeBench: Code generation benchmark gathered from competitive coding contests.
  • 3SAT (3-Literal Satisfiability Problem) and TSP (Traveling Salesman Problem): Algorithmic problem solving; a toy 3SAT instance is sketched after this list.
  • BA Calendar: Planning.
  • Maze and SpatialMap: Spatial understanding.
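To make the algorithmic tasks concrete, here is a toy brute-force solver for the kind of instance a 3SAT benchmark poses. The signed-integer clause encoding (as in DIMACS format) is an illustrative choice, not necessarily the benchmark's input format.

```python
# Brute-force 3SAT: each clause is three signed literals; literal l is
# satisfied when variable |l| has truth value (l > 0).
from itertools import product

def solve_3sat(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits  # a satisfying assignment
    return None  # unsatisfiable

# (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR NOT x3)
print(solve_3sat([(1, -2, 3), (-1, 2, -3)], n_vars=3))
```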
General-purpose benchmarks:
  • Kitab: Information retrieval.
  • IFEval and ArenaHard: Instruction following.
  • PhiBench: Internal benchmark.
  • FlenQA: Impact of prompt length on model performance.
  • HumanEvalPlus: Functional code generation.
  • MMLU-Pro: Popular aggregated dataset for multitask language understanding.
Below is a high-level overview of model quality on representative benchmarks. For the tables below, higher numbers indicate better performance:
| Model | AIME 24 | AIME 25 | OmniMath | GPQA-D | LiveCodeBench (8/1/24–2/1/25) |
| --- | --- | --- | --- | --- | --- |
| Phi-4-reasoning | 75.3 | 62.9 | 76.6 | 65.8 | 53.8 |
| OpenThinker2-32B | 58.0 | 58.0 | | 64.1 | |
| QwQ 32B | 79.5 | 65.8 | | 59.5 | 63.4 |
| EXAONE-Deep-32B | 72.1 | 65.8 | | 66.1 | 59.5 |
| DeepSeek-R1-Distill-70B | 69.3 | 51.5 | 63.4 | 66.2 | 57.5 |
| DeepSeek-R1 | 78.7 | 70.4 | 85.0 | 73.0 | 62.8 |
| o1-mini | 63.6 | 54.8 | | 60.0 | 53.8 |
| o1 | 74.6 | 75.3 | 67.5 | 76.7 | 71.0 |
| o3-mini | 88.0 | 78.0 | 74.6 | 77.7 | 69.5 |
| Claude-3.7-Sonnet | 55.3 | 58.7 | 54.6 | 76.8 | |
| Gemini-2.5-Pro | 92.0 | 86.7 | 61.1 | 84.0 | 69.2 |
| Benchmark | Phi-4 | Phi-4-reasoning | o3-mini | GPT-4o |
| --- | --- | --- | --- | --- |
| FlenQA [3K-token subset] | 82.0 | 97.7 | 96.8 | 90.8 |
| IFEval Strict | 62.3 | 83.4 | 91.5 | 81.8 |
| ArenaHard | 68.1 | 73.3 | 81.9 | 75.6 |
| HumanEvalPlus | 83.5 | 92.9 | 94.0 | 88.0 |
| MMLU-Pro | 71.5 | 74.3 | 79.4 | 73.0 |
| Kitab, No Context: Precision | 19.3 | 23.2 | 37.9 | 53.7 |
| Kitab, With Context: Precision | 88.5 | 91.5 | 94.0 | 84.7 |
| Kitab, No Context: Recall | 8.2 | 4.9 | 4.2 | 20.3 |
| Kitab, With Context: Recall | 68.1 | 74.8 | 76.1 | 69.2 |
| Toxigen Discriminative, Toxic category | 72.6 | 86.7 | 85.4 | 87.6 |
| Toxigen Discriminative, Neutral category | 90.0 | 84.7 | 88.7 | 85.1 |
| PhiBench 2.21 | 58.2 | 70.6 | 78.0 | 72.4 |
Overall, Phi-4-Reasoning, with only 14B parameters, performs well across a wide range of reasoning tasks, significantly outperforming much larger open-weight models such as the DeepSeek-R1-Distill-70B model and approaching the performance level of the full DeepSeek-R1 model. We also test the models on multiple new reasoning benchmarks for algorithmic problem solving and planning, including 3SAT, TSP, and BA-Calendar. These tasks are nominally out-of-domain for the models, since the training process did not intentionally target these skills, but the models still generalize strongly to them. Furthermore, when evaluating performance on standard general-ability benchmarks such as instruction following and non-reasoning tasks, we find that our new models improve significantly over Phi-4, despite the post-training being focused on reasoning skills in specific domains.
Model Specifications
Context Length: 32,768 tokens
Quality Index: 0.76
License: MIT
Training Data: April 2025
Last Updated: April 2025
Input Type: Text
Output Type: Text
Publisher: Microsoft
Languages: 1 (English)