Phi-4
Version: 8
Microsoft · Last updated June 2025
Phi-4 14B, a highly capable model for low-latency scenarios.
Capabilities: Reasoning · Understanding · Low latency
Phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public-domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning. Phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. For more information, see the Phi-4 Technical Report.

Model Architecture

Phi-4 is a 14B parameters, dense decoder-only transformer model.

Training Data

Our training data is an extension of the data used for Phi-3 and includes a wide variety of sources from:
  1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code.
  2. Newly created synthetic data focused on advanced reasoning.
  3. Acquired academic books and Q&A datasets.

Intended Use

Primary Use Cases

Our model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:
  1. Memory/compute constrained environments.
  2. Latency bound scenarios.
  3. Reasoning and logic.

Out-of-Scope Use Cases

Our model is not specifically designed or evaluated for all downstream purposes; therefore:
  1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
  2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.
  3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

Benchmarks

We evaluated phi-4 using OpenAI's SimpleEval and our own internal benchmarks to understand the model's capabilities; more specifically:
  • MMLU: Popular aggregated dataset for multitask language understanding.
  • MATH: Challenging competition math problems.
  • GPQA: Complex, graduate-level science questions.
  • DROP: Complex comprehension and reasoning.
  • MGSM: Multi-lingual grade-school math.
  • HumanEval: Functional code generation.
  • SimpleQA: Factual responses.
To understand these capabilities, we compare phi-4 with a set of models on OpenAI's SimpleEval benchmark. The table below gives a high-level overview of model quality on representative benchmarks; higher numbers indicate better performance:

| Category | Benchmark | phi-4 (14B) | phi-3 (14B) | Qwen 2.5 (14B instruct) | GPT-4o-mini | Llama-3.3 (70B instruct) | Qwen 2.5 (72B instruct) | GPT

Model Specifications

Context Length: 16,384 tokens
License: MIT
Training Data: June 2024
Last Updated: June 2025
Input Type: Text
Output Type: Text
Provider: Microsoft
Languages: 45 languages
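
Phi-4 is post-trained for chat-style instruction following. As a minimal sketch of how a prompt can be assembled, the ChatML-style layout described in the Phi-4 Technical Report uses `<|im_start|>`, `<|im_sep|>`, and `<|im_end|>` special tokens; the helper below is hypothetical, not part of any official API, and in practice a tokenizer's built-in chat template would be preferred:

```python
# Sketch of Phi-4's ChatML-style prompt layout, assuming the special
# tokens <|im_start|>, <|im_sep|>, <|im_end|> from the technical report.
# format_phi4_prompt is a hypothetical helper for illustration only.

def format_phi4_prompt(messages):
    """Render a list of {role, content} dicts into a single prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}<|im_sep|>{m['content']}<|im_end|>")
    # End with an open assistant turn so the model generates the reply.
    parts.append("<|im_start|>assistant<|im_sep|>")
    return "".join(parts)

prompt = format_phi4_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2 + 2?"},
])
print(prompt)
```

The resulting string would then be tokenized and passed to the model; generation stops when the model emits `<|im_end|>`.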