OpenAI o4-mini
Version: 2025-04-16
Direct from Azure models
Direct from Azure models are a select portfolio curated for their market-differentiated capabilities:
- Secure and managed by Microsoft: Purchase and manage models directly through Azure with a single license, consistent support, and no third-party dependencies, backed by Azure's enterprise-grade infrastructure.
- Streamlined operations: Benefit from unified billing, governance, and seamless PTU portability across models hosted on Azure - all part of Microsoft Foundry.
- Future-ready flexibility: Access the latest models as they become available, and easily test, deploy, or switch between them within Microsoft Foundry, reducing integration effort.
- Cost control and optimization: Scale on demand with pay-as-you-go flexibility or reserve PTUs for predictable performance and savings.
Key capabilities
About this model
o4-mini is the most efficient reasoning model in the o-series and is well suited for agentic solutions.
Key model capabilities
- Multiple API support: o4-mini is available through both the Responses API and the Chat Completions API. The Responses API additionally offers seamless integration with multiple tools and greater transparency via the reasoning summary included in the model output.
- Reasoning summary: In the Responses API, o4-mini can include a reasoning summary in its output, providing insight into its thinking process. This improves explainability, and downstream actions and tools can use these insights for better outcomes.
- Multimodality: With vision input capabilities, o4-mini extends its reasoning to visual data, analyzing images to extract valuable insights and generate comprehensive text outputs. This is supported in both the Responses API and the Chat Completions API.
- Full tool support, including parallel tool calling: o4-mini is the first efficient reasoning model with tool support on par with the mainline models, including parallel tool calling, making it a strong fit for the next generation of agentic solutions. This is supported in both the Responses API and the Chat Completions API.
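As a sketch of how these capabilities combine, the payload below assembles a Responses-API-style request with an image input and a function tool the model may call in parallel. The deployment name, tool name, and schema are illustrative assumptions, not values documented on this page; no request is actually sent.

```python
import json

# Illustrative only: "o4-mini" as deployment name and "lookup_part" as a
# tool are assumptions for this sketch, not values from this page.
DEPLOYMENT = "o4-mini"

def build_request(question: str, image_url: str) -> dict:
    """Assemble a Responses-API-style payload combining a multimodal
    (text + image) input with a function tool the model may call."""
    return {
        "model": DEPLOYMENT,
        "input": [
            {
                "role": "user",
                "content": [
                    {"type": "input_text", "text": question},
                    {"type": "input_image", "image_url": image_url},
                ],
            }
        ],
        "tools": [
            {
                "type": "function",
                "name": "lookup_part",  # hypothetical tool
                "description": "Look up a part number seen in the image.",
                "parameters": {
                    "type": "object",
                    "properties": {"part_number": {"type": "string"}},
                    "required": ["part_number"],
                },
            }
        ],
    }

payload = build_request("What part is shown here?", "https://example.com/part.png")
print(json.dumps(payload, indent=2))
```

The same text-plus-image content and tool list can be adapted to the Chat Completions API, which uses a `messages` array instead of `input`.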
Use cases
See Responsible AI for additional considerations for responsible use.
Key use cases
The provider has not supplied this information.
Out-of-scope use cases
The provider has not supplied this information.
Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See the Azure pricing page for details.
Technical specs
The provider has not supplied this information.
Training cut-off date
The provider has not supplied this information.
Training time
The provider has not supplied this information.
Input formats
o4-mini accepts text and image inputs; with vision input capabilities, it can reason over visual data to extract insights and generate comprehensive text outputs.
Output formats
o4-mini produces text output. In the Responses API, the output can also include a reasoning summary, providing insight into the model's thinking process and improving the explainability of downstream actions and tools.
Supported languages
The provider has not supplied this information.
Sample JSON response
The provider has not supplied this information.
Model architecture
The provider has not supplied this information.
Long context
The provider has not supplied this information.
Optimizing model performance
The provider has not supplied this information.
Additional assets
The following documents are applicable:
- Overview of Responsible AI practices for Azure OpenAI models
- Transparency Note for Azure OpenAI Service
Training disclosure
Training, testing and validation
The provider has not supplied this information.
Distribution
Distribution channels
This model is provided through the Azure OpenAI Service.
More information
OpenAI has incorporated additional safety measures into the o1 models, including new techniques to help the models refuse unsafe requests. These advancements make the o1 series some of the most robust models available. OpenAI measures safety by testing how well models continue to follow its safety rules when a user tries to bypass them (known as "jailbreaking"). In OpenAI's internal tests, GPT-4o scored 22 (on a scale of 0-100) while the o1-preview model scored 84. You can read more in OpenAI's system card and research post.
Responsible AI considerations
Safety techniques
OpenAI has incorporated additional safety measures into the o1 models, including new techniques to help the models refuse unsafe requests. These advancements make the o1 series some of the most robust models available.
Safety evaluations
OpenAI measures safety by testing how well models continue to follow its safety rules when a user tries to bypass them (known as "jailbreaking"). In OpenAI's internal tests, GPT-4o scored 22 (on a scale of 0-100) while the o1-preview model scored 84. You can read more in OpenAI's system card and research post.
Known limitations
The provider has not supplied this information.
Acceptable use
Acceptable use policy
The following documents are applicable:
Quality and performance evaluations
Source: OpenAI
The provider has not supplied this information.
Benchmarking methodology
Source: OpenAI
OpenAI measures safety by testing how well models continue to follow its safety rules when a user tries to bypass them (known as "jailbreaking"). In OpenAI's internal tests, GPT-4o scored 22 (on a scale of 0-100) while the o1-preview model scored 84. You can read more in OpenAI's system card and research post.
Public data summary
Source: OpenAI
The provider has not supplied this information.
Model Specifications
Context length: 200,000 tokens
Quality index: 0.89
License: Custom
Training data: May 2024
Last updated: January 2026
Input type: Text, Image
Output type: Text
Provider: OpenAI
Languages: 1 language
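The 200,000-token context length above can be guarded against client-side before sending a request. The sketch below uses the rough four-characters-per-token heuristic as an assumption; it is an approximation, not an official tokenizer, and a real tokenizer should be used for exact counts.

```python
CONTEXT_LENGTH = 200_000  # tokens, from the specifications above

def fits_in_context(prompt: str, max_output_tokens: int = 4_096) -> bool:
    """Rough client-side check that a prompt plus its reserved output
    budget fits in the context window, using ~4 chars/token as an
    approximation (use a real tokenizer for exact counts)."""
    estimated_input_tokens = len(prompt) // 4 + 1
    return estimated_input_tokens + max_output_tokens <= CONTEXT_LENGTH

print(fits_in_context("Summarize this short note."))
```

Reserving an explicit output budget up front avoids requests that are accepted but leave too few tokens for the model's reasoning and final answer.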
Related Models