OpenAI gpt-oss-120b
Version: 1
Fireworks on Foundry
Models available for use with Fireworks on Foundry deliver optimized, best-in-class performance on the Fireworks Inference Cloud. Fireworks on Foundry is a Preview subject to Azure Preview terms and the following supplemental terms: when you use Fireworks on Foundry, data is shared between Microsoft and Fireworks AI, and different compliance and data handling rules will apply. See documentation for details. Customers are responsible for evaluating whether data sharing between Microsoft and Fireworks is appropriate for their organization's compliance requirements.
Key capabilities
About this model
Welcome to the gpt-oss series, OpenAI's open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases. gpt-oss-120b targets production-grade, general-purpose, high-reasoning use cases and fits on a single H100 GPU.
Key model capabilities
- Complex reasoning and structured problem-solving (especially with chain-of-thought)
- Agentic workflows (tool use, web browsing, function calling)
- Production-grade general-purpose tasks (e.g., coding, math, science)
- Adjustable reasoning levels (see the request sketch after this list)
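As a quick illustration of adjustable reasoning levels, the sketch below sends one chat request through an OpenAI-compatible client. The endpoint URL and model identifier are placeholders, and the convention of setting the level via a `Reasoning: high` system prompt follows OpenAI's published gpt-oss guidance; check your Foundry deployment details for the exact values.

```python
# Minimal sketch: calling a gpt-oss-120b deployment with a reasoning level.
# Assumptions: an OpenAI-compatible endpoint (URL is a placeholder) and the
# gpt-oss convention of setting "Reasoning: low|medium|high" in the system prompt.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR-ENDPOINT.example.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # deployment name may differ on Foundry
    messages=[
        {"role": "system", "content": "Reasoning: high"},  # adjustable reasoning level
        {"role": "user", "content": "Prove that the sum of two even numbers is even."},
    ],
)
print(response.choices[0].message.content)
```

Lower reasoning levels trade depth of chain-of-thought for latency and cost, so "low" or "medium" may be a better fit for simple, high-volume requests.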
Use cases
See Responsible AI for additional considerations for responsible use.
Key use cases
- Complex reasoning and structured problem-solving
- Agentic workflows with tool use and function calling (a tool-calling sketch follows this list)
- Production-grade coding, math, and science tasks
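To make the tool-use item concrete, here is a minimal function-calling sketch against an OpenAI-compatible chat completions API. The `get_weather` tool, its schema, and the endpoint details are hypothetical; the request shape follows the standard OpenAI tools format, which Fireworks-hosted models generally support, though you should confirm for this deployment.

```python
# Minimal function-calling sketch (get_weather is a hypothetical tool).
import json
from openai import OpenAI

client = OpenAI(base_url="https://YOUR-ENDPOINT.example.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-120b",  # deployment name may differ on Foundry
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

In a full agent loop you would execute the tool, append its result as a `tool` role message, and call the model again until it produces a final answer.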
Out of scope use cases
The provider has not supplied this information.
Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.
Technical specs
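Since billing is token-based, a back-of-the-envelope cost estimate is just a weighted sum of input and output tokens. The per-million-token rates below are placeholders, not actual Fireworks or Foundry prices; substitute the rates from the pricing page.

```python
# Back-of-the-envelope token cost estimate. The rates are PLACEHOLDERS,
# not actual prices; take real rates from the pricing page.
INPUT_PRICE_PER_M = 0.15   # hypothetical $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60  # hypothetical $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# e.g. a request with 2,000 prompt tokens and 500 completion tokens:
print(f"${estimate_cost(2_000, 500):.6f}")
```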
The provider has not supplied this information.
Training cut-off date
The provider has not supplied this information.
Training time
The provider has not supplied this information.
Input formats
Text
Output formats
Text
Supported languages
English
Sample JSON response
The provider has not supplied this information.
Model architecture
gpt-oss-120b is an open-weight large language model developed by OpenAI with a Mixture-of-Experts (MoE) architecture; a toy routing sketch follows the table below.
| Property | Value |
|---|---|
| Total Parameters | 117B |
| Activated Parameters | 5.1B |
| Architecture | Mixture-of-Experts (MoE) |
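The gap between 117B total and 5.1B activated parameters comes from MoE routing: for each token, a small router selects only a few experts out of many, so most expert weights sit idle on any given token. The sketch below is an illustrative top-k router in plain Python/NumPy, not the actual gpt-oss implementation; the expert count and k are demo values, not the published model configuration.

```python
# Illustrative MoE top-k routing (NOT the actual gpt-oss code; expert count
# and k are made-up demo values).
import numpy as np

def route(token_hidden: np.ndarray, router_w: np.ndarray, k: int = 4):
    """Pick the top-k experts for one token and softmax-normalize their weights."""
    logits = router_w @ token_hidden          # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top] - logits[top].max())
    return top, weights / weights.sum()

rng = np.random.default_rng(0)
n_experts, d_model = 128, 64                  # demo sizes, not the model config
experts, weights = route(rng.normal(size=d_model),
                         rng.normal(size=(n_experts, d_model)))
print(experts, weights)  # only these experts' parameters run for this token
```

Because only the selected experts' feed-forward weights execute per token, the activated parameter count (5.1B) is a small fraction of the total (117B), which is how the model fits its compute budget on a single H100.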
Long context
Maximum context length: 128K tokens.
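As a rough way to check whether a prompt fits in the 128K (131,072-token) window, you can count tokens locally. The snippet below uses tiktoken's `o200k_base` encoding as an approximation; the tokenizer gpt-oss actually uses may differ slightly, so treat the count as an estimate and leave headroom for the completion.

```python
# Rough context-window check. o200k_base is an approximation of the
# tokenizer gpt-oss uses; treat the count as an estimate.
import tiktoken

MAX_CONTEXT = 131_072  # the 128K-token window from the spec table below

enc = tiktoken.get_encoding("o200k_base")
prompt = "Summarize the following document: ..." * 1000
n_tokens = len(enc.encode(prompt))

print(f"{n_tokens} tokens; fits: {n_tokens < MAX_CONTEXT}")
# Leave headroom for the completion and any system/tool messages.
```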
Optimizing model performance
The provider has not supplied this information.
Additional assets
The provider has not supplied this information.
Training disclosure
Training, testing and validation
The provider has not supplied this information.
Distribution
Distribution channels
The provider has not supplied this information.
More information
Model Specifications
| Property | Value |
|---|---|
| Context Length | 131,072 tokens |
| License | Other |
| Last Updated | April 2026 |
| Input Type | Text |
| Output Type | Text |
| Provider | Fireworks |
| Languages | 1 language (English) |