NVIDIA Nemotron 3 Super NIM microservice
Version: 1
Nemotron-3-Super-120B-A12B is a large language model (LLM) trained by NVIDIA, designed to deliver strong agentic, reasoning, and conversational capabilities. It is optimized for collaborative agents and high-volume workloads such as IT ticket automation. Like other models in the family, it responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be configured through a flag in the chat template.
The model employs a hybrid Latent Mixture-of-Experts (LatentMoE) architecture, utilizing interleaved Mamba-2 and MoE layers, along with select Attention layers. Distinct from the Nano model, the Super model incorporates Multi-Token Prediction (MTP) layers for faster text generation and improved quality, and it is trained using NVFP4 quantization to maximize compute efficiency. The model has 12B active parameters and 120B parameters in total.
Supported languages: English, French, German, Italian, Japanese, Spanish, and Chinese.
This model is ready for commercial use.
What is Nemotron?
NVIDIA Nemotron™ is a family of open models with open weights, training data, and recipes, delivering leading efficiency and accuracy for building specialized AI agents.
Model Overview
Model Developer: NVIDIA Corporation
Model Dates: December 2025 - March 2026
Data Freshness:
- The post-training data has a cutoff date of February 2026.
- The pre-training data has a cutoff date of December 2025.
Input
- Input Type(s): Text
- Input Format(s): String
- Input Parameters: One-Dimensional (1D): Sequences
- Other Properties Related to Input: Maximum context length up to 1M tokens. Supported languages include: English, French, German, Italian, Japanese, Spanish, and Chinese
Output
- Output Type(s): Text
- Output Format: String
- Output Parameters: One-Dimensional (1D): Sequences
- Other Properties Related to Output: Maximum context length up to 1M tokens
Example Curl Request
#!/bin/bash
curl -X 'POST' \
  '<ENDPOINT_URL>/v1/chat/completions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <API_KEY>" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Write a limerick about the wonders of GPU computing."
      }
    ],
    "max_tokens": 256
  }'
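The same endpoint can also be called from any OpenAI-compatible client. The sketch below is illustrative only: it assumes the NIM exposes the standard /v1/chat/completions API shown above, and the model identifier and the "/think" system-prompt flag for enabling the reasoning trace are assumptions that should be verified against the model's chat template documentation.

```python
# Minimal sketch: call the NIM's OpenAI-compatible endpoint from Python.
# The model name and the "/think" reasoning flag below are assumptions;
# check the model's chat template for the exact reasoning-toggle mechanism.
from openai import OpenAI

client = OpenAI(base_url="<ENDPOINT_URL>/v1", api_key="<API_KEY>")

response = client.chat.completions.create(
    model="nvidia/nemotron-3-super-120b-a12b",  # hypothetical model identifier
    messages=[
        {"role": "system", "content": "/think"},  # assumed flag to enable the reasoning trace
        {"role": "user", "content": "Write a limerick about the wonders of GPU computing."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```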
Use Case
NVIDIA-Nemotron-3-Super-120B-A12B-BF16 is a general purpose reasoning and chat model intended to be used in English, Code, and supported multilingual contexts. This model is optimized for collaborative agents and high-volume workloads. It is intended to be used by developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. This model is also suitable for complex instruction-following tasks and long-context reasoning.
Release Date
- NGC: 03/11/2026
- Hugging Face: 03/11/2026
Reference(s)
- NVIDIA Nemotron 3 model family on Hugging Face
- NVIDIA Nemotron 2 model family on Hugging Face
- NVIDIA Nemotron 3 White Paper
Model Architecture
- Architecture Type: Mamba2-Transformer Hybrid Latent Mixture of Experts (LatentMoE) with Multi-Token Prediction (MTP)
- Network Architecture: Nemotron Hybrid LatentMoE
- Number of model parameters: 120B Total / 12B Active
Model Design
The model utilizes the LatentMoE architecture, where tokens are projected into a smaller latent dimension for expert routing and computation, improving accuracy per byte. The Super model is trained using NVFP4 (weight, activation, and gradient tensors are quantized to NVFP4) to maximize throughput on supported hardware. The model includes Multi-Token Prediction (MTP) layers, which predict multiple future tokens to provide richer training signals and enable faster inference via speculative decoding.
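To make the latent-routing idea concrete, the following is a minimal, illustrative PyTorch sketch of a Mixture-of-Experts layer that routes and computes in a reduced latent dimension before projecting back to the model dimension. All dimensions, the top-k value, and module names are invented for illustration and do not reflect the actual Nemotron 3 Super implementation.

```python
# Illustrative sketch of a latent MoE layer (not the actual Nemotron implementation).
# Tokens are projected into a smaller latent space, experts are routed and run in that
# latent space, and the result is projected back to the model dimension.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMoELayer(nn.Module):
    def __init__(self, d_model=4096, d_latent=1024, n_experts=64, top_k=4):
        super().__init__()
        self.down = nn.Linear(d_model, d_latent)        # project into latent space
        self.up = nn.Linear(d_latent, d_model)          # project back to model space
        self.router = nn.Linear(d_latent, n_experts)    # routing happens in latent space
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_latent, 4 * d_latent),
                           nn.GELU(),
                           nn.Linear(4 * d_latent, d_latent))
             for _ in range(n_experts)]
        )
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, d_model)
        z = self.down(x)                                    # (tokens, d_latent)
        weights = F.softmax(self.router(z), dim=-1)         # (tokens, n_experts)
        top_w, top_idx = weights.topk(self.top_k, dim=-1)   # keep the k best experts per token
        top_w = top_w / top_w.sum(dim=-1, keepdim=True)     # renormalize routing weights
        out = torch.zeros_like(z)
        for k in range(self.top_k):                         # naive loop; real kernels batch this
            for e in range(len(self.experts)):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] += top_w[mask, k:k + 1] * self.experts[e](z[mask])
        return self.up(out)                                 # back to (tokens, d_model)
```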
Training Methodology
Stage 1: Pre-Training
- The NVIDIA-Nemotron-3-Super-120B-A12B-Base model was pre-trained for over 25T tokens using crawled and synthetic code, math, science, and general knowledge data. Training leveraged NVFP4 quantization for efficiency. All datasets are disclosed in the Training and Evaluation Datasets section of this document. Major portions of the pre-training corpus are released in the Nemotron-Pre-Training-Datasets collection.
- Software used for pre-training: Megatron-LM
- The model was further fine-tuned on synthetic code, math, science, tool calling, instruction following, structured outputs, and general knowledge data. This stage incorporated data designed to support long-range retrieval and multi-document aggregation. All datasets are disclosed in the Training and Evaluation Datasets section of this document. Major portions of the fine-tuning corpus are released in the Nemotron-Post-Training-v3 collection. Data Designer is one of the libraries used to prepare these corpora.
- The model underwent multi-environment reinforcement learning using synchronous GRPO (Group Relative Policy Optimization) across math, code, science, instruction following, multi-step tool use, multi-turn conversations, and structured output environments; a sketch of the group-relative advantage computation appears after this list. It utilized an asynchronous RL architecture that decouples training from inference and leverages MTP to accelerate rollout generation. Conversational quality was further refined through RLHF. All datasets are disclosed in the Training and Evaluation Datasets section of this document. The RL environments and datasets are released as part of NeMo Gym.
- Software used for reinforcement learning: NeMo RL, NeMo Gym
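As a rough illustration of the group-relative signal at the heart of GRPO, the sketch below normalizes each rollout's reward against the other rollouts sampled for the same prompt. It is a simplified, assumed formulation for illustration only and omits the policy-gradient loss, clipping, and KL terms used in practice.

```python
# Simplified sketch of GRPO-style group-relative advantages (illustration only).
# For each prompt, a group of G rollouts is sampled and each rollout's reward is
# normalized against the mean and standard deviation of its own group.
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """rewards: shape (num_prompts, group_size) -> advantages of the same shape."""
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 rollouts each, scored by task-specific verifiers.
rewards = np.array([[1.0, 0.0, 0.0, 1.0],
                    [0.2, 0.9, 0.5, 0.4]])
print(group_relative_advantages(rewards))
```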
Software Integration
- Runtime Engine(s): NeMo 25.11.01
- Supported Hardware Microarchitecture Compatibility: NVIDIA Ampere; NVIDIA Blackwell; NVIDIA Hopper
- Operating System(s): Linux
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
Training
Data Modality: Text
Total size: 15,573,172,908,990 Tokens
Total number of datasets: 153
Dataset partition: Training [100%], testing [0%], validation [0%]
Time period for training data collection: 2013 to February 24, 2026
Time period for testing data collection: 2013 to February 24, 2026
Time period for validation data collection: 2013 to February 24, 2026
Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic
Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
NVIDIA-Nemotron-3-Super-120B-A12B-BF16 is pre-trained on a large corpus of high-quality curated and synthetically-generated data. It is trained in the English language, as well as 19 other languages and 43 programming languages. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering and alignment-style data to improve model accuracy. The model was trained for approximately 25 trillion tokens.
The post-training corpus for NVIDIA-Nemotron-3-Super-120B-A12B-BF16 consists of high-quality curated and synthetically-generated data. Primary languages used for post-training include English, French, German, Italian, Japanese, Spanish, and Chinese. These datasets, such as FinePDFs, EssentialWeb, HotpotQA, SQuAD, and HelpSteer3, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of demographic classes such as age, gender, or ethnicity in 64-99% of samples, depending on the source. In the subset where such terms are present, document-based datasets (FinePDFs and EssentialWeb) contain representational skews, such as references to "male" outnumbering those to "female", and mentions of "White" as the most frequent among ethnic identifiers (comprising 43-44% of ethnicity mentions). To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies like counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy.
During post-training, we generate synthetic data by distilling trajectories, solutions, and translations from strong teacher models and agent systems, often grounded in real tasks or documents and aggressively filtered for quality. For math, code, and science, we start from curated problem sets and use open-source permissive models such as GPT-OSS-120B to produce step-by-step reasoning traces, candidate solutions, best-of-n selection traces, and verified CUDA kernels. For long-context and science, we build synthetic QA and reasoning data by retrieving passages from long documents, generating MCQ/OpenQA questions and answers, and paraphrasing them into multiple prompt/response formats to ensure diversity. Across all pipelines we stack automated verification—compilers, numerical checks, language identification—to ensure our data is high quality.
For all domains, we apply a unified data filtering pipeline to ensure that only high-quality, license-compliant, and verifiable samples are used for post-training. We first discard malformed examples using structural checks (e.g., missing tool definitions when tool calls are present). We then aggressively filter reasoning traces exhibiting pathological repetition, such as repeated n-grams within a sliding window or across the entire trajectory, which we found to be a strong indicator of malformed or low-quality reasoning.
Finally, based on internal audits of synthetically generated datasets, we observed that some teacher models occasionally produce reasoning traces and final responses that implicitly align with specific political entities or promote nationalistic narratives. To mitigate this, we apply targeted keyword- and regex-based filters and remove all trajectories matching such behavior. Alongside the model, we release our final pre-training and post-training data, as outlined in this section. For ease of analysis, there is a sample set that is ungated. For all remaining code, math, and multilingual data, gating and approval are required, and the dataset is permissively licensed for model training purposes. More details on the datasets and synthetic data generation methods can be found in the technical report NVIDIA Nemotron 3 Super.
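The repetition check described above can be approximated with a simple repeated-n-gram heuristic. The sketch below is an assumed, illustrative version (the n-gram length, window size, and threshold are invented) and is not the production filtering pipeline.

```python
# Illustrative repeated-n-gram filter for reasoning traces (not the production pipeline).
# Flags a trace if any n-gram repeats too often within a sliding window of tokens.
from collections import Counter

def has_pathological_repetition(text: str, n: int = 8, window: int = 512, max_repeats: int = 4) -> bool:
    tokens = text.split()  # crude whitespace tokenization for illustration
    for start in range(0, max(1, len(tokens) - window + 1), window // 2):
        chunk = tokens[start:start + window]
        ngrams = Counter(tuple(chunk[i:i + n]) for i in range(len(chunk) - n + 1))
        if ngrams and max(ngrams.values()) > max_repeats:
            return True
    return False

# Example: a degenerate trace that loops on the same phrase would be flagged.
looping = "therefore the answer is unclear " * 50
print(has_pathological_repetition(looping))  # True
```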
Base Pre-Training Corpus (Nemotron 3 Foundation)
The foundation of the model is trained on the Nemotron-3-Nano corpus, comprising the following collections:
| Dataset Collection | Token Counts | Description |
|---|---|---|
| Nemotron-CC-v2 & v2.1 | 9.13T | A massive collection of English web data filtered from Common Crawl, including 2.5T+ tokens of new organic, translated, and synthetically rephrased content. |
| Nemotron-CC-Code-v1 | 427.9B | High-quality code tokens extracted from Common Crawl using the Lynx + LLM pipeline to preserve structure and equations. |
| Nemotron-Pretraining-Code-v1 & v2 | 1.09T | Curated GitHub code references with multi-stage filtering, deduplication, and large-scale synthetic code data. |
| Nemotron-CC-Math-v1 | 133.3B | High-quality math pre-training dataset preserving LaTeX formatting and mathematical structures. |
| Nemotron-Pretraining-Specialized-v1 | 336.4B | Synthetic datasets targeting specialized domains such as STEM reasoning and scientific coding. |
Public Datasets
Crawled and Scraped from Online Sources by NVIDIA
The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper.
Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering—similar to what we did for lower-quality English data in the Nemotron-CC pipeline, but selectively removing filters that did not work well for some languages. Deduplication was done in the same way as for Nemotron-CC.
The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any code whose license is not in our permissive-license set (for additional details, refer to the technical report).
| Dataset | Modality | Dataset Size | Collection Period | Collecting Organization |
|---|---|---|---|---|
| English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
| English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
| Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
| GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
Private Non-publicly Accessible Datasets of Third Parties
| Dataset | Model(s) used |
|---|---|
| Global Regulation | Unknown |
| TAUS Translation Memory | Unknown |
| Scale HLE | Unknown |
| HackerRank Coding | Unknown |
| RL data for Search | Gemini 3; GPT-5 |
Private Non-publicly Accessible Datasets by NVIDIA
| Dataset | Model(s) used |
|---|---|
| Simple Minesweeper | - |
| Simple Sudoku | - |
| Multitool Typewriter Hard | - |
| Machine Translation of News Commentary and TAUS Translation Memory | - |
| Machine Translation of STEM | Qwen2.5-14B-Instruct |
| Competitive Coding RL data from Nemotron Cascade | - |
| Long context RL | - |
| Single-step SWE RL for patch generation | - |
| OpenHands SWE | - |
NVIDIA-Sourced Synthetic Datasets
| Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
|---|---|---|---|---|
| Nemotron-Pretraining-Formal-Logic | Text | 128,022,285 | Nemotron Personas | Qwen3-235B-A22B-Thinking-2507 |
| Nemotron-Pretraining-Economics | Text | 73,374,154 | - | Qwen3-235B-A22B-Thinking-2507 |
| Nemotron-Pretraining-Multiple-Choice | Text | 1,609,214,470 | MMLU Auxiliary Train | DeepSeek-V3 ; Qwen3-235B-A22B |
| Nemotron-Pretraining-Code-Concepts | Text | 7,294,510,156 | - | gpt-oss-20b ; gpt-oss-120b |
| Nemotron-Pretraining-Unconditional-Algorithmic | Text | 196,492,899 | - | gpt-oss-120b ; Qwen3-235B-A22B |
| Synthetic Tasks from DeepSeek-V3 and Qwen3-235B-A22B | Text | 6.7B | Train splits of Into the Unknown; AI2 ARC (AI2 Reasoning Challenge); BLiMP (Benchmark of Linguistic Minimal Pairs); CommonSenseQA; GLUE; HeadQA; Hendrycks Ethics; Memo Trap; modus-tollens; NeQA; pattern-matching-suppression; mastermind_24_mcq_random; mastermind_24_mcq_close; quote-repetition; redefine-math; Repetitive Algebra; sig-figs; MMLU-Pro; MC-TACO; MedConceptsQA; MMLU_dataset; OpenbooksQA; PIQA (Physical Interaction Question Answering); SocialIQA; SuperGLUE; tinyAI2_arc; tinyMMLU; tinyWinogrande; TruthfulQA; WebQuestions; Winogrande; GPQA; MBPP | DeepSeek v3 ; Qwen3-235B-A22B |
| Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | Art of Problem Solving ; American Mathematics Competitions 8 ; American Mathematics Competitions 10 ; | DeepSeek-R1 |
| Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | social-chemestry-101 ; Moral Stories | Mixtral-8x22B-v0.1 |
| Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | OpenStax - CC BY-SA subset | DeepSeek-V3 ; Mixtral-8x22B-v0.1 ; Qwen2.5-72B |
| Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | OpenStax - CC BY-SA subset | DeepSeek-V3 ; Mixtral-8x22B-v0.1 ; Qwen2.5-72B |
| Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | OpenStax - CC BY-SA subset ; GSM8K ; Open Textbook Library - CC BY-SA & GNU subset | DeepSeek-R1 , DeepSeek-V3 ; DeepSeek-V3-0324 ; Qwen2.5-72B |
| Nemotron-PrismMath | Text | 4.6B | Big-Math-RL-Verified ; OpenR1-Math-220k | Qwen2.5-0.5B-instruct , Qwen2.5-72B-Instruct ; DeepSeek-R1-Distill-Qwen-32B |
| Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | arXiv ; National Institutes of Health ExPorter ; BioRxiv ; PMC Article ; USPTO Backgrounds ; peS2o ; Global Regulation; CORE ; PG-19 ; DOAB CC BY & CC BY-SA subset ; NDLTD | Qwen2.5-72B-Instruct |
| Refreshed Nemotron-MIND from phi-4 | Text | 73B | Common Crawl | phi-4 |
| Nemotron-CC-Math-4plus | Text | 52.3B | Common Crawl | phi-4 |
| Nemotron-CC-Math-3 | Text | 80.9B | Common Crawl | phi-4 |
| Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | AQUA-RAT ; LogiQA ; AR-LSAT | DeepSeek-V3 ; DeepSeek-V3-0324 |
| Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | AQUA-RAT ; LogiQA ; AR-LSAT | Qwen3-30B-A3B |
| Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | - | Art of Problem Solving ; American Mathematics Competitions 8 ; American Mathematics Competitions 10 ; GSM8K ; PRM800K | Qwen2.5-32B-Instruct ; Qwen2.5-Math-72B ; Qwen2.5-Math-7B ; Qwen2.5-72B-Instruct |
| Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | MMLU Auxiliary Train | DeepSeek-R1 |
| Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | - | arXiv ; National Institutes of Health ExPorter ; BioRxiv ; PMC Article ; USPTO Backgrounds ; peS2o ; Global Regulation; CORE ; PG-19 ; DOAB CC BY & CC BY-SA subset ; NDLTD | Qwen2.5-72B-Instruct |
| Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | Common Crawl | Qwen3-30B-A3B ; Mistral-NeMo-12B-Instruct |
| Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | - | Common Crawl | Qwen3-30B-A3B |
| Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | - | Wikimedia | Qwen3-30B-A3B |
| Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | - | Wikimedia | Nemotron-4-340B-Instruct |
| Synthetic Common Crawl Code from phi-4 | Text | 427.9B | Common Crawl | phi-4 |
| Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | Wikimedia | Qwen3-235B-A22B |
| Tool Calling Data | Text | 26.2B | - | Qwen3-235B-A22B-2507 ; gpt-oss-120b |
| Synthetic Essential-Web from QwQ-32B | Text | 28.1B | Essential-Web | QwQ-32B |
| Translated Synthetic Crawl | Text | 389.9B | Common Crawl | Qwen3-30B-A3B |
| Translated Synthetic Wikipedia | Text | 7.9B | Wikimedia | Qwen3-30B-A3B |
| Synthetic Art of Problem Solving from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Art of Problem Solving ; American Mathematics Competitions 8 ; American Mathematics Competitions 10 | gpt-oss-120b ; Qwen2.5-32B-Instruct |
| Synthetic Stack Exchange from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Stack Exchange | gpt-oss-120b ; Qwen2.5-32B-Instruct |
| Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | Undisclosed | OpenCodeReasoning | DeepSeek-R1-0528 |
| Synthetic HackerRank Coding from DeepSeek-R1-0528 | Text | Undisclosed | HackerRank Coding Dataset | DeepSeek-R1-0528 |
| Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic Art of Problem Solving and Stack Exchange from gpt-oss-120b, Qwen2.5-32B-Instruct, and Goedel-Prover-V2-32B | Text | Undisclosed | Art of Problem Solving ; American Mathematics Competitions 8 ; American Mathematics Competitions 10 ; Stack Exchange | gpt-oss-120b ; Qwen2.5-32B-Instruct ; Goedel-Prover-V2-32B |
| Synthetic Multilingual Science and Code data from DeepSeek-R1, DeepSeek-R1-0528, Qwen2.5-32B-Instruct, and Qwen3-235B-A22B, translated with Qwen2.5-32B-Instruct and Qwen2.5-14B-Instruct | Text | Undisclosed | Stack Exchange ; SCP-116K ; LIMO ; TACO ; Code Contest; Codeforces | DeepSeek-R1 ; DeepSeek-R1-0528 ; Qwen2.5-32B-Instruct ; Qwen3-235B-A22B ; |
| Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b and Mixtral-8x7B-v0.1 | Text | Undisclosed | Nemotron Content Safety Dataset V2 ; Gretel Synthetic Safety Alignment Dataset ; RedTeam-2K ; Malicious Tasks ; Nemotron-Personas-USA | DeepSeek-R1-0528 ; gpt-oss-120b ; Mixtral-8x7B-v0.1 |
| Synthetic STEM from Qwen3-235B-A22B-Instruct-2507 and gpt-oss-120b | Text | Undisclosed | arXiv ; National Institutes of Health ExPorter ; BioRxiv ; PMC Article ; USPTO Backgrounds ; peS2o ; Global Regulation; CORE ; PG-19 ; DOAB CC BY & CC BY-SA subset ; NDLTD | Qwen3-235B-A22B-Instruct-2507 ; gpt-oss-120b |
| Synthetic KernelBook from DeepSeek-R1-0528 | Text | Undisclosed | KernelBook | DeepSeek-R1-0528 |
| Synthetic Tool Calling from Qwen3-235B-A22B-Thinking-2507 and Qwen3-Next-80B-A3B-Thinking | Text | Undisclosed | ToolBench ; glaive-function-calling-v2 ; APIGen Function-Calling ; Nemotron-Personas-USA | Qwen3-235B-A22B-Thinking-2507 ; Qwen3-Next-80B-A3B-Thinking |
| Synthetic Chat from gpt-oss-120b, Mixtral-8x22B-Instruct-v0.1, Qwen3-235B-A22B-Instruct-2507 , and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | C4 ; LMSYS-Chat-1M ; ShareGPT ; GSM8K ; PRM800K ; FinQA ; WikiTableQuestions ; Riddles ; glaive-function-calling-v2 ; SciBench ; tigerbot-kaggle-leetcodesolutions-en-2k ; OpenBookQA ; Advanced Reasoning Benchmark ; Software Heritage; Khan Academy Math Keywords ; WildChat-1M ; Nemotron-Personas-USA | gpt-oss-120b ; Mixtral-8x22B-Instruct-v0.1 ; Qwen3-235B-A22B-Instruct-2507 ; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | CORE ; PG-19 ; DOAB CC BY & CC BY-SA subset ; NDLTD | Qwen3-235B-A22B-Instruct-2507 |
| Synthetic Tool Use Interactive Agent from gpt-oss-120b, DeepSeek-R1-0528, Qwen3-32B, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | NVIDIA Internal | gpt-oss-120b ; DeepSeek-R1-0528 ; Qwen3-32B ; and Qwen3-235B-A22B-Thinking-2507 |
| Synthetic STEM from Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | ICHO-IPH0 ; Physics Big ; Scale HLE; OpenMathReasoning ; OpenCodeReasoning | Qwen3-235B-A22B-Thinking-2507 |
| Synthetic DocFinQA and SWE-smith from Qwen3-Coder-480B-A35B-Instruct and Kimi-K2-Thinking | Text | Undisclosed | DocFinQA ; SWE-smith | Qwen3-Coder-480B-A35B-Instruct ; Kimi-K2-Thinking |
| Synthetic Math from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | - | gpt-oss-120b ; Qwen2.5-32B-Instruct |
| Synthetic Essential-Web from gpt-oss-120b | Text | Undisclosed | Essential-Web | gpt-oss-120b |
| Synthetic Scale HLE from gpt-oss-120b | Text | Undisclosed | Scale HLE | gpt-oss-120b |
| Synthetic CDQuestions from gpt-oss-120b | Text | Undisclosed | CDQuestions | gpt-oss-120b |
| Synthetic Stack Exchange from gpt-oss-120b | Text | Undisclosed | Stack Exchange | gpt-oss-120b |
| Synthetic GPQA from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Stack Exchange | gpt-oss-120b ; Qwen2.5-32B-Instruct |
| Synthetic Vedantu from gpt-oss-120b | Text | Undisclosed | Vedantu | gpt-oss-120b |
| Synthetic SWE-Gym and R2E-Gym-Subset from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym ; R2E-Gym-Subset | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic SWE-Gym and R2E-Gym-Subset from DeepSeek-R1-0528 | Text | Undisclosed | SWE-Gym ; R2E-Gym-Subset | DeepSeek-R1-0528 |
| Synthetic HelpSteer, LMSYS-Chat-1M, and Nemotron-Personas-USA from gpt-oss-120b, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | HelpSteer2 ; HelpSteer3 ; LMSYS-Chat-1M ; Nemotron-Personas-USA | gpt-oss-120b ; Qwen3-235B-A22B-Instruct-2507 ; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Structured Outputs from Qwen3-30B-A3B-Instruct-2507, Qwen3-30B-A3B-Thinking-2507, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | - | Qwen3-30B-A3B-Instruct-2507 ; Qwen3-30B-A3B-Thinking-2507 ; Qwen3-235B-A22B-Instruct-2507 ; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Search STEM MCQ from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen3-235B-A22B ; DeepSeek-R1-0528 |
| Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic OpenSTEM from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen2.5-32B-Instruct ; DeepSeek-R1-0528 |
| Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen2.5-32B-Instruct ; DeepSeek-R1-0528 |
| Synthetic MCQ10 from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic MCQ4 from Qwen3-235B-A22B, DeepSeek-R1-0528, and Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | - | Qwen3-235B-A22B ; DeepSeek-R1-0528 ; Qwen3-235B-A22B-Instruct-2507 |
| Synthetic OpenMathReasoning from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | OpenMathReasoning | gpt-oss-120b ; Qwen2.5-32B-Instruct |
| Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen3-235B-A22B ; DeepSeek-R1-0528 |
| Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | - | QwQ-32B ; Qwen3-30B-A3B ; Qwen3-235B-A22B ; Qwen3-235B-A22B-Instruct-2507 ; Mistral-Small-3.1-24B-Instruct-2503 ; Mistral-Small-3.2-24B-Instruct-2506 ; MiniMax-M1-80k ; MiniMax-M1-40k ; Kimi-K2-Instruct ; DeepSeek-V3-0324 ; DeepSeek-R1-0528 |
| Synthetic WildChat-1M and arena-human-preference-140k from DeepSeek-R1, gemma-2-2b-it, gemma-3-27b-it, gpt-oss-20b, gpt-oss-120b, Mistral-7B-Instruct-v0.3, Mixtral-8x22B-Instruct-v0.1, Nemotron-4-340B-Instruct, NVIDIA-Nemotron-Nano-9B-v2, Phi-4-mini-instruct, Phi-3-small-8k-instruct, Phi-3-medium-4k-instruct, Qwen3-235B-A22B, QwQ-32B | Text | Undisclosed | WildChat-1M ; arena-human-preference-140k | DeepSeek-R1 ; gemma-2-2b-it ; gemma-3-27b-it ; gpt-oss-20b ; gpt-oss-120b ; Mistral-7B-Instruct-v0.3 ; Mixtral-8x22B-Instruct-v0.1 ; Nemotron-4-340B-Instruct ; NVIDIA-Nemotron-Nano-9B-v2 ; Phi-4-mini-instruct ; Phi-3-small-8k-instruct ; Phi-3-medium-4k-instruct ; Qwen3-235B-A22B ; QwQ-32B |
| Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b, DeepSeek-R1-Distill-Qwen-7B, and Mixtral-8x7B-v0.1 | Text | Undisclosed | Nemotron Content Safety Dataset V2 ; Gretel Synthetic Safety Alignment Dataset ; RedTeam-2K ; Malicious Tasks ; | DeepSeek-R1-0528 ; gpt-oss-120b ; DeepSeek-R1-Distill-Qwen-7B ; Qwen3-30B-A3B-Thinking-2507 ; Qwen3-235B-A22B-Instruct-2507 ; Mixtral-8x7B-v0.1 |
| Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | Qwen3-32B |
| Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | OpenCodeReasoning | DeepSeek-R1 |
| Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | LIMO | DeepSeek-R1-0528 |
| Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | SCP-116K | DeepSeek-R1-0528 |
| Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | Stack Exchange | DeepSeek-R1-0528 |
| Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | Common Crawl | Qwen3-30B-A3B |
| Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | Wikimedia | Qwen3-30B-A3B |
| Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | Essential-Web | Qwen3-30B-A3B ; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | Common Crawl ; FineMath | Qwen3-30B-A3B ; Qwen3-235B-A22B ; phi-4 |
| Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | Magicoder-Evol-Instruct-110K ; opc-sft-stage2 ; TACO ; OpenCodeReasoning ; OpenMathReasoning ; NuminaMath CoT | DeepSeek-R1 ; DeepSeek-R1-0528 |
| Synthetic Nemotron-Personas-USA from gpt-oss-120b and Qwen3-8B | Text | Undisclosed | Nemotron-Personas-USA | gpt-oss-120b ; Qwen3-8B |
| Synthetic Text-To-SQL | Text | Undisclosed | - | gpt-oss-120b |
| Synthetic Agentless SWE | Text | Undisclosed | SWE-Bench-Train ; SWE-Fixer-Train ; SWE-reBench ; SWE-smith | DeepSeek-R1-0528 |
| Synthetic Search Graph Walk | Text | Undisclosed | - | MiniMax-M2 |
| Synthetic CUDA 100k | Text | Undisclosed | KernelBook ; HuggingFace Transformers ; FlashInfer | DeepSeek-R1-0528 ; gpt-oss-120b |
| Synthetic Safety | Text | Undisclosed | Nemotron Content Safety Dataset V2 ; Gretel Synthetic Safety Alignment Dataset ; RedTeam-2K ; HarmfulTasks | gpt-oss-120b ; NVIDIA-Nemotron-Nano-9B-v2 ; gemma-3-4b-it |
| Synthetic Agentic Diverse Domains | Text | Undisclosed | - | DeepSeek-R1-0528 ; Qwen3-235B-A22B-Thinking-2507 ; Qwen3-235B-A22B-Instruct-2507 ; Qwen3-32B ; gpt-oss-120b ; DeepSeek-V3.2 |
| Synthetic SWE Unverified | Text | Undisclosed | - | gpt-oss-120b ; Qwen3-Coder-480B-A35B-Instruct ; GLM-4.7-Flash |
| Synthetic Scale HLE from Deepseek-V3 | Text | Undisclosed | Scale HLE | DeepSeek-V3-0324 |
| Synthetic CDQuestions from Deepseek-V3 | Text | Undisclosed | CDQuestions | DeepSeek-V3-0324 |
| Synthetic Stack Exchange from Deepseek-V3 | Text | Undisclosed | Stack Exchange | DeepSeek-V3-0324 |
| Synthetic GPQA from Deepseek-V3 | Text | Undisclosed | Stack Exchange | DeepSeek-V3-0324 |
| Synthetic Vedantu from Deepseek-V3 | Text | Undisclosed | Vedantu | DeepSeek-V3-0324 |
| Synthetic Tool Call Schema for RL | Text | Undisclosed | ToolBench ; glaive-function-calling-v2 ; APIGen Function-Calling ; Nemotron-Personas-USA | Qwen3-235B-A22B-Thinking-2507 ; Qwen3-Next-80B-A3B-Thinking |
| Synthetic Data for Search | Text | Undisclosed | Wikimedia | MiniMax-M2 |
| Synthetic Instruction Following for RL | Text | Undisclosed | - | NVIDIA-Nemotron-Nano-9B-v2 ; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Conversational Agentic Tool-Use RL | Text | Undisclosed | - | DeepSeek-V3.2 ; DeepSeek-R1-0528 ; Qwen3-235B-A22B-Thinking-2507 ; Qwen3-32B ; gpt-oss-120b ; Qwen3-235B-A22B-Instruct-2507 |
| Synthetic Terminal Pivot RL | Text | Undisclosed | SWE-smith ; Nemotron-Cascade-RL-SWE ; Vendor supplied | DeepSeek-V3.2 ; Qwen3-Coder-480B-A35B-Instruct ; Kimi-K2.5 ; Qwen3-235B-A22B-Instruct-2507 |
Language Distribution in Post-Training
For our post-training recipe, we focused on the following languages in addition to English: French, German, Italian, Japanese, Spanish, and Chinese. These languages were represented in the form of multilingual reasoning and translation tasks. The following table depicts our sample distribution for the 6 languages and 5 translation pairs.
| Language | Size |
|---|---|
| English | 13.48M |
| Italian | 53k |
| German | 53k |
| Spanish | 53k |
| French | 53k |
| Japanese | 53k |
| Chinese | 53k |
| English <-> Italian | 43.2k |
| English <-> German | 43.2k |
| English <-> Spanish | 43.2k |
| English <-> French | 43.2k |
| English <-> Japanese | 43.2k |
Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. We advise against circumvention of any provided safety guardrails contained in the Model without a substantially similar guardrail appropriate for your use case. For more details, see the Safety and Explainability sections below. For more detailed information on ethical considerations for this model, please see the Bias and Privacy sections below. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.
Safety
| Field | Response |
|---|---|
| Model Application Field(s): | Chat, Instruction Following, Chatbot Development, Code Generation, Reasoning, Customer Service |
| Describe the life critical impact (if present). | Not Applicable |
| Description of methods implemented in data acquisition or processing, if any, to address other types of potentially harmful data in the training, testing, and validation data: | We used a guard model for content safety to exclude potentially harmful data from training. |
| Description of any methods implemented in data acquisition or processing, if any, to address illegal or harmful content in the training data, including, but not limited to, child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII) | We used a Gemma-3 4B-based guard model trained on Nemotron Content Safety Dataset v2 for content safety to exclude potentially illegal or harmful content from the training. |
| Use Case Restrictions: | Abide by the NVIDIA Nemotron Open Model License Agreement . |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Access restrictions are enforced on datasets during training, and dataset license constraints are adhered to. |
| This AI model was developed based on our policies to ensure responsible data handling and risk mitigation. The datasets used for training have been scanned for harmful content and illegal content, consistent with our policies including scanning for Child Sexual Abuse Material (CSAM). Ongoing review and monitoring mechanisms are in place based on our policies and to maintain data integrity. | True. We use Nemotron Content Safety Dataset V2 and an internal safety dataset specialized for minority sexuality for content safety evaluation to ensure the safety of this model. |
Privacy
| Field | Response |
|---|---|
| Generatable or reverse engineerable personal data? | No |
| Personal data used to create this model? | No |
| Was consent obtained for any personal data used? | Not Applicable |
| A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of personal data in the training data, where relevant and applicable. | We used only prompts that do not contain any personal data for synthetic data generation. |
| How often is the dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Is data compliant with data subject requests for data correction or removal, if such a request was made? | No, not possible with externally-sourced data. |
| Applicable Privacy Policy | NVIDIA Privacy Policy |
| During AI model development, strict adherence to copyright policy ensured compliance through risk mitigation and legal reviews. Post-data collection, reserved rights content is identified and removed, with verified opt-out processes for rightsholders. Detailed records document due diligence and transparency. | True |
| We employ automated tools and data processing techniques during data preparation to identify and filter certain categories of personal information. Scans of training datasets detected no PII. | True. We employ automated tools and data processing techniques to scan for Personally Identifiable Information (PII) during data preparation to identify and filter certain categories of personal information, including phone numbers, email addresses, credit card numbers, and public-facing contact details. Scans of Common Crawl, CC-News, and Wikimedia datasets did not detect PII in the majority of samples; however, Microsoft Presidio indicated potential findings, including business contact information embedded in natural language, such as email addresses and phone numbers. Verified instances of PII were then removed through a combination of automated filtering and human-in-the-loop validation. In contrast, scans of financial reasoning datasets, including NVIDIA-created and web-scraped datasets, via Presidio Analyzer indicated false positives such as numerical sequences, and did not indicate any verified instances of PII. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy. An illustrative scanning sketch follows this table. |
| Privacy Testing: | Constrained to English-language inputs. Multi-lingual parity is not currently claimed or guaranteed. |
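As a rough illustration of the kind of PII scan described in the table above, the sketch below uses Microsoft Presidio's analyzer to flag emails, phone numbers, and credit card numbers in a batch of text samples. The entity selection and sampling approach are assumptions for illustration and do not represent NVIDIA's actual scanning configuration.

```python
# Illustrative PII scan with Microsoft Presidio (not NVIDIA's actual pipeline or settings).
# Flags common PII categories in text samples so they can be reviewed or filtered.
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()
ENTITIES = ["EMAIL_ADDRESS", "PHONE_NUMBER", "CREDIT_CARD"]  # assumed subset of categories

def scan_samples(samples):
    flagged = []
    for i, text in enumerate(samples):
        results = analyzer.analyze(text=text, entities=ENTITIES, language="en")
        if results:
            flagged.append((i, [(r.entity_type, r.score) for r in results]))
    return flagged

samples = [
    "The quarterly report was filed on time.",
    "Contact sales at jane.doe@example.com or +1-555-0100 for details.",
]
print(scan_samples(samples))  # only the second sample should be flagged
```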
Explainability
| Field | Response |
|---|---|
| Intended Task/Domain: | Text generation, reasoning, and chat |
| Model Type: | Text-to-text Mamba2-Transformer Hybrid |
| Intended Users: | Generative AI creators working with conversational AI models. |
| Output: | Text |
| Tools used to evaluate datasets to identify synthetic data and ensure data authenticity. | We used a Gemma-3 4B-based filtering model fine-tuned on Nemotron Content Safety Dataset v2 to ensure the quality of synthetic data. |
| Describe how the model works: | Generates text autoregressively, predicting the next token from the context provided in the input sequence using its hybrid Mamba-2, attention, and latent MoE layers. |
| Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: | Age, Disability Status, Gender Identity, Nationality, Physical Appearance, Ethnicity, Socioeconomic Status, Sexual Orientation, Religion |
| Technical Limitations & Mitigation: | This model performs particularly well in instruction-following regimes and, as such, may be strongly influenced by untrusted inputs; it should be paired with appropriate guardrails and data filtering to better align use-case behaviors when exposed to such data. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Accuracy, Throughput, and User-side throughput |
| Potential Known Risks: | The model was optimized explicitly for instruction following and as such may be influenced by untrusted inputs (prompt injection, indirect prompt injection, jailbreaking, web search, etc.) as a result of its instruction tuning that may degrade safety alignment and other training efforts. This model should be paired with additional guardrails and data filtering to limit exposure to instructions from malicious sources. Bypassing of safety alignment, system guardrails, and filters may allow harmful outcomes up to and including remote code execution in some agentic systems when effective security controls are not in place. The model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may generate and amplify harmful, biased, or otherwise unsafe content reinforcing these biases and return toxic responses especially when prompted with toxic prompts. The model may also generate answers that may be inaccurate, omit key information, or include irrelevant or redundant text producing socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. The model may exhibit self-anthropomorphism (e.g., displaying human-like characteristics in dialogue, such as expressing preferences and emotions). In integrated system contexts, the model could potentially be exploited to access or disclose information beyond the model’s intended permissions or scope of operation. |
| Licensing: | NVIDIA Nemotron Open Model License Agreement |
Bias
| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups protected classes in model design and testing: | None |
| Bias Metric (If Measured): | BBQ Accuracy Scores in Ambiguous Contexts |
| Which characteristic (feature) show(s) the greatest difference in performance?: | The model shows high variance in the characteristics when it is used with a high temperature. |
| Which feature(s) have the worst performance overall? | Physical Appearance |
| Measures taken to mitigate against unwanted bias: | Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) were employed to calibrate the model's reasoning capabilities to maintain logical consistency and appropriate complexity when interacting with or interpreting data from diverse age demographics. |
| If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: | The training datasets contain a large amount of synthetic data generated by LLMs. We manually curated prompts. |
| Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: | BBQ |
| Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: | These datasets, such as web-scraped finance reasoning data, do not collectively or exhaustively represent all demographic groups (and proportionally therein). For instance, these datasets do not contain explicit mentions of the following classes: age, gender, or ethnicity in approximately 97% to 99% of samples. Finance reasoning data scraped from SEC EDGAR contained a notable representational skew where ethnicity mentions are dominated by Middle Eastern contexts (found in finance documents), while gender is explicitly mentioned in only 0.9% of samples (including Male-only, Female-only, and Both). To mitigate these imbalances, we recommend considering evaluation techniques such as bias audits, fine-tuning with demographically balanced datasets, and mitigation strategies such as counterfactual data augmentation to align with the desired model behavior. This evaluation used a 3,000-sample subset per dataset, identified as the optimal threshold for maximizing embedder accuracy. |
| Unwanted Bias Testing: | Constrained to English-language inputs. Multi-lingual parity is not currently claimed or guaranteed. |
Citation
@misc{nvidia_nemotron_3_2025,
title = {NVIDIA Nemotron 3: Efficient and Open Intelligence},
author = {{NVIDIA}},
year = {2025},
url = {https://arxiv.org/abs/2512.20856},
note = {White Paper}
}
Evaluation Dataset
- Data Collection Method by dataset: Hybrid: Human, Synthetic
- Labeling Method by dataset: Hybrid: Automated, Human, Synthetic
Inference
- Acceleration Engine: PyTorch
- Test Hardware:
- NVIDIA Hopper
- 1-8x H100
- 1-8x H200
- NVIDIA Grace Blackwell
- GB200
Model Specifications
License: Custom
Last Updated: March 2026
Input Type: Text
Output Type: Text
Provider: NVIDIA
Languages: 7 Languages