NVIDIA Nemotron 3 Nano NIM microservice
Version: 1
Nemotron 3 Nano is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be configured through a flag in the chat template. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so, albeit with a slight decrease in accuracy for harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks.
The model employs a hybrid Mixture-of-Experts (MoE) architecture, consisting of 23 Mamba-2 and MoE layers along with 6 Attention layers. Each MoE layer includes 128 experts plus 1 shared expert, with 5 experts activated per token. The model has 3.5B active parameters and 30B parameters in total. Supported languages include English, German, Spanish, French, Italian, and Japanese. This model was improved using Qwen and is ready for commercial use.
Input
Type(s): Text
Format: String
Parameters: Text Sequences (1D)
Maximum Input Size: 128K Tokens
Languages Supported: English, Spanish, French, German, Japanese, Italian
Output
Type(s): Text
Format: String
Parameters: Text Sequences (1D)
Maximum Output Size: 128K Tokens

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

NVIDIA AI Enterprise
NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade co-pilots and other generative AI applications. Easy-to-use microservices provide optimized model performance with enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their businesses on AI.
Intended Use Case
NVIDIA Nemotron 3 Nano is a general-purpose reasoning and chat model intended for use in English and coding languages. Five additional languages (Spanish, French, German, Japanese, Italian) are also supported. This model is intended for developers designing AI agent systems, chatbots, RAG systems, and other AI-powered applications. It is also suitable for typical instruction-following tasks.

Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our Trustworthy AI terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. We advise against circumventing any safety guardrails provided in the Model without a substantially similar guardrail appropriate for your use case. For more details, see Safety & Security. For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI concerns here.

Example Curl Requests
Chat Completions API

```bash
#!/bin/bash
curl -X 'POST' \
  '<ENDPOINT_URL>/v1/chat/completions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <API_KEY>" \
  -d '{
    "model": "nvidia/nemotron-3-nano",
    "messages": [
      {
        "role": "system",
        "content": "/think"
      },
      {
        "role": "user",
        "content": "Write a limerick about the wonders of GPU computing."
      }
    ],
    "max_tokens": 256
  }'
```
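To get a final answer without the intermediate reasoning trace, the same request can be sent with reasoning switched off. A minimal sketch, assuming the chat template's counterpart to the `/think` flag is `/no_think` (verify against the chat template shipped with your deployment):

```bash
#!/bin/bash
# Same request as above, but the system flag ("/no_think" is assumed here)
# asks the model to skip the reasoning trace and answer directly.
curl -X 'POST' \
  '<ENDPOINT_URL>/v1/chat/completions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <API_KEY>" \
  -d '{
    "model": "nvidia/nemotron-3-nano",
    "messages": [
      {
        "role": "system",
        "content": "/no_think"
      },
      {
        "role": "user",
        "content": "Write a limerick about the wonders of GPU computing."
      }
    ],
    "max_tokens": 256
  }'
```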
Responses API

```bash
#!/bin/bash
curl -X 'POST' \
  '<ENDPOINT_URL>/v1/responses' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <API_KEY>" \
  -d '{
    "model": "nvidia/nemotron-3-nano",
    "input": "Hello, how are you?",
    "max_output_tokens": 128,
    "stream": false
  }'
```
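When scripting against the endpoint, it is often handy to print only the generated text. A minimal sketch, assuming the standard OpenAI-compatible Chat Completions response shape (`.choices[0].message.content`) and that `jq` is installed:

```bash
#!/bin/bash
# Send a chat request and extract just the assistant's message with jq.
# The field path assumes the OpenAI-compatible response schema.
curl -s -X 'POST' \
  '<ENDPOINT_URL>/v1/chat/completions' \
  -H 'Accept: application/json' \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <API_KEY>" \
  -d '{
    "model": "nvidia/nemotron-3-nano",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "max_tokens": 128
  }' | jq -r '.choices[0].message.content'
```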
Training, Testing, and Evaluation Datasets
| Property | Value |
|---|---|
| Data Modality | Text |
| Total Size | 10,648,823,153,919 Tokens |
| Total Number of Datasets | 141 |
| Dataset Partition | Training [100%], Testing [0%], Validation [0%] |
| Training Data Collection Period | 2013 to May 1, 2025 |
| Testing Data Collection Period | 2013 to May 1, 2025 |
| Validation Data Collection Period | 2013 to May 1, 2025 |
| Data Collection Method | Hybrid: Automated, Human, Synthetic |
| Labeling Method | Hybrid: Automated, Human, Synthetic |
Public Datasets
| Dataset | Collection Period |
|---|---|
| GSM8K | 4/23/2025 |
| CC-NEWS | 4/23/2025 |
| Common Crawl | 4/23/2025 |
| Wikimedia | 4/23/2025 |
| Bespoke-Stratos-17k | 4/23/2025 |
| tigerbot-kaggle-leetcodesolutions-en-2k | 4/23/2025 |
| glaive-function-calling-v2 | 4/23/2025 |
| APIGen Function-Calling | 4/23/2025 |
| LMSYS-Chat-1M | 4/23/2025 |
| Open Textbook Library - CC BY-SA & GNU subset and OpenStax - CC BY-SA subset | 4/23/2025 |
| Advanced Reasoning Benchmark, tigerbot-kaggle-leetcodesolutions-en-2k, PRM800K, and SciBench | 4/23/2025 |
| FineWeb-2 | 4/23/2025 |
| Court Listener | Legacy Download |
| peS2o | Legacy Download |
| OpenWebMath | Legacy Download |
| BioRxiv | Legacy Download |
| PMC Open Access Subset | Legacy Download |
| OpenWebText2 | Legacy Download |
| Stack Exchange Data Dump | Legacy Download |
| PubMed Abstracts | Legacy Download |
| NIH ExPorter | Legacy Download |
| arXiv | Legacy Download |
| BigScience Workshop Datasets | Legacy Download |
| Reddit Dataset | Legacy Download |
| SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR) | Legacy Download |
| Advanced Mathematical Problem Solving | Legacy Download |
| MathPile | Legacy Download |
| NuminaMath CoT | Legacy Download |
| PMC Article | Legacy Download |
| FLAN | Legacy Download |
| Advanced Reasoning Benchmark | Legacy Download |
| SciBench | Legacy Download |
| WikiTableQuestions | Legacy Download |
| FinQA | Legacy Download |
| Riddles | Legacy Download |
| Problems in Elementary Mathematics for Home Study | Legacy Download |
| MedMCQA | Legacy Download |
| Cosmos QA | Legacy Download |
| MCTest | Legacy Download |
| AI2's Reasoning Challenge | Legacy Download |
| OpenBookQA | Legacy Download |
| MMLU Auxiliary Train | Legacy Download |
| social-chemestry-101 | Legacy Download |
| Moral Stories | Legacy Download |
| The Common Pile v0.1 | Legacy Download |
| FineMath | Legacy Download |
| MegaMath | Legacy Download |
| MultiverseMathHard | 10/2/2025 |
| SWE-Gym | 10/2/2025 |
| WorkBench | 10/2/2025 |
| WildChat-1M | 10/2/2025 |
| OpenCodeReasoning-2 | 10/2/2025 |
| HelpSteer3 | 10/2/2025 |
| opc-sft-stage2 | 10/2/2025 |
| Big-Math-RL-Verified | 10/2/2025 |
| NuminaMath CoT | 10/2/2025 |
| MetaMathQA | 10/2/2025 |
| simple-arithmetic-problems | 10/2/2025 |
| arithmetic | 10/2/2025 |
| Skywork-OR1-RL-Data | 10/2/2025 |
| News Commentary | 10/2/2025 |
| FastChat | 10/2/2025 |
| Essential-Web | 10/2/2025 |
| finepdfs | 10/2/2025 |
| HotpotQA | 10/2/2025 |
| SQuAD2.0 | 10/2/2025 |
| NLTK Words Lists | 10/2/2025 |
Private Non-publicly Accessible Datasets of Third Parties
| Dataset |
|---|
| Global Regulation |
| TAUS Translation Memory |
| Scale HLE |
| HackerRank Coding |
Private Non-publicly Accessible Datasets by NVIDIA
| Dataset |
|---|
| Simple Minesweeper |
| Simple Sudoku |
| Multitool Typewriter Hard |
| Machine Translation of News Commentary and TAUS Translation Memory |
| Machine Translation of STEM data using Qwen2.5-14B-Instruct |
Crawled and Scraped from Online Sources by NVIDIA
The English Common Crawl data was downloaded from the Common Crawl Foundation (see their FAQ for details on their crawling) and includes the snapshots CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the Nemotron-CC paper. Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, and CC-MAIN-2025-18. The fifteen languages were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied only heuristic filtering, similar to what we did for lower-quality English data in the Nemotron-CC pipeline, but selectively removed some filters for languages where they did not work well. Deduplication was done in the same way as for Nemotron-CC.

The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collected raw source code and subsequently removed any code whose license is not in our permissive-license set (for additional details, refer to the technical report).

| Dataset | Modality | Dataset Size | Collection Period | Collecting Organisation |
|---|---|---|---|---|
| English Common Crawl | Text | 3.36T | 4/8/2025 | NVIDIA Advanced Deep Learning Research |
| English Common Crawl 1.1 | Text | Not disclosed | 10/2/2025 | NVIDIA Advanced Deep Learning Research |
| Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | NVIDIA Advanced Deep Learning Research |
| GitHub Crawl | Text | 747.4B | 4/29/2025 | NVIDIA Advanced Deep Learning Research |
NVIDIA-Sourced Synthetic Datasets
| Dataset | Modality | Dataset Size | Seed Dataset | Model(s) used for generation |
|---|---|---|---|---|
| Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 40B | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10 | DeepSeek-R1 |
| Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | social-chemestry-101; Moral Stories | Mixtral-8x22B-v0.1 |
| Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | OpenStax - CC BY-SA subset | DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B |
| Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | OpenStax - CC BY-SA subset | DeepSeek-V3; Mixtral-8x22B-v0.1; Qwen2.5-72B |
| Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | OpenStax - CC BY-SA subset; GSM8K; Open Textbook Library - CC BY-SA & GNU subset | DeepSeek-R1, DeepSeek-V3; DeepSeek-V3-0324; Qwen2.5-72B |
| Nemotron-PrismMath | Text | 4.6B | Big-Math-RL-Verified; OpenR1-Math-220k | Qwen2.5-0.5B-instruct, Qwen2.5-72B-Instruct; DeepSeek-R1-Distill-Qwen-32B |
| Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen2.5-72B-Instruct |
| Refreshed Nemotron-MIND from phi-4 | Text | 73B | Common Crawl | phi-4 |
| Nemotron-CC-Math-4plus | Text | 52.3B | Common Crawl | phi-4 |
| Nemotron-CC-Math-3 | Text | 80.9B | Common Crawl | phi-4 |
| Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | AQUA-RAT; LogiQA; AR-LSAT | DeepSeek-V3; DeepSeek-V3-0324 |
| Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | AQUA-RAT; LogiQA; AR-LSAT | Qwen3-30B-A3B |
| Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | - | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10; GSM8K; PRM800K | Qwen2.5-32B-Instruct; Qwen2.5-Math-72B; Qwen2.5-Math-7B; Qwen2.5-72B-Instruct |
| Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | MMLU Auxiliary Train | DeepSeek-R1 |
| Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | - | arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen2.5-72B-Instruct |
| Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 415.8B | Common Crawl | Qwen3-30B-A3B; Mistral-NeMo-12B-Instruct |
| Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | - | Common Crawl | Qwen3-30B-A3B |
| Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | - | Wikimedia | Qwen3-30B-A3B |
| Synthetic Math Data from Wikimedia from Nemotron-4-340B-Instruct | Text | - | - | Nemotron-4-340B-Instruct |
| Synthetic Common Crawl Code from phi-4 | Text | 427.9B | Common Crawl | phi-4 |
| Synthetic Scientific Coding from Qwen3-235B-A22B | Text | 1.2B | Wikimedia | Qwen3-235B-A22B |
| Tool Calling Data | Text | 26.2B | - | Qwen3-235B-A22B-2507; gpt-oss-120b |
| Synthetic Essential-Web from QwQ-32B | Text | 28.1B | Essential-Web | QwQ-32B |
| Translated Synthetic Crawl | Text | 389.9B | Common Crawl | Qwen3-30B-A3B |
| Translated Synthetic Wikipedia | Text | 7.9B | Wikimedia | Qwen3-30B-A3B |
| Synthetic Art of Problem Solving from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10 | gpt-oss-120b; Qwen2.5-32B-Instruct |
| Synthetic Stack Exchange from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Stack Exchange | gpt-oss-120b; Qwen2.5-32B-Instruct |
| Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | Undisclosed | OpenCodeReasoning | DeepSeek-R1-0528 |
| Synthetic HackerRank Coding from DeepSeek-R1-0528 | Text | Undisclosed | HackerRank Coding Dataset | DeepSeek-R1-0528 |
| Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic Art of Problem Solving and Stack Exchange from gpt-oss-120b, Qwen2.5-32B-Instruct, and Goedel-Prover-V2-32B | Text | Undisclosed | Art of Problem Solving; American Mathematics Competitions 8; American Mathematics Competitions 10; Stack Exchange | gpt-oss-120b; Qwen2.5-32B-Instruct; Goedel-Prover-V2-32B |
| Synthetic Multilingual Science and Code data from DeepSeek-R1, DeepSeek-R1-0528, Qwen2.5-32B-Instruct, and Qwen3-235B-A22B, translated with Qwen2.5-32B-Instruct and Qwen2.5-14B-Instruct | Text | Undisclosed | Stack Exchange; SCP-116K; LIMO; TACO; Code Contest; Codeforces | DeepSeek-R1; DeepSeek-R1-0528; Qwen2.5-32B-Instruct; Qwen3-235B-A22B |
| Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b and Mixtral-8x7B-v0.1 | Text | Undisclosed | Nemotron Content Safety Dataset V2; Gretel Synthetic Safety Alignment Dataset; RedTeam-2K; Malicious Tasks; Nemotron-Personas-USA | DeepSeek-R1-0528; gpt-oss-120b; Mixtral-8x7B-v0.1 |
| Synthetic STEM from Qwen3-235B-A22B-Instruct-2507 and gpt-oss-120b | Text | Undisclosed | arXiv; National Institutes of Health ExPorter; BioRxiv; PMC Article; USPTO Backgrounds; peS2o; Global Regulation; CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen3-235B-A22B-Instruct-2507; gpt-oss-120b |
| Synthetic KernelBook from DeepSeek-R1-0528 | Text | Undisclosed | KernelBook | DeepSeek-R1-0528 |
| Synthetic Tool Calling from Qwen3-235B-A22B-Thinking-2507 and Qwen3-Next-80B-A3B-Thinking | Text | Undisclosed | ToolBench; glaive-function-calling-v2; APIGen Function-Calling; Nemotron-Personas-USA | Qwen3-235B-A22B-Thinking-2507; Qwen3-Next-80B-A3B-Thinking |
| Synthetic Chat from gpt-oss-120b, Mixtral-8x22B-Instruct-v0.1, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | C4; LMSYS-Chat-1M; ShareGPT; GSM8K; PRM800K; FinQA; WikiTableQuestions; Riddles; glaive-function-calling-v2; SciBench; tigerbot-kaggle-leetcodesolutions-en-2k; OpenBookQA; Advanced Reasoning Benchmark; Software Heritage; Khan Academy Math Keywords; WildChat-1M; Nemotron-Personas-USA | gpt-oss-120b; Mixtral-8x22B-Instruct-v0.1; Qwen3-235B-A22B-Instruct-2507; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Long Context from Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | CORE; PG-19; DOAB CC BY & CC BY-SA subset; NDLTD | Qwen3-235B-A22B-Instruct-2507 |
| Synthetic Tool Use Interactive Agent from gpt-oss-120b, DeepSeek-R1-0528, Qwen3-32B, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | NVIDIA Internal | gpt-oss-120b; DeepSeek-R1-0528; Qwen3-32B; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic STEM from Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | ICHO-IPH0; Physics Big; Scale HLE; OpenMathReasoning; OpenCodeReasoning | Qwen3-235B-A22B-Thinking-2507 |
| Synthetic DocFinQA and SWE-smith from Qwen3-Coder-480B-A35B-Instruct and Kimi-K2-Thinking | Text | Undisclosed | DocFinQA; SWE-smith | Qwen3-Coder-480B-A35B-Instruct; Kimi-K2-Thinking |
| Synthetic Math from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | - | gpt-oss-120b; Qwen2.5-32B-Instruct |
| Synthetic Essential-Web from gpt-oss-120b | Text | Undisclosed | Essential-Web | gpt-oss-120b |
| Synthetic Scale HLE from gpt-oss-120b | Text | Undisclosed | Scale HLE | gpt-oss-120b |
| Synthetic CDQuestions from gpt-oss-120b | Text | Undisclosed | CDQuestions | gpt-oss-120b |
| Synthetic Stack Exchange from gpt-oss-120b | Text | Undisclosed | Stack Exchange | gpt-oss-120b |
| Synthetic GPQA from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | Stack Exchange | gpt-oss-120b; Qwen2.5-32B-Instruct |
| Synthetic Vedantu from gpt-oss-120b | Text | Undisclosed | Vedantu | gpt-oss-120b |
| Synthetic SWE-Gym and R2E-Gym-Subset from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym; R2E-Gym-Subset | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic SWE-Gym from Qwen3-Coder-480B-A35B-Instruct | Text | Undisclosed | SWE-Gym | Qwen3-Coder-480B-A35B-Instruct |
| Synthetic SWE-Gym and R2E-Gym-Subset from DeepSeek-R1-0528 | Text | Undisclosed | SWE-Gym; R2E-Gym-Subset | DeepSeek-R1-0528 |
| Synthetic HelpSteer, LMSYS-Chat-1M, and Nemotron-Personas-USA from gpt-oss-120b, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | HelpSteer2; HelpSteer3; LMSYS-Chat-1M; Nemotron-Personas-USA | gpt-oss-120b; Qwen3-235B-A22B-Instruct-2507; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Structured Outputs from Qwen3-30B-A3B-Instruct-2507, Qwen3-30B-A3B-Thinking-2507, Qwen3-235B-A22B-Instruct-2507, and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | - | Qwen3-30B-A3B-Instruct-2507; Qwen3-30B-A3B-Thinking-2507; Qwen3-235B-A22B-Instruct-2507; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Search STEM MCQ from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen3-235B-A22B; DeepSeek-R1-0528 |
| Synthetic Search STEM OPENQ from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic OpenSTEM from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen2.5-32B-Instruct; DeepSeek-R1-0528 |
| Synthetic MCQ from Qwen2.5-32B-Instruct and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen2.5-32B-Instruct; DeepSeek-R1-0528 |
| Synthetic MCQ10 from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic MCQ4 from Qwen3-235B-A22B, DeepSeek-R1-0528, and Qwen3-235B-A22B-Instruct-2507 | Text | Undisclosed | - | Qwen3-235B-A22B; DeepSeek-R1-0528; Qwen3-235B-A22B-Instruct-2507 |
| Synthetic OpenMathReasoning from gpt-oss-120b and Qwen2.5-32B-Instruct | Text | Undisclosed | OpenMathReasoning | gpt-oss-120b; Qwen2.5-32B-Instruct |
| Synthetic Offline Search MCQA HLE from DeepSeek-R1-0528 | Text | Undisclosed | - | DeepSeek-R1-0528 |
| Synthetic Offline Search MCQA GPQA from Qwen3-235B-A22B and DeepSeek-R1-0528 | Text | Undisclosed | - | Qwen3-235B-A22B; DeepSeek-R1-0528 |
| Synthetic Human Preference from QwQ-32B, Qwen3-30B-A3B, Qwen3-235B-A22B, Qwen3-235B-A22B-Instruct-2507, Mistral-Small-3.1-24B-Instruct-2503, Mistral-Small-3.2-24B-Instruct-2506, MiniMax-M1-80k, MiniMax-M1-40k, Kimi-K2-Instruct, DeepSeek-V3-0324, DeepSeek-R1-0528 | Text | Undisclosed | — | QwQ-32B; Qwen3-30B-A3B; Qwen3-235B-A22B; Qwen3-235B-A22B-Instruct-2507; Mistral-Small-3.1-24B-Instruct-2503; Mistral-Small-3.2-24B-Instruct-2506; MiniMax-M1-80k; MiniMax-M1-40k; Kimi-K2-Instruct; DeepSeek-V3-0324; DeepSeek-R1-0528 |
| Synthetic WildChat-1M and arena-human-preference-140k from DeepSeek-R1, gemma-2-2b-it, gemma-3-27b-it, gpt-oss-20b, gpt-oss-120b, Mistral-7B-Instruct-v0.3, Mixtral-8x22B-Instruct-v0.1, Nemotron-4-340B-Instruct, NVIDIA-Nemotron-Nano-9B-v2, Phi-4-mini-instruct, Phi-3-small-8k-instruct, Phi-3-medium-4k-instruct, Qwen3-235B-A22B, QwQ-32B | Text | Undisclosed | WildChat-1M; arena-human-preference-140k | DeepSeek-R1; gemma-2-2b-it; gemma-3-27b-it; gpt-oss-20b; gpt-oss-120b; Mistral-7B-Instruct-v0.3; Mixtral-8x22B-Instruct-v0.1; Nemotron-4-340B-Instruct; NVIDIA-Nemotron-Nano-9B-v2; Phi-4-mini-instruct; Phi-3-small-8k-instruct; Phi-3-medium-4k-instruct; Qwen3-235B-A22B; QwQ-32B |
| Synthetic Safety from DeepSeek-R1-0528, gpt-oss-120b, DeepSeek-R1-Distill-Qwen-7B, and Mixtral-8x7B-v0.1 | Text | Undisclosed | Nemotron Content Safety Dataset V2; Gretel Synthetic Safety Alignment Dataset; RedTeam-2K; Malicious Tasks | DeepSeek-R1-0528; gpt-oss-120b; DeepSeek-R1-Distill-Qwen-7B; Qwen3-30B-A3B-Thinking-2507; Qwen3-235B-A22B-Instruct-2507; Mixtral-8x7B-v0.1 |
| Synthetic Code from Qwen3-32B | Text | Undisclosed | English Common Crawl; English Common Crawl 1.1 | Qwen3-32B |
| Synthetic OpenCodeReasoning from DeepSeek-R1 | Text | Undisclosed | OpenCodeReasoning | DeepSeek-R1 |
| Synthetic LIMO from DeepSeek-R1-0528 | Text | Undisclosed | LIMO | DeepSeek-R1-0528 |
| Synthetic SCP from DeepSeek-R1-0528 | Text | Undisclosed | SCP-116K | DeepSeek-R1-0528 |
| Synthetic Stack Exchange from DeepSeek-R1-0528 | Text | Undisclosed | Stack Exchange | DeepSeek-R1-0528 |
| Synthetic Common Crawl from Qwen3-30B-A3B | Text | Undisclosed | Common Crawl | Qwen3-30B-A3B |
| Synthetic Wikipedia from Qwen3-30B-A3B | Text | Undisclosed | Wikimedia | Qwen3-30B-A3B |
| Synthetic Essential-Web from Qwen3-30B-A3B and Qwen3-235B-A22B-Thinking-2507 | Text | Undisclosed | Essential-Web | Qwen3-30B-A3B; Qwen3-235B-A22B-Thinking-2507 |
| Synthetic Textbook Math from Qwen3-30B-A3B, Qwen3-235B-A22B, phi-4 | Text | Undisclosed | Common Crawl; FineMath | Qwen3-30B-A3B; Qwen3-235B-A22B; phi-4 |
| Synthetic Math and Code from DeepSeek-R1 and DeepSeek-R1-0528 | Text | Undisclosed | Magicoder-Evol-Instruct-110K; opc-sft-stage2; TACO; OpenCodeReasoning; OpenMathReasoning; NuminaMath CoT | DeepSeek-R1; DeepSeek-R1-0528 |
| Synthetic Nemotron-Personas-USA from gpt-oss-120b and Qwen3-8B | Text | Undisclosed | Nemotron-Personas-USA | gpt-oss-120b; Qwen3-8B |
Training Dataset Token Counts
| Dataset | # of Tokens in Nemotron Nano 2 | # of Tokens in Nemotron Nano 3 |
|---|---|---|
| English Common Crawl | 3,360,110,334,818 | 3,456,523,212,210 |
| English Synthetic CC | 1,949,464,641,123 | 4,340,740,677,920 |
| Crawl++ | 360,389,153,262 | 360,389,153,262 |
| Math | 124,606,230,663 | 154,217,502,165 |
| Synthetic Math | 73,007,767,155 | 73,007,767,155 |
| Code | 747,409,228,724 | 1,043,856,922,136 |
| Synthetic Code | 175,067,553,293 | 453,117,917,176 |
| Common Crawl Code | 0 | 263,072,374,097 |
| English Wiki | 17,349,266,926 | 17,349,266,926 |
| Synthetic Wiki | 0 | 7,850,648,552 |
| Books | 0 | 0 |
| Papers | 191,586,493,365 | 191,586,493,365 |
| PDF-to-text | 141,096,578,533 | 141,096,578,533 |
| Code SFT | 60,025,726,817 | 102,863,752,325 |
| STEM SFT | 272,680,426,295 | 359,826,214,274 |
| General SFT | 6,057,478,645 | 6,057,478,645 |
| Tool-Calling SFT | 0 | 26,244,716,867 |
| Multilingual | 2,172,261,909,350 | 1,743,892,490,859 |
| Synthetic multilingual | 997,710,364,950 | 595,140,661,135 |
| Total | 10,648,823,153,919 | 13,336,833,827,602 |
Multilingual data token counts by language:

| Language | Total Tokens |
|---|---|
| Arabic | 118,056,362,726 |
| Danish | 117,747,321,618 |
| German | 146,613,691,781 |
| Spanish | 469,156,575,409 |
| French | 139,982,002,289 |
| Italian | 298,858,370,174 |
| Japanese | 682,755,693,336 |
| Korean | 127,099,747,538 |
| Dutch | 89,041,592,681 |
| Polish | 105,356,493,147 |
| Portuguese | 243,249,275,089 |
| Russian | 185,314,014,057 |
| Swedish | 74,954,953,299 |
| Thai | 160,778,944,467 |
| Chinese | 211,007,236,689 |
Code token counts by programming language:

| Programming Language | Tokens |
|---|---|
| Assembly | 750,628,764 |
| C | 42,657,300,868 |
| C# | 56,153,329,307 |
| C++ | 67,773,701,658 |
| CommonLisp | 263,234,672 |
| CSS | 38,848,760,035 |
| Cuda | 400,222,993 |
| Dart | 3,816,960,470 |
| Dockerfile | 474,958,084 |
| Fortran | 1,105,049,387 |
| Go | 8,332,419,480 |
| Haskell | 1,294,613,669 |
| HTML | 69,082,117,487 |
| Java | 131,440,465,822 |
| JavaScript | 75,573,420,861 |
| JSON | 15,366,881,241 |
| Julia | 621,046,949 |
| JupyterNotebook | 2,241,893,197 |
| Lua | 4,146,420,802 |
| Makefile | 12,640,010,879 |
| Markdown | 64,796,743,311 |
| Mathematica | 320,504,225 |
| OmniversePython | 26,946,093 |
| Pascal | 1,625,013,876 |
| Perl | 1,575,314,434 |
| PHP | 61,575,339,005 |
| Python | 126,916,727,384 |
| R | 19,811,381,935 |
| reStructuredText | 1,779,876,391 |
| Ruby | 6,446,962,615 |
| Rust | 4,438,640,533 |
| Scala | 3,343,959,154 |
| Shell | 18,758,779,250 |
| SQL | 23,205,633,085 |
| Swift | 5,976,714,881 |
| SystemVerilog | 233,056,185 |
| TeX | 7,347,157,527 |
| TypeScript | 15,657,838,582 |
| Verilog | 811,884,369 |
| VHDL | 648,401,444 |
| VisualBasic.NET | 1,005,680,881 |
| XML | 12,616,779,741 |
| YAML | 10,574,010,491 |
Language Distribution in Post-Training
For our post-training recipe, we focused on five main languages in addition to English: Spanish, French, Japanese, Italian, and German. These languages were represented in the form of multilingual reasoning and translation tasks. The following table shows our sample distribution across the six languages and five translation pairs; a quick tally of the multilingual share follows the table.

| Language / Translation Pair | Size |
|---|---|
| English | 16.2M |
| Italian | 0.252M |
| German | 0.252M |
| Spanish | 0.252M |
| French | 0.252M |
| Japanese | 0.252M |
| English ↔ Italian | 108k |
| English ↔ German | 108k |
| English ↔ Spanish | 108k |
| English ↔ French | 108k |
| English ↔ Japanese | 108k |
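As a quick tally of the table above (a back-of-the-envelope check, counting the five monolingual sets and five translation pairs toward the multilingual total):

$$5 \times 0.252\text{M} + 5 \times 0.108\text{M} = 1.26\text{M} + 0.54\text{M} = 1.8\text{M}$$

That is, the five non-English languages account for roughly 10% of the 18M total samples (16.2M English plus 1.8M multilingual).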
Evaluation Dataset
| Property | Value |
|---|---|
| Data Collection Method | Hybrid: Human, Synthetic |
| Labeling Method | Hybrid: Automated, Human, Synthetic |
We evaluated our model on the following benchmarks:
| Task | NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
|---|---|---|---|
| General Knowledge | | | |
| MMLU-Pro | 78.3 | 80.9 | 75.0 |
| Reasoning | | | |
| AIME25 (no tools) | 89.1 | 85.0 | 91.7 |
| AIME25 (with tools) | 99.2 | - | 98.7 |
| GPQA (no tools) | 73.0 | 73.4 | 71.5 |
| GPQA (with tools) | 75.0 | - | 74.2 |
| LiveCodeBench (v6, 2024-08 to 2025-05) | 68.3 | 66.0 | 61.0 |
| SciCode (subtask) | 33.3 | 33.0 | 34.0 |
| HLE (no tools) | 10.6 | 9.8 | 10.9 |
| HLE (with tools) | 15.5 | - | 17.3 |
| MiniF2F pass@1 | 50.0 | 5.7 | 12.1 |
| MiniF2F pass@32 | 79.9 | 16.8 | 43.0 |
| Agentic | | | |
| Terminal Bench (hard subset) | 8.5 | 5.0 | 6.0 |
| SWE-Bench (OpenHands) | 38.8 | 22.0 | 34.0 |
| TauBench V2 (Airline) | 48.0 | 58.0 | 38.0 |
| TauBench V2 (Retail) | 56.9 | 58.8 | 38.0 |
| TauBench V2 (Telecom) | 42.2 | 26.3 | 49.7 |
| TauBench V2 (Average) | 49.0 | 47.7 | 48.7 |
| BFCL v4 | 53.8 | 46.4* | - |
| Chat & Instruction Following | | | |
| IFBench (prompt) | 71.5 | 51.0 | 65.0 |
| Scale AI Multi Challenge | 38.5 | 44.8 | 33.8 |
| Arena-Hard-V2 (Hard Prompt) | 72.1 | 49.6* | 71.2* |
| Arena-Hard-V2 (Creative Writing) | 63.2 | 66.0* | 25.9* |
| Arena-Hard-V2 (Average) | 67.7 | 57.8 | 48.6 |
| Long Context | | | |
| AA-LCR | 35.9 | 59.0 | 34.0 |
| RULER-100@256k | 92.9 | 89.4 | - |
| RULER-100@512k | 91.3 | 84.0 | - |
| RULER-100@1M | 86.3 | 77.5 | - |
| Multilingual | | | |
| MMLU-ProX (avg over langs) | 59.5 | 77.6* | 69.1* |
| WMT24++ (en->xx) | 86.2 | 85.6 | 83.2 |
Model Specifications
| Property | Value |
|---|---|
| License | Custom |
| Last Updated | January 2026 |
| Input Type | Text |
| Output Type | Text |
| Provider | NVIDIA |
| Languages | 6 Languages |