Cosmos-Reason1 NIM Microservice
Version: 1
Description
NVIDIA Cosmos Reason is an open, customizable, 7B-parameter reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. The model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next. Cosmos Reason excels at navigating the long tail of diverse scenarios of the physical world with spatial-temporal understanding.

Cosmos Reason is post-trained with physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning. It uses chain-of-thought reasoning capabilities to understand world dynamics without human annotations. Given a video and a text prompt, the model first converts the video into tokens using a vision encoder and a special translator called a projector. These video tokens are combined with the text prompt and fed into the core model, which uses a mix of LLM modules and techniques. This enables the model to think step-by-step and provide detailed, logical responses.

Cosmos Reason can be used for robotics and physical AI applications including:

- Data curation and annotation: Enable developers to automate high-quality curation and annotation of massive, diverse training datasets.
- Robot planning and reasoning: Act as the brain for deliberate, methodical decision-making in a robot vision language action (VLA) model. Robots such as humanoids and autonomous vehicles can interpret environments, break complex commands down into tasks, and execute them using common sense, even in unfamiliar environments.
- Video analytics AI agents: Extract valuable insights and perform root-cause analysis on massive volumes of video data. These agents can be used to analyze and understand recorded or live video streams across city and industrial operations.

The model is ready for commercial use.

Model Developer: NVIDIA

NVIDIA AI Enterprise

NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade co-pilots and other generative AI applications. Easy-to-use microservices provide optimized model performance with enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their businesses on AI.

Deployment Geography:
Global

Use Case:
Robotics engineers and AI researchers developing embodied agents, such as autonomous vehicles and robotic systems, would use Physical AI to equip their machines with spatiotemporal reasoning and a fundamental physics understanding for navigation, manipulation, and decision-making tasks. Reference: Technical Paper

Release Date:
- Build.NVIDIA.com: 08/11/2025 via link
- Hugging Face: 08/01/2025 via link
Model Architecture:
Architecture Type: Multi-modal LLM consisting of a Vision Transformer (ViT) as the vision encoder and a dense Transformer model as the LLM.
Network Architecture: Qwen2.5-VL-7B-Instruct. Cosmos-Reason1-7B is post-trained from Qwen2.5-VL-7B-Instruct and follows the same model architecture.
Number of model parameters (Cosmos-Reason1-7B):
- Vision Transformer (ViT): 675.76M (675,759,104)
- Language Model (LLM): 7.07B (7,070,619,136)
- Other components (output projection layer): 545.00M (544,997,376)
Computational Load:
- Cumulative Compute: 3.2603016e+21 FLOPs
- Estimated Energy and Emissions for Model Training:
  - Total kWh: 16,658,432
  - Total Emissions (tCO2e): 5,380.674
Input:
Input Type: Text, Image, Video
Input Parameters: Text: One-Dimensional (1D); Video: Three-Dimensional (3D)
Input Format: Text, Video
Other Properties Related to Input:
- Use FPS=4 for input video to match the training setup.
- Append "Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>." to the system prompt to encourage a long chain-of-thought reasoning response.
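
The FPS recommendation above can also be applied during preprocessing. The snippet below is a minimal sketch, not part of the model card: it uses OpenCV and an illustrative `sample_frames` helper with a hypothetical local file to subsample a video to roughly 4 FPS. When the video is passed through vLLM with `"fps": 4` in the message, this sampling is handled for you.

```python
# Illustrative sketch: subsample a local video to ~4 FPS with OpenCV before prompting.
# "input.mp4" and sample_frames are hypothetical; vLLM handles sampling itself when
# the message specifies "fps": 4.
import cv2

def sample_frames(video_path: str, target_fps: float = 4.0):
    """Return frames sampled at roughly target_fps from the source video."""
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(int(round(src_fps / target_fps)), 1)  # keep every Nth frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

frames = sample_frames("input.mp4")
print(f"Sampled {len(frames)} frames at ~4 FPS")
```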
Output:
Output Type: Text
Output Parameters: One-Dimensional (1D)
Output Format: String
Other Output Properties: We recommend using 4096 or more output max tokens to avoid truncation of long chain-of-thought responses. The model recognizes timestamps added at the bottom of each frame for accurate temporal localization. Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
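
Because the model recognizes timestamps rendered at the bottom of each frame, overlaying them is a simple preprocessing step. The sketch below is illustrative only (OpenCV, with an assumed `overlay_timestamps` helper operating on frames such as those from the sampling sketch above); it is not an official preprocessing utility.

```python
# Illustrative sketch: burn an HH:MM:SS.mmm timestamp onto the bottom of each frame so
# the model can localize events in time. overlay_timestamps is a hypothetical helper.
import cv2

def overlay_timestamps(frames, fps: float = 4.0):
    """Draw each frame's timestamp along its bottom edge (modifies frames in place)."""
    for i, frame in enumerate(frames):
        t = i / fps
        label = f"{int(t // 3600):02d}:{int(t % 3600 // 60):02d}:{t % 60:06.3f}"
        h = frame.shape[0]
        cv2.putText(frame, label, (10, h - 10), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (255, 255, 255), 2, cv2.LINE_AA)
    return frames
```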
Software Integration:
Runtime Engines:
- vLLM (see the endpoint request sketch after the hardware list below)
Supported Hardware Platforms:
- NVIDIA Ampere
- NVIDIA Hopper
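
With vLLM as the runtime engine, a deployed Cosmos Reason microservice typically exposes an OpenAI-compatible chat completions endpoint. The request sketch below is an assumption rather than official NIM documentation: the base URL, served model name, and `video_url` payload shape follow common vLLM server conventions and may differ in your deployment.

```python
# Illustrative request against an assumed OpenAI-compatible endpoint (e.g. a local
# vLLM/NIM deployment at http://localhost:8000/v1); adjust the URL and model name as needed.
import base64
import requests

with open("input.mp4", "rb") as f:  # hypothetical local video file
    video_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "nvidia/cosmos-reason1-7b",  # assumed served model name
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant. Answer the question in the following "
                       "format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>.",
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Is it safe to turn right?"},
                {"type": "video_url", "video_url": {"url": f"data:video/mp4;base64,{video_b64}"}},
            ],
        },
    ],
    "max_tokens": 4096,
    "temperature": 0.6,
    "top_p": 0.95,
}

response = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=300)
print(response.json()["choices"][0]["message"]["content"])
```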
Training, Testing, and Evaluation Datasets:
05/17/2025: Please see our technical paper for detailed evaluations on physical common sense and embodied reasoning. Part of the evaluation datasets are released under Cosmos-Reason1-Benchmark. The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA. All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.

08/01/2025: We enhance the model capability with augmented training data. PLM-Video-Human and Nexar are used to enable dense temporal captioning. Describe Anything is added to enhance set-of-mark (SoM) prompting. We enrich data for intelligent transportation systems (ITS) and warehouse applications. Lastly, the Visual Critics dataset contains a collection of AI-generated videos from Cosmos-Predict2 and Wan2.1 with human annotations describing the physical correctness of AI videos.

Training Dataset:
- Data Modalities: Video, Text
- Text Training Data Size: Not specified (included in multimodal inputs)
- Video Training Data Size: Less than 10,000 Hours (~2M video-text pairs in SFT + RL datasets)
Data Collection Method by dataset:
- RoboVQA: Hybrid: Automatic/Sensors
- BridgeDataV2: Automatic/Sensors
- AgiBot: Automatic/Sensors
- RoboFail: Automatic/Sensors
- HoloAssist: Human
- AV: Automatic/Sensors
- PLM-Video-Human: Human
- Nexar: Automatic/Sensors
- Describe Anything: Human
- ITS / Warehouse: Human, Automatic
- Visual Critics: Automatic
Labeling Method by dataset:
- RoboVQA: Hybrid: Human, Automated
- BridgeDataV2: Hybrid: Human, Automated
- AgiBot: Hybrid: Human, Automated
- RoboFail: Hybrid: Human, Automated
- HoloAssist: Hybrid: Human, Automated
- AV: Hybrid: Human, Automated
- PLM-Video-Human: Human, Automated
- Nexar: Human
- Describe Anything: Human, Automated
- ITS / Warehouse: Human, Automated
- Visual Critics: Human, Automated
Evaluation Dataset:
- RoboVQA: Hybrid: Automatic/Sensors
- BridgeDataV2: Automatic/Sensors
- AgiBot: Automatic/Sensors
- RoboFail: Automatic/Sensors
- HoloAssist: Human
- AV: Automatic/Sensors
Labeling Method by dataset:
- RoboVQA: Hybrid: Human, Automated
- BridgeDataV2: Hybrid: Human, Automated
- AgiBot: Hybrid: Human, Automated
- RoboFail: Hybrid: Human, Automated
- HoloAssist: Hybrid: Human, Automated
- AV: Hybrid: Human, Automated
Evaluation Benchmark Results:
We report the model accuracy on the embodied reasoning benchmark introduced in Cosmos-Reason1. The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.

Evaluation Results
| Dataset | RoboVQA | AV | BridgeDataV2 | Agibot | HoloAssist | RoboFail | Average |
|---|---|---|---|---|---|---|---|
| Accuracy | 87.3 | 70.8 | 63.7 | 48.9 | 62.7 | 57.2 | 65.1 |
Dataset Format
Modality: Video (mp4) and Text

Dataset Quantification
05/17/2025: We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of video and text pairs is described in the table below. The AV data is currently unavailable and will be uploaded soon.

| Dataset | RoboVQA | AV | BridgeDataV2 | Agibot | HoloAssist | RoboFail | Storage Size |
|---|---|---|---|---|---|---|---|
| SFT Data | 1.14m | 24.7k | 258k | 38.9k | 273k | N/A | 300.6GB |
| RL Data | 252 | 200 | 240 | 200 | 200 | N/A | 2.6GB |
| Benchmark Data | 110 | 100 | 100 | 100 | 100 | 100 | 1.5GB |
Inference:
Test Hardware: H100
Note: We suggest using fps=4 for the input video and max_tokens=4096 to avoid truncated responses.
```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

# You can also replace MODEL_PATH with a local safetensors folder path mentioned above
MODEL_PATH = "nvidia/Cosmos-Reason1-7B"

llm = LLM(
    model=MODEL_PATH,
    limit_mm_per_prompt={"image": 10, "video": 10},
)
```
Sampling Parameters Configuration
```python
sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.05,
    max_tokens=4096,
)
```
Video Message Setup
```python
video_messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful assistant. Answer the question in the following format: "
            "<think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."
        ),
    },
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Is it safe to turn right?"},
            {
                "type": "video_url",
                # BASE_64_ENCODED_VIDEO is a placeholder for a data URL of the input video
                "video_url": {"url": BASE_64_ENCODED_VIDEO},
                "fps": 4,
            },
        ],
    },
]
```
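
The snippet above stops at the message definition. A sketch of the generation step is shown below, following the standard Qwen2.5-VL + vLLM offline-inference pattern (an assumption, not verbatim from this card); note that `qwen_vl_utils.process_vision_info` expects a resolvable video reference (for example a `"type": "video"` entry with a local path or data URL), so the `video_url` part above may need to be adapted.

```python
# Sketch of the generation step (assumption: standard Qwen2.5-VL + vLLM offline flow).
processor = AutoProcessor.from_pretrained(MODEL_PATH)
prompt = processor.apply_chat_template(
    video_messages, tokenize=False, add_generation_prompt=True
)
# process_vision_info resolves video references in the messages into model inputs.
image_inputs, video_inputs, video_kwargs = process_vision_info(
    video_messages, return_video_kwargs=True
)

mm_data = {}
if video_inputs is not None:
    mm_data["video"] = video_inputs

llm_inputs = {
    "prompt": prompt,
    "multi_modal_data": mm_data,
    "mm_processor_kwargs": video_kwargs,
}

outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```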
Key Inference Notes
- FPS Requirement: Use FPS=4 for input video to match the training setup
- System Prompt: Append "Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>." to the system prompt to encourage a long chain-of-thought reasoning response (a parsing sketch follows this list)
- Output Tokens: Recommend using 4096 or more output max tokens to avoid truncation of long chain-of-thought response
- Hardware: Model is designed and optimized to run on NVIDIA GPU-accelerated systems
- Precision: Tested with BF16 precision for inference
- Operating System: Tested on Linux (other operating systems not tested)
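
Because responses follow the `<think>`/`<answer>` format described above, downstream code usually needs only the answer span. The helper below is an illustrative sketch, not an official utility.

```python
# Illustrative helper: extract the final answer from a <think>/<answer> formatted
# response, falling back to the raw text if the tags are missing or truncated.
import re

def extract_answer(response: str) -> str:
    match = re.search(r"<answer>\s*(.*?)\s*</answer>", response, re.DOTALL)
    return match.group(1) if match else response.strip()

print(extract_answer("<think>\nThe crosswalk is clear...\n</think>\n\n<answer>\nYes\n</answer>"))
```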
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.

Model Specifications
License: Custom
Last Updated: October 2025
Input Type: Video, Image, Text
Output Type: Text
Provider: NVIDIA
Languages: 13 Languages