Grok 4
Version: 1
Azure Direct Models
Direct from Azure models are a select portfolio curated for their market-differentiated capabilities:
- Secure and managed by Microsoft: Purchase and manage models directly through Azure with a single license, consistent support, and no third-party dependencies, backed by Azure's enterprise-grade infrastructure.
- Streamlined operations: Benefit from unified billing, governance, and seamless PTU portability across models hosted on Azure - all as part of one Azure AI Foundry platform.
- Future-ready flexibility: Access the latest models as they become available, and easily test, deploy, or switch between them within Azure AI Foundry, reducing integration effort.
- Cost control and optimization: Scale on demand with pay-as-you-go flexibility or reserve PTUs for predictable performance and savings.
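As a rough illustration of the pay-as-you-go versus reserved-PTU trade-off described above, the sketch below computes the monthly token volume at which a reservation breaks even. The rates used are placeholders for illustration only, not actual Azure prices; see the pricing details for real figures.

```python
# Hypothetical break-even sketch: at what monthly token volume does a
# reserved PTU commitment become cheaper than pay-as-you-go?
# All rates below are PLACEHOLDERS, not actual Azure prices.

def breakeven_tokens(paygo_per_1k: float, ptu_monthly: float) -> float:
    """Monthly tokens at which pay-as-you-go cost equals the PTU reservation."""
    return ptu_monthly / paygo_per_1k * 1000

def cheaper_option(monthly_tokens: float, paygo_per_1k: float, ptu_monthly: float) -> str:
    """Return which billing option is cheaper for a given monthly volume."""
    paygo_cost = monthly_tokens / 1000 * paygo_per_1k
    return "ptu" if paygo_cost > ptu_monthly else "paygo"

if __name__ == "__main__":
    rate, reservation = 0.01, 500.0  # placeholder: $0.01 per 1K tokens, $500/month PTU
    print(breakeven_tokens(rate, reservation))        # 50,000,000 tokens/month
    print(cheaper_option(80_000_000, rate, reservation))  # "ptu"
```

Below the break-even volume, pay-as-you-go avoids paying for idle capacity; above it, the reservation caps cost and keeps performance predictable.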
Key capabilities
About this model
Grok 4 is the latest reasoning model from xAI with advanced reasoning and tool-use capabilities, enabling it to achieve new state-of-the-art performance across challenging academic and industry benchmarks.
Key model capabilities
The provider has not supplied this information.
Use cases
See Responsible AI for additional considerations for responsible use.
Key use cases
The provider has not supplied this information.
Out of scope use cases
Microsoft's safety and responsible AI evaluations found Grok 4 to be less aligned than other models evaluated and offered through Azure Direct, resulting in (i) higher risks that the model will produce potentially harmful content and (ii) lower scores on safety and jailbreak benchmarks. Grok 4 may be capable of producing explicit content, and may do so with a higher propensity than other models. Customers should use both system safety messages and the Azure AI Content Safety (AACS) service to manage model behavior and comply with the Microsoft Enterprise AI Services Code of Conduct. Additionally, there may be categories of harm this model can produce that are not covered by Azure AI Content Safety. Accordingly, customers should conduct their own evaluations before deploying Grok 4 in production systems.
Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.
Technical specs
The provider has not supplied this information.
Training cut-off date
The provider has not supplied this information.
Training time
The provider has not supplied this information.
Input formats
The provider has not supplied this information.
Output formats
The provider has not supplied this information.
Supported languages
English
Sample JSON response
The provider has not supplied this information.
Model architecture
The provider has not supplied this information.
Long context
The provider has not supplied this information.
Optimizing model performance
The provider has not supplied this information.
Additional assets
Grok 4 benchmarks and system evaluations are comprehensively included in the xAI Grok 4 model card. Review the xAI model card for additional information on system evaluations, expected behavior, and safety systems.
Training disclosure
Training, testing and validation
The provider has not supplied this information.
Distribution
Distribution channels
The provider has not supplied this information.
More information
The provider has not supplied this information.
Responsible AI considerations
Safety techniques
Because models developed by xAI push the frontier of AI capabilities, xAI is committed to mitigating their risks through both evaluating model behaviors and implementing safeguards.
Safety evaluations
Microsoft's safety and responsible AI evaluations found Grok 4 to be less aligned than other models evaluated and offered through Azure Direct, resulting in (i) higher risks that the model will produce potentially harmful content and (ii) lower scores on safety and jailbreak benchmarks. Grok 4 benchmarks and system evaluations are comprehensively included in the xAI Grok 4 model card. We have also made safety benchmarks available for this model in the model card benchmark tab and the tradeoff chart.
Known limitations
Grok 4 may be capable of producing explicit content, and may do so with a higher propensity than other models. Additionally, there may be categories of harm this model can produce that are not covered by Azure AI Content Safety. Accordingly, customers should conduct their own evaluations before deploying Grok 4 in production systems.
Acceptable use
Acceptable use policy
Review the xAI model card for additional information on system evaluations, expected behavior, and safety systems. Customers should use both system safety messages and the Azure AI Content Safety (AACS) service to manage model behavior and comply with the Microsoft Enterprise AI Services Code of Conduct.
Quality and performance evaluations
Source: xAI
Grok 4 benchmarks and system evaluations are comprehensively included in the xAI Grok 4 model card. Grok 4 is the latest reasoning model from xAI with advanced reasoning and tool-use capabilities, enabling it to achieve new state-of-the-art performance across challenging academic and industry benchmarks. Microsoft's safety and responsible AI evaluations found Grok 4 to be less aligned than other models evaluated and offered through Azure Direct, resulting in (i) higher risks that the model will produce potentially harmful content and (ii) lower scores on safety and jailbreak benchmarks. We have also made safety benchmarks available for this model in the model card benchmark tab and the tradeoff chart.
Benchmarking methodology
Source: xAI
The provider has not supplied this information.
Public data summary
Source: xAI
The provider has not supplied this information.
Model Specifications
Context Length: 256,000
Quality Index: 0.91
License: Custom
Training Data: December 2024
Last Updated: November 2025
Input Type: Text, Image
Output Type: Text
Provider: xAI
Languages: 1 language (English)
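The responsible-AI guidance above recommends pairing a system safety message with the Azure AI Content Safety service. The sketch below builds an OpenAI-style chat request body with the safety message placed first; the model name and the wording of the safety message are illustrative assumptions, not values supplied by the provider.

```python
# Minimal sketch: build a chat-completions request body that leads with a
# system safety message, as recommended for Grok 4 deployments.
# The model name and safety-message text are illustrative assumptions.
import json

SAFETY_MESSAGE = (
    "You must refuse to produce explicit, hateful, or otherwise harmful "
    "content, and decline requests that violate the applicable code of conduct."
)

def build_request(user_prompt: str, model: str = "grok-4") -> dict:
    """Return an OpenAI-style chat request with the safety message first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SAFETY_MESSAGE},
            {"role": "user", "content": user_prompt},
        ],
    }

if __name__ == "__main__":
    body = build_request("Summarize the Grok 4 model card.")
    print(json.dumps(body, indent=2))
```

In production this body would be sent to the deployed endpoint, and the model's response would additionally be screened with the AACS text-analysis API before being shown to users, since system messages alone do not guarantee compliant output.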
Related Models