Grok Code Fast 1
Version: 1
xAI
Last updated: December 2025
Grok Code Fast 1 is a fast, economical AI model for agentic coding. It was built from scratch on a new architecture, pretrained on programming-rich data, and fine-tuned for real-world coding tasks such as bug fixes and project setup.
Coding
Agents
Low latency

Direct from Azure models

Direct from Azure models are a select portfolio curated for their market-differentiated capabilities:
  • Secure and managed by Microsoft: Purchase and manage models directly through Azure with a single license, consistent support, and no third-party dependencies, backed by Azure's enterprise-grade infrastructure.
  • Streamlined operations: Benefit from unified billing, governance, and seamless provisioned throughput unit (PTU) portability across models hosted on Azure, all part of Microsoft Foundry.
  • Future-ready flexibility: Access the latest models as they become available, and easily test, deploy, or switch between them within Microsoft Foundry, reducing integration effort.
  • Cost control and optimization: Scale on demand with pay-as-you-go flexibility or reserve PTUs for predictable performance and savings.
Learn more about Direct from Azure models.

Key capabilities

About this model

Post-training focused on aligning the model for practical, real-world coding tasks.

Responsible AI considerations

Safety techniques

Post-training alignment used high-quality datasets reflecting real-world coding tasks, such as pull requests and bug fixes, to enhance practical utility. Safety alignment targeted reliability and usability, with human evaluations by experienced developers to refine behavior in agentic workflows. Techniques included supervised fine-tuning and reinforcement learning to ensure accurate code generation and tool use, with a focus on minimizing errors in iterative coding scenarios. Safety objectives included preventing disallowed content (e.g., harmful or copyrighted code) and ensuring compliance with developer workflows.

The model may produce errors in complex coding scenarios, so developer verification is required for critical applications. It is optimized for English and major programming languages and may underperform in niche or non-English contexts. Risks include generating incomplete or incorrect code, mitigated by encouraging iterative prompting and developer review.

Quality and performance evaluations

Source: xAI

Grok Code Fast 1 scored 70.8% on SWE-Bench Verified (internal harness), competitive with smaller models like GPT-5-nano but trailing larger models in complex reasoning. It excels in coding accuracy (93.0%) and instruction following (75.0%), with 100% reliability across seven benchmarks. Human evaluations prioritized developer experience in agentic workflows, complementing benchmarks like SWE-Bench. Limitations include reduced accuracy in complex tasks, mitigated by encouraging iterative prompting. The model's speed (up to 160 tokens/second) outperforms rivals like Claude Sonnet in coding efficiency.
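The quoted peak throughput translates directly into wall-clock estimates for long completions. A minimal sketch of that arithmetic (illustrative only; real throughput varies with load, prompt length, and deployment):

```python
def generation_seconds(output_tokens: int, tokens_per_second: float = 160.0) -> float:
    """Estimate wall-clock time to stream a completion of the given length
    at the advertised peak rate of 160 tokens/second."""
    return output_tokens / tokens_per_second

# A 4,000-token completion at peak speed takes about 25 seconds.
estimate = generation_seconds(4000)
```

In practice, sustained rates will be lower than the peak, so treat such estimates as a lower bound on latency.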

Benchmarking methodology

Source: xAI

Benchmarking used SWE-Bench Verified with standardized prompts for fair comparison. Human evaluations supplemented quantitative metrics, focusing on real-world coding tasks. No prompt adaptations were allowed, to ensure consistency. Further details on the methodology are not publicly available.
Model Specifications

Context Length: 256,000
Quality Index: 0.82
License: Custom
Last Updated: December 2025
Input Type: Text
Output Type: Text
Provider: xAI
Languages: 1 language
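Since the model takes text in and returns text out through a chat interface, a typical request is a standard chat-completions body. A minimal sketch of building one (the deployment name, system prompt, and parameter values below are placeholder assumptions, not official values; substitute your own Microsoft Foundry deployment details):

```python
def build_chat_request(prompt: str, model: str = "grok-code-fast-1",
                       max_tokens: int = 1024) -> dict:
    """Build a chat-completions request body for a coding prompt."""
    return {
        "model": model,  # assumed deployment name; check your Foundry deployment
        "messages": [
            {"role": "system", "content": "You are a coding assistant."},
            {"role": "user", "content": prompt},
        ],
        "max_tokens": max_tokens,  # stays well under the 256K context window
        "temperature": 0.2,        # low temperature favors deterministic code
    }

payload = build_chat_request("Write a Python function that reverses a string.")
```

The body would then be sent to your deployment's chat-completions endpoint with your Azure credentials; the low temperature is a common choice for code generation, where deterministic output is usually preferable.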