Kimi K2.5
Version: 1
Fireworks on Foundry
Models available for use with Fireworks on Foundry deliver optimized, best-in-class performance on the Fireworks Inference Cloud. Fireworks on Foundry is a Preview subject to Azure Preview terms and the following supplemental terms: when you use Fireworks on Foundry, data is shared between Microsoft and Fireworks AI, and different compliance and data handling rules apply. See the documentation for details. Customers are responsible for evaluating whether data sharing between Microsoft and Fireworks is appropriate for their organization's compliance requirements.
Key capabilities
About this model
Kimi K2.5 is Moonshot AI's flagship agentic model and a new state-of-the-art open model. It is an open-source, natively multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It unifies vision and text, thinking and non-thinking modes, and single-agent and multi-agent execution in one model. Users can control the model's reasoning behavior and inspect its reasoning history for greater transparency.
Key model capabilities
- Native multimodal: unifies vision and text understanding
- Dual reasoning modes: instant (non-thinking) and thinking (see the request sketch after this list)
- Agent swarm: supports single-agent and multi-agent execution
- 256K token context window
- Function calling and tool use
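For illustration, a minimal request sketch, assuming the deployment exposes an OpenAI-compatible chat completions endpoint (as Fireworks' own inference API does). The base URL, the model identifier, and the `reasoning_effort` field used to toggle thinking mode are assumptions, not confirmed values for this deployment; check the Fireworks on Foundry documentation for the exact values.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/kimi-k2p5",  # hypothetical model id
    messages=[
        {"role": "user", "content": "Plan a migration from REST to gRPC."}
    ],
    # Hypothetical vendor-specific switch between thinking and instant
    # modes, passed via extra_body because it is not a standard field.
    extra_body={"reasoning_effort": "high"},
)

print(response.choices[0].message.content)
```

On OpenAI-compatible hosts, function calling typically rides the standard `tools` parameter of the same `chat.completions.create` call.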
Use cases
See Responsible AI for additional considerations for responsible use.
Key use cases
- Native Multimodality
- Agent Swarm
- Coding with Vision (see the vision sketch after this list)
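As a sketch of the coding-with-vision use case, the request below uses the OpenAI-style `image_url` content part. Note that the specs further down this page list Text as the only input type, so verify image support on your deployment before relying on this; the endpoint and model id carry the same assumptions as the earlier sketch.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.fireworks.ai/inference/v1",  # assumed endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="accounts/fireworks/models/kimi-k2p5",  # hypothetical model id
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Write the HTML/CSS that reproduces this mockup."},
            # image_url is the OpenAI-style convention for vision input;
            # confirm this deployment accepts it before shipping.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/mockup.png"}},
        ],
    }],
)

print(response.choices[0].message.content)
```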
Out of scope use cases
The provider has not supplied this information.
Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.
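Because billing scales with tokens, a back-of-the-envelope estimate helps budget a workload. The per-million-token rates below are placeholders, not published prices; substitute the figures from the pricing page.

```python
# Placeholder rates -- replace with the actual figures from the pricing page.
INPUT_USD_PER_M = 0.60   # hypothetical USD per 1M input tokens
OUTPUT_USD_PER_M = 2.50  # hypothetical USD per 1M output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Token-based cost: (tokens / 1M) * per-million rate, summed per side."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_M \
        + (output_tokens / 1_000_000) * OUTPUT_USD_PER_M

# Example: a 20K-token prompt producing a 2K-token answer.
print(f"${estimate_cost_usd(20_000, 2_000):.4f}")  # $0.0170 at these rates
```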
Technical specs
The provider has not supplied this information.
Training cut-off date
The provider has not supplied this information.
Training time
The provider has not supplied this information.
Input formats
Text
Output formats
Text
Supported languages
English
Sample JSON response
The provider has not supplied this information.
Model architecture
Kimi K2.5 is built on the Kimi K2 base, a Mixture-of-Experts (MoE) language model with 1 trillion total parameters and 32 billion activated parameters per forward pass, continually pretrained on approximately 15 trillion mixed visual and text tokens. A routing sketch follows the table below.

| Property | Value |
|---|---|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 1T |
| Activated Parameters | 32B |
| Number of Layers (Dense layer included) | 61 |
| Number of Dense Layers | 1 |
| Attention Hidden Dimension | 7168 |
| MoE Hidden Dimension (per Expert) | 2048 |
| Number of Attention Heads | 64 |
| Number of Experts | 384 |
| Selected Experts per Token | 8 |
| Number of Shared Experts | 1 |
| Vocabulary Size | 160K |
| Context Length | 256K |
| Attention Mechanism | MLA |
| Activation Function | SwiGLU |
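To make the table concrete, here is an illustrative top-k MoE forward pass: a router scores all experts, each token is dispatched to its top 8 with softmax weights, and one shared expert runs on every token. This is a generic sketch of the technique at toy dimensions, not Kimi K2.5's actual implementation.

```python
import torch

# Toy sizes so the sketch runs anywhere; per the table, the real model uses
# 384 experts, attention hidden dim 7168, and expert hidden dim 2048.
NUM_EXPERTS, TOP_K, HIDDEN, EXPERT_HIDDEN = 16, 8, 64, 32

def make_expert() -> torch.nn.Module:
    # SiLU stands in for the model's SwiGLU activation to keep this short.
    return torch.nn.Sequential(
        torch.nn.Linear(HIDDEN, EXPERT_HIDDEN),
        torch.nn.SiLU(),
        torch.nn.Linear(EXPERT_HIDDEN, HIDDEN),
    )

router = torch.nn.Linear(HIDDEN, NUM_EXPERTS, bias=False)
experts = torch.nn.ModuleList(make_expert() for _ in range(NUM_EXPERTS))
shared_expert = make_expert()  # always active, mirroring the 1 shared expert

@torch.no_grad()
def moe_forward(x: torch.Tensor) -> torch.Tensor:
    """x: (tokens, HIDDEN). Each token runs TOP_K routed experts + the shared one."""
    weights, idx = torch.topk(router(x).softmax(dim=-1), TOP_K, dim=-1)
    rows = []
    for t in range(x.size(0)):  # naive per-token loop, written for clarity
        routed = sum(weights[t, j] * experts[int(idx[t, j])](x[t])
                     for j in range(TOP_K))
        rows.append(shared_expert(x[t]) + routed)
    return torch.stack(rows)

print(moe_forward(torch.randn(4, HIDDEN)).shape)  # torch.Size([4, 64])
```

Only the selected experts' weights touch each token, which is how a 1T-total-parameter model activates just 32B parameters per forward pass.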
Long context
Context Length: 256K
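A small budgeting sketch for the 256K window (262,144 tokens, per the specs below). The 4-characters-per-token heuristic is a rough assumption; use the model's actual tokenizer for anything precise.

```python
CONTEXT_LIMIT = 262_144  # 256K tokens, from the specs on this page

def fits_in_context(prompt: str, reserve_for_output: int = 4_096) -> bool:
    """Rough check: ~4 chars/token heuristic, leaving room for the reply."""
    estimated_prompt_tokens = len(prompt) // 4
    return estimated_prompt_tokens + reserve_for_output <= CONTEXT_LIMIT

print(fits_in_context("x" * 1_100_000))  # ~275K tokens + reserve -> False
```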
Optimizing model performance
The provider has not supplied this information.
Additional assets
The provider has not supplied this information.
Training disclosure
Training, testing and validation
The provider has not supplied this information.
Distribution
Distribution channels
The provider has not supplied this information.
More information
The provider has not supplied this information.
Model Specifications
| Property | Value |
|---|---|
| Context Length | 262,144 |
| License | Other |
| Last Updated | April 2026 |
| Input Type | Text |
| Output Type | Text |
| Provider | Fireworks |
| Languages | 1 Language |