Mistral Small
Version: 1
Publisher: Mistral AI
Last updated: May 2024
Mistral Small is Mistral AI's most efficient Large Language Model (LLM). It can be used on any language-based task that requires high efficiency and low latency. Mistral Small is:
  • A small model optimized for low latency. Very efficient for high-volume, low-latency workloads. Mistral Small is Mistral's smallest proprietary model; it outperforms Mixtral 8x7B while delivering lower latency.
  • Specialized in RAG. Crucial information is not lost in the middle of long context windows (up to 32K tokens).
  • Strong in coding. Code generation, review, and comments. Supports all mainstream coding languages.
  • Multilingual by design. Best-in-class performance in French, German, Spanish, and Italian, in addition to English. Dozens of other languages are supported.
  • Responsible AI. Efficient guardrails baked into the model, with an additional safety layer available via the safe_mode option.
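The capabilities above are exercised through an ordinary chat-completions request. The sketch below only builds such a payload without sending it; the `safe_prompt` field is assumed to be the API-level name of the safe_mode option (as in Mistral's chat API), and the model identifier is an assumption.

```python
import json

def build_chat_payload(user_message: str, safe_mode: bool = True) -> dict:
    """Build a chat-completions payload for Mistral Small.

    The safe_prompt flag is assumed to be the API-level switch for the
    safe_mode guardrail described above; it adds a safety layer on top
    of the guardrails baked into the model.
    """
    return {
        "model": "mistral-small",  # model identifier is an assumption
        "messages": [{"role": "user", "content": user_message}],
        "safe_prompt": safe_mode,
        "max_tokens": 256,
    }

payload = build_chat_payload("Summarize this clause in French.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to your deployed endpoint with your key; nothing here is specific to one hosting platform.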

Resources

For full details of this model, please read the release blog post.

Content Filtering

Prompts and completions are passed through a default configuration of Azure AI Content Safety classification models to detect and prevent the output of harmful content. Learn more about Azure AI Content Safety. Configuration options for content filtering vary when you deploy a model for production in Azure AI; learn more.
Model Specifications
Context Length: 32768
License: Custom
Training Data: Mar 2023
Last Updated: May 2024
Input Type: Text
Output Type: Text
Publisher: Mistral AI
Languages: 5 languages
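The 32768-token context length implies a simple budget check before sending long RAG prompts. A minimal sketch, using a rough 4-characters-per-token heuristic as an assumption (accurate counts require the model's actual tokenizer):

```python
CONTEXT_LENGTH = 32768  # from the spec table above

def fits_in_context(prompt: str, completion_budget: int = 512,
                    chars_per_token: float = 4.0) -> bool:
    """Roughly check whether a prompt, plus tokens reserved for the
    completion, fits in Mistral Small's 32K-token context window.

    chars_per_token is a crude heuristic, not the real tokenizer.
    """
    est_prompt_tokens = len(prompt) / chars_per_token
    return est_prompt_tokens + completion_budget <= CONTEXT_LENGTH

print(fits_in_context("hello world"))   # short prompt: fits
print(fits_in_context("x" * 200_000))   # ~50K estimated tokens: does not fit
```

In practice you would truncate or chunk retrieved documents until this check passes, leaving headroom for the completion.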