Ministral 3B
Version: 1
Mistral AI
Last updated October 2024
Ministral 3B is a state-of-the-art Small Language Model (SLM) optimized for edge computing and on-device applications. Because it is designed for low-latency, compute-efficient inference, it is also a strong fit for standard GenAI applications with real-time requirements and high volume.

Number of parameters: 3.6 billion

Ministral 3B and Ministral 8B set a new frontier in knowledge, commonsense, reasoning, function-calling, and efficiency in the sub-10B category, and can be used or tuned for a variety of uses, from orchestrating agentic workflows to creating specialist task workers. Both models support up to a 128k context length (currently 32k on vLLM), and Ministral 8B has a special interleaved sliding-window attention pattern for faster, more memory-efficient inference.
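The interleaved sliding-window attention mentioned above alternates, layer by layer, between full causal attention and attention restricted to a fixed local window, which bounds KV-cache growth on the windowed layers. A minimal sketch of the masks involved, in plain Python; the even/odd interleaving pattern and the window size here are illustrative assumptions, not Ministral's published configuration:

```python
def causal_mask(seq_len):
    # Standard causal mask: token i may attend to tokens 0..i.
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

def sliding_window_mask(seq_len, window):
    # Causal mask restricted to the last `window` positions (self included).
    return [[i - window < j <= i for j in range(seq_len)] for i in range(seq_len)]

def layer_masks(num_layers, seq_len, window):
    # Illustrative interleaving: even layers use the local window, odd layers
    # use full causal attention. The real per-layer pattern is model-specific.
    return [
        sliding_window_mask(seq_len, window) if layer % 2 == 0 else causal_mask(seq_len)
        for layer in range(num_layers)
    ]
```

On the windowed layers, each token attends to at most `window` keys regardless of sequence length, which is where the memory savings come from.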

Use cases

Our most innovative customers and partners have increasingly been asking for local, privacy-first inference for critical applications such as on-device translation, internet-less smart assistants, local analytics, and autonomous robotics.

Content Filtering

Prompts and completions are passed through a default configuration of Azure AI Content Safety classification models to detect and prevent the output of harmful content. Learn more about Azure AI Content Safety. Configuration options for content filtering vary when you deploy a model for production in Azure AI; learn more.
Source: Un Ministral, des Ministraux - Introducing the world’s best edge models. We demonstrate the performance of les Ministraux across multiple tasks where they consistently outperform their peers. We re-evaluated all models with our internal framework for fair comparison.

Pretrained Models

Knowledge & Commonsense

| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA |
|---|---|---|---|---|---|
| Gemma 2 2B | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 |
| Llama 3.2 3B | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 |
| Ministral 3B | 60.9 | 42.1 | 72.7 | 64.2 | 56.7 |
| Mistral 7B | 62.4 | 42.5 | 74.2 | 67.9 | 62.5 |
| Llama 3.1 8B | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 |
| Ministral 8B | 65.0 | 48.3 | 75.3 | 71.9 | 65.5 |

Code and Math

| Model | H
Model Specifications
| Context Length | 131072 |
|---|---|
| Quality Index | 0.45 |
| License | Custom |
| Last Updated | October 2024 |
| Input Type | Text |
| Output Type | Text |
| Publisher | Mistral AI |
| Languages | 5 Languages |
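Since the model takes text in and text out through a standard chat-completions interface, a request is just a JSON body with a model ID and a message list. A minimal sketch of building such a request; the endpoint URL and the `ministral-3b-latest` model ID are assumptions here, so check your provider's catalog for the exact deployment name:

```python
import json

# Assumed chat-completions endpoint; substitute your deployment's URL.
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_chat_request(prompt, model="ministral-3b-latest", max_tokens=256):
    # OpenAI-style chat-completion body: one user message, capped output length.
    # The default model ID is a hypothetical placeholder.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Sending the request requires an API key, e.g. with the `requests` library:
#   requests.post(API_URL,
#                 headers={"Authorization": f"Bearer {api_key}"},
#                 data=json.dumps(build_chat_request("Translate 'bonjour'.")))
```

Keeping the body construction separate from the HTTP call makes it easy to swap in a different deployment (for example, an Azure AI endpoint) without changing the payload logic.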