CodeLlama-13b-Instruct-hf
Version: 12
About this model
Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This model is designed for general code synthesis and understanding.

Key model capabilities
- Code Llama: base models designed for general code synthesis and understanding
- Code Llama - Python: designed specifically for Python
- Code Llama - Instruct: for instruction following and safer deployment

The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
Use cases
See Responsible AI for additional considerations for responsible use.

Key use cases
Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

Out of scope use cases
- Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- Use in languages other than English.
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.

Technical specs
Code Llama is an auto-regressive language model that uses an optimized transformer architecture. Code Llama comes in three model sizes and three variants:

- Code Llama: base models designed for general code synthesis and understanding
- Code Llama - Python: designed specifically for Python
- Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.
Training cut-off date
The provider has not supplied this information.

Training time

Code Llama and its variants were trained between January 2023 and July 2023. In aggregate, training all nine Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta's sustainability program.

Input formats
Models input text only.

Output formats
Models generate text only.

Supported languages
English.

Sample JSON response
[
  {
    "0": "def fibonacci(n = 0):\n    a = 0\n    b = 1\n    if n <= 1:\n        return n\n    else:\n        c = 0\n        for i in range(n-1):\n            a = b\n            b = c\n            c = a + b\n        return c\n\n\nfibonacci(10)\n"
  }
]
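The sample above is a JSON array of objects whose values hold the generated text, keyed by string indices. Assuming that shape (the exact response schema may vary by deployment and API version), the completions can be extracted like this:

```python
import json

# Hypothetical response body in the same shape as the sample above:
# a JSON array of objects keyed by string indices.
raw = '[{"0": "def add(a, b):\\n    return a + b\\n"}]'

records = json.loads(raw)
# Collect every generated-text value, whatever its key.
completions = [text for record in records for text in record.values()]
print(completions[0])
```

Iterating over `record.values()` rather than indexing `record["0"]` keeps the extraction robust if a response carries multiple completions under different keys.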
Model architecture
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

Long context
The provider has not supplied this information.

Optimizing model performance
The provider has not supplied this information.

Additional assets
The provider has not supplied this information.

Training disclosure
Training, testing and validation
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2, but with different weights. We used custom training libraries. The training and fine-tuning of the released models were performed on Meta's Research Super Cluster.

Distribution
Distribution channels
The provider has not supplied this information.

More information
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Responsible AI considerations
Safety techniques
The provider has not supplied this information.

Safety evaluations
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Known limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

Acceptable use
Acceptable use policy
Out-of-scope uses:

- Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- Use in languages other than English.
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

Quality and performance evaluations
Source: Meta

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Benchmarking methodology
The provider has not supplied this information.

Public data summary
The provider has not supplied this information.

Model Specifications
License: Llama 2
Last Updated: February 2026
Provider: Meta
Languages: 1 (English)