CodeLlama-13b-hf
Version: 12
About this model
Code Llama comes in three model sizes and three variants:
- Code Llama: base models designed for general code synthesis and understanding
- Code Llama - Python: designed specifically for Python
- Code Llama - Instruct: for instruction following and safer deployment
Key model capabilities
Use cases
See Responsible AI for additional considerations for responsible use.
Key use cases
Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks; Code Llama - Python is designed specifically to handle the Python programming language; and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.
Out of scope use cases
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.
Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.
Technical specs
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
Training cut-off date
The provider has not supplied this information.
Training time
Code Llama and its variants were trained between January 2023 and July 2023.
Input formats
Models accept text input only.
Output formats
Models generate text only.
Supported languages
The provider has not supplied this information.
Sample JSON response
[
  {
    "0": "def fibonacci(n):\n    if n == 0:\n        return 0\n    elif n == 1:\n        return 1\n    else:\n        return fibonacci(n-1) + fibonacci(n-2)\n\n\ndef main():\n    n = int(input(\"Enter a number: \"))\n    print(fibonacci(n))\n\n\nif __name__ == \"__main__\":\n    main()"
  }
]
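Assuming a deployment returns the list-of-objects shape shown above, where each object's "0" key holds the generated completion, the text can be extracted with standard JSON parsing (a minimal sketch based on the sample, not an official client):

```python
import json

# Response in the shape shown above: a JSON array of objects whose
# "0" key holds the generated completion text.
raw = '[{"0": "def fibonacci(n):\\n    if n == 0:\\n        return 0"}]'

response = json.loads(raw)
completion = response[0]["0"]  # first (and here, only) candidate
print(completion)
```

The key name "0" mirrors the sample response; a different deployment or API version may use another schema, so check the actual payload before hard-coding keys.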
Model architecture
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.
Long context
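"Auto-regressive" means the model generates one token at a time, feeding each prediction back in as conditioning context for the next step. The loop below is a toy sketch of that decoding pattern; the `next_token` scorer is a stand-in for the transformer forward pass, not the real model:

```python
def next_token(context):
    # Stand-in for the transformer forward pass: a real model would score
    # the whole vocabulary given the context and pick (or sample) the most
    # likely next token. Here we just walk a fixed toy vocabulary.
    vocab = ["def", "fib", "(", "n", ")", ":", "<eos>"]
    return vocab[len(context) % len(vocab)]

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)   # condition on everything generated so far
        if tok == "<eos>":         # stop token ends generation early
            break
        tokens.append(tok)         # feed the prediction back in
    return tokens

print(generate([]))
```

The real model replaces the toy scorer with a transformer that attends over the full context window at every step, which is why generation cost grows with sequence length.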
The provider has not supplied this information.
Optimizing model performance
The provider has not supplied this information.
Additional assets
More information can be found in the paper "Code Llama: Open Foundation Models for Code" or on its arXiv page.
Training disclosure
Training, testing and validation
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the research paper for details).
Distribution
Distribution channels
The provider has not supplied this information.
More information
A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
Responsible AI considerations
Safety techniques
The provider has not supplied this information.
Safety evaluations
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
Known limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.
Acceptable use
Acceptable use policy
Out-of-Scope Uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-user-guide.
Quality and performance evaluations
Source: Meta
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.
Benchmarking methodology
Source: Meta
Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios.
Public data summary
Source: Meta
The provider has not supplied this information.
Model Specifications
License: Llama 2
Last Updated: February 2026
Provider: Meta
Languages: 1 language