CodeLlama-34b-Instruct-hf
Version: 12
About this model
This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

Key model capabilities
The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

Use cases
See Responsible AI for additional considerations for responsible use.

Key use cases
Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

Out-of-scope use cases
- Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- Use in languages other than English.
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

Pricing
Pricing is based on a number of factors, including deployment type and tokens used. See pricing details here.

Technical specs
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

Training cut-off date
Code Llama and its variants were trained between January 2023 and July 2023.

Training time
In aggregate, training all nine Code Llama models required 400K GPU hours of computation on A100-80GB hardware (TDP of 350-400W). We used custom training libraries. Training and fine-tuning of the released models were performed on Meta's Research Super Cluster.

Input formats
Models input text only.

Output formats
Models generate text only.

Supported languages
The provider has not supplied this information.

Sample JSON response
[
{
"0": "def sort_integers(int_list):\n \"\"\"This function takes an a list of integers and sorts it in ascending order using build in `sorted` function which returns\n a new sorted list.\n\n Args:\n int_list (list): list of integers to be sorted.\n Returns:\n \"list\": sorted list\n\n \"\"\"\n return sorted(int_list)\nlst = [3, 7, -4, 2, 1]\nlst1 = [15, 2, 1, 8, 4, 3]\nprint(\"unsorted list\", lst)\nprint(\"sorted list\", sort_integers(lst))\nprint(\"sorted list\", sort_integers(lst1))\n"
}
]
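The sample above shows the raw generated text. Code Llama - Instruct variants are conventionally prompted with the `[INST] ... [/INST]` wrapper (and optional `<<SYS>>` system block) used by Llama 2 chat models. A minimal sketch of building such a prompt; the helper name `build_instruct_prompt` is our own, not part of any library:

```python
def build_instruct_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a request in the [INST] tags Code Llama - Instruct expects.

    An optional system prompt goes inside <<SYS>> markers, following the
    Llama 2 chat convention. Verify the exact format against the model's
    documentation for your serving stack.
    """
    if system_prompt:
        user_message = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message}"
    return f"[INST] {user_message} [/INST]"

# Build the kind of request that produced the sample response above.
prompt = build_instruct_prompt(
    "Write a Python function that sorts a list of integers in ascending order."
)
print(prompt)
```

The wrapped string is what you would send as the model's text input; the model's completion follows the closing `[/INST]` tag.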
Model architecture
Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

Long context
The provider has not supplied this information.

Optimizing model performance
The provider has not supplied this information.

Additional assets
The provider has not supplied this information.

Training disclosure
Training, testing and validation
All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2, with different weights.

Distribution
Distribution channels
The provider has not supplied this information.

More information
| Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML |
|---|---|---|---|---|
| Text generation | Text generation | cnn_dailymail | evaluate-model-text-generation.ipynb | evaluate-model-text-generation.yml |

| Inference type | Python sample (Notebook) | CLI with YAML |
|---|---|---|
| Real time | text-generation-online-endpoint.ipynb | text-generation-online-endpoint.sh |
| Batch | text-generation-batch-endpoint.ipynb | coming soon |
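The real-time path in the table above comes down to posting a JSON scoring request to a deployed online endpoint. A hedged sketch of building and sending that request; the `input_data` schema mirrors the shape commonly used by the catalog's text-generation examples, but the endpoint URL, key, and exact schema are placeholders you should verify against the linked notebook for your deployment:

```python
import json


def build_scoring_request(prompt: str, max_new_tokens: int = 256,
                          temperature: float = 0.2) -> str:
    """Build a JSON body for a text-generation scoring request.

    The exact schema depends on the deployment; check the sample
    notebook for your endpoint before relying on this shape.
    """
    body = {
        "input_data": {
            "input_string": [prompt],
            "parameters": {
                "max_new_tokens": max_new_tokens,
                "temperature": temperature,
            },
        }
    }
    return json.dumps(body)


payload = build_scoring_request("def fibonacci(n):")

SEND_REQUEST = False  # flip to True once you have a real endpoint and key
if SEND_REQUEST:
    import urllib.request
    req = urllib.request.Request(
        "https://<your-endpoint>.inference.ml.azure.com/score",  # placeholder URL
        data=payload.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <your-key>",  # placeholder key
        },
    )
    print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The batch path uses the same payload shape but writes prompts to files that the batch endpoint scores asynchronously; see the batch notebook referenced in the table.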
Responsible AI considerations
Safety techniques
The provider has not supplied this information.

Safety evaluations
See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Known limitations
Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Code Llama's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model.

Acceptable use
Acceptable use policy
Out-of-scope uses:

- Use in any manner that violates applicable laws or regulations (including trade compliance laws).
- Use in languages other than English.
- Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

Quality and performance evaluations
Source: Meta. See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

Benchmarking methodology
Source: Meta. The provider has not supplied this information.

Public data summary
Source: Meta. The provider has not supplied this information.

Model Specifications
License: Llama 2
Last Updated: January 2026
Provider: Meta
Languages: 1 language