Meta
Open-source models like Llama 2, built for versatile language tasks and research applications.
Total Models: 48
Llama-4-Scout-17B-16E-Instruct

Llama 4 Scout 17B 16E Instruct is great at multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases.

chat-completion
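All of the chat-completion models in this catalog are typically called with the same request shape: a model ID plus a list of role-tagged messages. As a minimal sketch, the helper below builds such a payload with plain Python; the parameter values and the specific endpoint/auth details are illustrative assumptions, not part of the catalog.

```python
import json

# Sketch of an OpenAI-compatible chat-completion request body.
# The model ID matches the catalog entry; the serving endpoint and
# auth headers are deployment-specific and deliberately omitted.
def build_chat_request(model: str, user_message: str) -> dict:
    """Build a minimal chat-completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 256,      # illustrative defaults
        "temperature": 0.7,
    }

payload = build_chat_request(
    "Llama-4-Scout-17B-16E-Instruct",
    "Summarize these three documents in one paragraph.",
)
print(json.dumps(payload, indent=2))
```

The same payload shape works for any of the chat-completion entries below by swapping the model ID.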
Llama-4-Maverick-17B-128E-Instruct-FP8

Llama 4 Maverick 17B 128E Instruct FP8 is great at precise image understanding and creative writing, offering high quality at a lower price than Llama 3.3 70B.

chat-completion
Llama-4-Scout-17B-16E

Llama 4 Scout 17B 16E is great at multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast codebases.

chat-completion
Llama-3.3-70B-Instruct

Llama 3.3 70B Instruct offers enhanced reasoning, math, and instruction following with performance comparable to Llama 3.1 405B.

chat-completion
Meta-Llama-3.1-405B-Instruct

The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

chat-completion
Llama-3.2-90B-Vision-Instruct

Advanced image reasoning capabilities for visual understanding in agentic apps.

chat-completion
CodeLlama-7b-Python-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 7B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
Llama-2-7b-chat

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

chat-completion
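When calling a Llama-2-chat model as raw text-in/text-out rather than through a chat-completion endpoint, the conversation must be wrapped in the model's instruction markup. A minimal single-turn sketch of that format, based on Meta's published template (the helper name is ours):

```python
# Single-turn prompt markup for the Llama-2-*-chat models.
# Chat-completion endpoints apply this template for you; it is only
# needed when sending raw text to the model.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a system prompt and one user turn in Llama 2 chat markup."""
    return f"{B_INST} {B_SYS}{system}{E_SYS}{user} {E_INST}"

prompt = llama2_chat_prompt("You are a concise assistant.", "What is Llama 2?")
print(prompt)
```

Multi-turn conversations repeat the `[INST] … [/INST]` blocks with the model's replies in between; only the first turn carries the system block.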
Llama-2-70b-chat

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

chat-completion
Llama-2-7b

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

text-generation
facebook-sam-vit-huge

The Segment Anything Model (SAM) produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 billion masks.

image-segmentation
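For a single point or box prompt, SAM proposes several candidate masks, each with a predicted quality (IoU) score, and a common post-processing step is simply to keep the highest-scoring candidate. A minimal sketch of that selection with plain lists (the helper name is ours; real pipelines operate on tensors):

```python
# SAM emits multiple candidate masks per prompt plus a predicted
# quality score for each. This helper keeps the best-scoring one.
def best_mask(masks: list, scores: list):
    """Return (mask, score) for the highest-scoring candidate."""
    if len(masks) != len(scores) or not masks:
        raise ValueError("need exactly one score per mask")
    i = max(range(len(scores)), key=scores.__getitem__)
    return masks[i], scores[i]

# Three toy 2x2 binary masks with their (hypothetical) quality scores.
masks = [[[0, 1], [1, 1]], [[0, 0], [0, 1]], [[1, 1], [1, 1]]]
scores = [0.71, 0.88, 0.64]
mask, score = best_mask(masks, scores)
print(score)  # 0.88
```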
CodeLlama-7b-Instruct-hf

Use of this model is governed by the Meta license. Code Llama is a family of large language models (LLMs): a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters.

text-generation
CodeLlama-7b-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 7B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
Llama-2-70b

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

text-generation
Prompt-Guard-86M

LLM-powered applications are susceptible to prompt attacks: prompts intentionally designed to subvert the developer's intended behavior of the LLM. Categories of prompt attacks include prompt injection and jailbreaking.

text-classification
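Prompt Guard is a small classifier: for each input it yields per-label scores, which an application then thresholds before passing the prompt to the main model. A minimal gating sketch; the label names and threshold below are illustrative assumptions, not the model's documented output schema:

```python
# Gate an incoming prompt on classifier scores. The attack label names
# ("JAILBREAK", "INJECTION") and the 0.5 threshold are assumptions for
# illustration; check the deployed classifier's actual label set.
def is_safe(scores: dict, threshold: float = 0.5) -> bool:
    """Reject the prompt if any attack label meets the threshold."""
    attack_labels = ("JAILBREAK", "INJECTION")
    return all(scores.get(label, 0.0) < threshold for label in attack_labels)

print(is_safe({"BENIGN": 0.97, "JAILBREAK": 0.02, "INJECTION": 0.01}))  # True
print(is_safe({"BENIGN": 0.10, "JAILBREAK": 0.88, "INJECTION": 0.02}))  # False
```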
Meta-Llama-3.1-8B-Instruct

The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

chat-completion
Llama-3.2-11B-Vision-Instruct

Excels in image reasoning capabilities on high-res images for visual understanding apps.

chat-completion
facebook-sam-vit-large

The Segment Anything Model (SAM) produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 billion masks.

image-segmentation
facebook-deit-base-patch16-224

DeiT (Data-efficient image Transformers) is an image transformer that does not require very large amounts of data for training. This is achieved through a novel distillation procedure using a teacher-student strategy, which results in high throughput and accuracy. DeiT is pretrained and fine-tuned on ImageNet.

image-classification
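The "patch16-224" suffix in the model name encodes the input geometry: a 224x224 image is cut into non-overlapping 16x16 patches, each linearly embedded into one token, and DeiT additionally prepends a class token and a distillation token. The arithmetic can be checked directly:

```python
# Token count seen by the DeiT encoder for a given input geometry.
def deit_sequence_length(image_size: int = 224, patch_size: int = 16) -> int:
    """Number of tokens: image patches plus [CLS] and distillation tokens."""
    patches_per_side = image_size // patch_size   # 224 // 16 = 14
    num_patches = patches_per_side ** 2           # 14 * 14 = 196
    return num_patches + 2                        # + class token + distillation token

print(deit_sequence_length())  # 198
```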
Meta-Llama-3.1-70B

The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases.

text-generation
Llama-3.2-1B-Instruct

The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases.

chat-completion
Meta-Llama-3.1-8B

The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases.

text-generation
Meta-Llama-3-70B

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

text-generation
Meta-Llama-3-8B-Instruct

A versatile 8-billion parameter model optimized for dialogue and text generation tasks.

chat-completion
Llama-3.2-3B-Instruct

The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases.

chat-completion
CodeLlama-70b-Instruct-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The CodeLlama-70b-Instruct model is designed for general code synthesis and understanding. Limitations and biases: Code Llama and its variants are a new technology that carries risks with use.

text-generation
Llama-Guard-3-1B

Built with Llama. Llama Guard 3-1B is a fine-tuned Llama-3.2-1B pretrained model for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification).

chat-completion
Llama-Guard-3-8B

Llama Guard 3 is a Llama-3.1-8B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM: it generates text indicating whether a given prompt or response is safe or unsafe.

chat-completion
Meta-Llama-3.1-70B-Instruct

The Llama 3.1 instruction tuned text only models are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.

chat-completion
CodeLlama-13b-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
CodeLlama-34b-Python-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 34B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
facebook-dinov2-base-imagenet1k-1-layer

Vision Transformer (base-sized model) trained using the DINOv2 method, introduced in the paper "DINOv2: Learning Robust Visual Features without Supervision" by Oquab et al. (https://arxiv.org/abs/2304.07193).

image-classification
CodeLlama-13b-Python-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the 13B Python specialist version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
Meta-Llama-3-70B-Instruct

A powerful 70-billion parameter model excelling in reasoning, coding, and broad language applications.

chat-completion
Llama-Guard-3-11B-Vision

Built with Llama. Llama Guard 3 Vision is a Llama-3.2-11B pretrained model, fine-tuned for content safety classification. Similar to previous versions, it can be used to safeguard content for both LLM inputs (prompt classification) and LLM responses (response classification).

chat-completion
CodeLlama-70b-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The CodeLlama-70b model is designed for general code synthesis and understanding. Ethical considerations and limitations: Code Llama and its variants are a new technology that carries risks with use.

text-generation
CodeLlama-70b-Python-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The CodeLlama-70b-Python model is designed for general code synthesis and understanding. Limitations and biases: Code Llama and its variants are a new technology that carries risks with use.

text-generation
Llama-3.2-3B

The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases.

text-generation
Facebook-DinoV2-Image-Embeddings-ViT-Giant

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion with the DinoV2 method. Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is also added to the beginning of the sequence, for use in classification tasks.

embeddings
Facebook-DinoV2-Image-Embeddings-ViT-Base

The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion with the DinoV2 method. Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is also added to the beginning of the sequence, for use in classification tasks.

embeddings
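Image embeddings from models like DinoV2 are typically compared with cosine similarity: two crops of the same object should score near 1.0, unrelated images near 0. A self-contained sketch on toy vectors (real embeddings are high-dimensional float arrays):

```python
import math

# Cosine similarity between two equal-length embedding vectors.
def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```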
Llama-2-13b-chat

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

chat-completion
Llama-3.2-1B

The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned, text-only models are optimized for multilingual dialogue use cases.

text-generation
CodeLlama-13b-Instruct-hf

Use of this model is governed by the Meta license. Code Llama is a family of large language models (LLMs): a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters.

text-generation
facebook-sam-vit-base

The Segment Anything Model (SAM) produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a dataset of 11 million images and 1.1 billion masks.

image-segmentation
Llama-2-13b

Meta has developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases.

text-generation
CodeLlama-34b-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 34B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding.

text-generation
CodeLlama-34b-Instruct-hf

Use of this model is governed by the Meta license. Code Llama is a family of large language models (LLMs): a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters.

text-generation
Meta-Llama-3-8B

Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction-tuned generative text models in 8B and 70B sizes. The Llama 3 instruction-tuned models are optimized for dialogue use cases and outperform many of the available open-source chat models on common industry benchmarks.

text-generation