facebook-dinov2-base-imagenet1k-1-layer
Version: 3
Vision Transformer (base-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper DINOv2: Learning Robust Visual Features without Supervision by Oquab et al. and first released in this repository. The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.

Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.

Note that, unlike the backbone-only DINOv2 checkpoints, this variant includes a linear classification head (the "1-layer" in the model name) fine-tuned on ImageNet-1k.

Through pre-training, the model learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images, for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places the linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of the entire image.

For more details on DINOv2, see the original paper and the model's GitHub repo.

The model takes an image as input and returns a class token and patch tokens, and optionally 4 register tokens. The embedding dimension is 768 for this base-sized model.
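As a quick illustration of using the fine-tuned head, here is a minimal classification sketch with the Hugging Face transformers library. It assumes the checkpoint is available on the Hub as facebook/dinov2-base-imagenet1k-1-layer and that transformers, torch, requests, and Pillow are installed; the sample image URL is just a placeholder.

```python
# Minimal sketch: classify one image with the fine-tuned linear head.
# Assumes the facebook/dinov2-base-imagenet1k-1-layer checkpoint on the
# Hugging Face Hub; the COCO image URL below is only a placeholder.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base-imagenet1k-1-layer")
model = AutoModelForImageClassification.from_pretrained("facebook/dinov2-base-imagenet1k-1-layer")

inputs = processor(images=image, return_tensors="pt")  # resize/normalize to 224x224
with torch.no_grad():
    logits = model(**inputs).logits  # one logit per ImageNet-1k class

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```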
Training Details
Training Data
The DINOv2 model is pre-trained and fine-tuned on ImageNet 2012, consisting of 1 million images and 1,000 classes, at a resolution of 224x224.
License
apache-2.0
Inference Samples
| Inference type | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- |
| Real time | image-classification-online-endpoint.ipynb | image-classification-online-endpoint.sh |
| Batch | image-classification-batch-endpoint.ipynb | image-classification-batch-endpoint.sh |
Finetuning Samples
| Task | Use case | Dataset | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- | --- | --- |
| Image Multi-class classification | Image Multi-class classification | fridgeObjects | fridgeobjects-multiclass-classification.ipynb | fridgeobjects-multiclass-classification.sh |
| Image Multi-label classification | Image Multi-label classification | multilabel fridgeObjects | fridgeobjects-multilabel-classification.ipynb | fridgeobjects-multilabel-classification.sh |
Evaluation Samples
| Task | Use case | Dataset | Python sample (Notebook) |
| --- | --- | --- | --- |
| Image Multi-class classification | Image Multi-class classification | fridgeObjects | image-multiclass-classification.ipynb |
| Image Multi-label classification | Image Multi-label classification | multilabel fridgeObjects | image-multilabel-classification.ipynb |
Sample input and output
Sample input
```json
{
    "input_data": ["image1", "image2"]
}
```
Note: "image1" and "image2" string should be in base64 format or publicly accessible urls.
Sample output
```json
[
    {
        "probs": [0.91, 0.09],
        "labels": ["can", "carton"]
    },
    {
        "probs": [0.1, 0.9],
        "labels": ["can", "carton"]
    }
]
```
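A small post-processing sketch for picking the top label per image from a response of this shape (the hard-coded response below just mirrors the sample output; in practice it would come from json.loads on the endpoint reply):

```python
# `response` mirrors the sample output above; in practice it would be
# json.loads() of the endpoint's reply.
response = [
    {"probs": [0.91, 0.09], "labels": ["can", "carton"]},
    {"probs": [0.1, 0.9], "labels": ["can", "carton"]},
]

for i, pred in enumerate(response, start=1):
    # Pair each probability with its label and keep the highest-probability pair.
    prob, label = max(zip(pred["probs"], pred["labels"]))
    print(f"image{i}: {label} (p={prob:.2f})")
```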
Visualization of inference result for a sample image
Model Specifications
License: Apache-2.0
Last Updated: April 2025
Publisher: Meta