facebook-sam-vit-large
Version: 6
The Segment Anything Model (SAM) produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It was trained on a dataset of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
The SAM model is made up of four modules:
- The `VisionEncoder`: a ViT-based image encoder. It computes the image embeddings using attention on patches of the image. Relative positional embedding is used.
- The `PromptEncoder`: generates embeddings for points and bounding boxes.
- The `MaskDecoder`: a two-way transformer which performs cross-attention between the image embedding and the point embeddings, and between the point embeddings and the image embeddings. Its outputs are fed to the `Neck`.
- The `Neck`: predicts the output masks based on the contextualized masks produced by the `MaskDecoder`.
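These modules can be exercised end to end through the Hugging Face `transformers` SAM API. The snippet below is a minimal sketch, assuming `transformers`, `torch`, and `Pillow` are installed; the image file name and the box coordinates (taken from the sample request below) are placeholders to adapt to your own image.

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

# Load the checkpoint; the processor handles image resizing and prompt preprocessing.
model = SamModel.from_pretrained("facebook/sam-vit-large")
processor = SamProcessor.from_pretrained("facebook/sam-vit-large")

# Hypothetical local image; any RGB image works.
image = Image.open("sample_image.jpg").convert("RGB")

# One bounding-box prompt in (x_min, y_min, x_max, y_max) pixel coordinates,
# mirroring the sample request payload below; keep the box inside your image.
input_boxes = [[[650, 900, 1000, 1250]]]

inputs = processor(image, input_boxes=input_boxes, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, multimask_output=False)

# Upscale the low-resolution mask logits back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(),
    inputs["original_sizes"].cpu(),
    inputs["reshaped_input_sizes"].cpu(),
)
print(masks[0].shape, outputs.iou_scores)  # binary masks and their IoU scores
```

Setting `multimask_output=True` instead returns several candidate masks per prompt together with their predicted IoU scores, which corresponds to the `multimask_output` column in the request payload below.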
Training Details
Training Data
See here for an overview of the dataset.
License
apache-2.0
Inference Samples
| Inference type | Python sample (Notebook) | CLI with YAML |
| --- | --- | --- |
| Real time | mask-generation-online-endpoint.ipynb | mask-generation-online-endpoint.sh |
| Batch | mask-generation-batch-endpoint.ipynb | mask-generation-batch-endpoint.sh |
Sample input and output
Sample input
{
"input_data": {
"columns": [
"image",
"input_points",
"input_boxes",
"input_labels",
"multimask_output"
],
"index": [0],
"data": [["image1", "", "[[650, 900, 1000, 1250]]", "", false]]
},
"params": {}
}
Note: "image1" string should be in base64 format or publicly accessible urls.
Sample output
[
  {
    "predictions": [
      {
        "mask_per_prediction": [
          {
            "encoded_binary_mask": "encoded_binary_mask1",
            "iou_score": 0.85
          }
        ]
      }
    ]
  }
]
Note: "encoded_binary_mask1" string is in base64 format.
Visualization of inference result for a sample image
[Image: plot_facebook-sam-vit-large.png]
Model Specifications
License: Apache-2.0
Last Updated: April 2025
Publisher: Meta