jinaai-jina-embeddings-v2-small-en
Version: 1
jinaai/jina-embeddings-v2-small-en
powered by Text Embeddings Inference (TEI)
Send Request
You can use cURL or any REST client to send a request to the AzureML endpoint with your AzureML token.

curl <AZUREML_ENDPOINT_URL> \
-X POST \
-d '{"inputs":"What is Deep Learning?"}' \
-H "Authorization: Bearer <AZUREML_TOKEN>" \
-H "Content-Type: application/json"
Supported Parameters
You can pass additional parameters to control how the embeddings are generated. The following parameters are currently supported:
- normalize: Whether to normalize the returned vectors to have length 1. Defaults to false.
- prompt_name: The name of the prompt to use for encoding. Must be a key in the prompts dictionary, which is either set in the constructor or loaded from the model configuration. For example, if prompt_name is "query" and the prompts dictionary is {"query": "query: ", ...}, then the sentence "What is the capital of France?" is encoded as "query: What is the capital of France?", because the sentence is appended to the prompt (see the request sketch after this list). If prompt is also set, this argument is ignored. Defaults to null.
- truncate: Whether to truncate inputs that are longer than the maximum supported size. Defaults to false.
- truncation_direction: Can be either "left" or "right"; defaults to "right". Truncating from the "right" removes tokens from the end of the sequence until the maximum supported size is reached, while truncating from the "left" removes them from the beginning of the sequence.
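As a sketch of how prompt_name behaves, assume the model configuration defines a prompt named "query" that maps to the prefix "query: " (a hypothetical entry used only for illustration). The request below would then encode the text "query: What is the capital of France?" before embedding it:

curl <AZUREML_ENDPOINT_URL> \
-X POST \
-d '{"inputs":"What is the capital of France?","prompt_name":"query"}' \
-H "Authorization: Bearer <AZUREML_TOKEN>" \
-H "Content-Type: application/json"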
Example payload
{
"inputs": "What is Deep Learning?",
"normalize": false,
"prompt_name": null,
"truncate": false,
"truncation_direction": "right"
}
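The payload above can be sent as-is with cURL; the endpoint URL and token are placeholders to replace with your own deployment values:

curl <AZUREML_ENDPOINT_URL> \
-X POST \
-d '{"inputs":"What is Deep Learning?","normalize":false,"prompt_name":null,"truncate":false,"truncation_direction":"right"}' \
-H "Authorization: Bearer <AZUREML_TOKEN>" \
-H "Content-Type: application/json"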
Model Specifications
License: Apache-2.0
Last Updated: March 2025
Publisher: HuggingFace