somosnlp-hackathon-2022-paraphrase-spanish-distilroberta
Version: 1
somosnlp-hackathon-2022/paraphrase-spanish-distilroberta
powered by Text Embeddings Inference (TEI)
Send Request
You can use cURL or any REST client to send a request to the AzureML endpoint with your AzureML token:

curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json"
Supported Parameters
- inputs (string): The input sentence to create the embeddings for.
- normalize (boolean): Whether to normalize the returned vectors to length 1. Defaults to false.
- prompt_name (string): The name of the prompt to use for encoding. Must be a key in the prompts dictionary, which is either set in the constructor or loaded from the model configuration. For example, if prompt_name is "query" and the prompts dictionary is {"query": "query: ", ...}, then the sentence "What is the capital of France?" is encoded as "query: What is the capital of France?", because the sentence is appended to the prompt. If prompt is also set, this argument is ignored. Defaults to null.
- truncate (boolean): Whether to truncate inputs that are longer than the maximum supported size. Defaults to false.
- truncation_direction (string): Either "left" or "right"; defaults to "right". Truncating to the "right" removes tokens from the end of the sequence until the maximum supported size is reached, while truncating to the "left" removes them from the beginning.
Example payload
{
  "inputs": "What is Deep Learning?",
  "normalize": false,
  "prompt_name": null,
  "truncate": false,
  "truncation_direction": "right"
}
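Since this model produces sentence embeddings for Spanish paraphrase detection, a typical next step is to compare two returned vectors with cosine similarity. A minimal sketch in pure Python, with no external dependencies:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

If the request sets "normalize": true, the returned vectors already have unit length, so the dot product alone gives the similarity score.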
OpenAI Embeddings API compatibility
Additionally, Text Embeddings Inference (TEI) offers an OpenAI Chat Completion API compatible layer under the endpoint/v1/embeddings
,check the full specification in the OpenAI Embeddings Create Documentation .
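The OpenAI-compatible route uses the OpenAI request shape, where the text goes in an "input" field (singular) alongside a "model" field, rather than TEI's native "inputs". A minimal sketch of building that body (`build_openai_payload` is a hypothetical helper name):

```python
import json


def build_openai_payload(text: str, model: str) -> bytes:
    # OpenAI Embeddings API body uses "input" + "model",
    # unlike TEI's native {"inputs": ...} shape.
    return json.dumps({"input": text, "model": model}).encode("utf-8")


# POST this body to <AZUREML_ENDPOINT_URL>/v1/embeddings with the same
# Authorization and Content-Type headers as the cURL example above.
```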
Model Specifications
License: Unknown
Last Updated: May 2025
Publisher: HuggingFace