qwen-qwen3-embedding-0.6b
Version: 2
Hugging Face · Last updated December 2025
Qwen/Qwen3-Embedding-0.6B powered by Text Embeddings Inference

OpenAI Embeddings API

Send Request

You can use cURL or any REST Client to send a request to the Azure ML endpoint with your Azure ML token.
curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"model":"Qwen/Qwen3-Embedding-0.6B","input":"The food was delicious and the waiter..."}'
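As a sketch, the same request can also be assembled from Python. The helper below only builds the headers and JSON body; `<AZUREML_ENDPOINT_URL>` and `<AZUREML_TOKEN>` remain placeholders you must fill in before sending anything:

```python
import json

# Build the headers and body for the OpenAI-compatible Embeddings API.
# <AZUREML_TOKEN> is a placeholder for your Azure ML token.
def build_embeddings_request(token: str, text: str) -> tuple[dict, str]:
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "Qwen/Qwen3-Embedding-0.6B",
        "input": text,
    })
    return headers, body

headers, body = build_embeddings_request(
    "<AZUREML_TOKEN>", "The food was delicious and the waiter..."
)
print(json.loads(body)["model"])  # Qwen/Qwen3-Embedding-0.6B
```

The resulting headers and body can then be sent with any HTTP client, for example `requests.post("<AZUREML_ENDPOINT_URL>", headers=headers, data=body)`.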

Supported Parameters

  • input (string): Text to create the embeddings for.
  • model (string): ID of the model to use.
  • dimensions (integer, optional): The number of dimensions the resulting output embeddings should have.
  • encoding_format (string, optional): The format to return the embeddings in. Can be either float or base64.
For more information, please check the OpenAI Embeddings API Reference.
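When encoding_format is set to base64, each embedding arrives as a base64 string rather than a JSON array of floats. The sketch below assumes the OpenAI convention of packing the vector as little-endian float32 values, and round-trips a synthetic vector to demonstrate the decoding:

```python
import base64
import struct

# Decode a base64-encoded embedding into a list of floats, assuming
# little-endian float32 packing (the OpenAI API convention).
def decode_base64_embedding(b64: str) -> list[float]:
    raw = base64.b64decode(b64)
    return list(struct.unpack(f"<{len(raw) // 4}f", raw))

# Round-trip check with a synthetic vector (not a real model output):
vec = [0.5, -1.25, 3.0]
encoded = base64.b64encode(struct.pack(f"<{len(vec)}f", *vec)).decode()
print(decode_base64_embedding(encoded))  # [0.5, -1.25, 3.0]
```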

Text Embeddings Inference API

Generate Embeddings

Additionally, Text Embeddings Inference (TEI) exposes its own API specification for generating embeddings, served under the /embed endpoint.

Send Request

curl <AZUREML_ENDPOINT_URL>/embed \
    -X POST \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"inputs":"What is Deep Learning?"}'

Supported Parameters

  • inputs (string): The input sentence to create the embeddings for.
  • dimensions (int, optional): Number of dimensions that the output embeddings should have. If not set, the original shape of the representation will be returned instead. Defaults to null.
  • normalize (bool, optional): Whether to L2-normalize the returned vectors to unit length. Defaults to true.
  • prompt_name (string, optional): The name of the prompt that should be used for encoding. If not set, no prompt will be applied. Must be a key in the sentence-transformers configuration prompts dictionary, which is either set in the constructor or loaded from the model configuration. For example, if prompt_name is "query" and the prompts dictionary is {"query": "query: ", ...}, then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?" because the sentence is appended to the prompt. Defaults to null.
  • truncate (bool, optional): Whether to truncate inputs that are longer than the maximum sequence length supported by the model. Defaults to false.
  • truncation_direction ('Left' or 'Right', optional): Can either be "Left" or "Right". Truncating to the "Right" means that tokens are removed from the end of the sequence until the maximum supported size is matched, whilst truncating to the "Left" means from the beginning of the sequence. Defaults to "Right".
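The two truncation directions can be illustrated with a toy token list. This is only an illustrative sketch of the behaviour described above, not TEI's actual implementation:

```python
# Toy illustration of truncation_direction with max_len=4.
# "Right" drops tokens from the end of the sequence;
# "Left" drops them from the beginning.
def truncate(tokens: list, max_len: int, direction: str = "Right") -> list:
    if len(tokens) <= max_len:
        return tokens
    return tokens[:max_len] if direction == "Right" else tokens[-max_len:]

print(truncate(["a", "b", "c", "d", "e"], 4))          # ['a', 'b', 'c', 'd']
print(truncate(["a", "b", "c", "d", "e"], 4, "Left"))  # ['b', 'c', 'd', 'e']
```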
For more information, please check the Text Embeddings Inference OpenAPI Specification for the /embed endpoint.
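Because /embed returns L2-normalized vectors by default, the dot product of two embeddings already equals their cosine similarity. The helper below keeps the normalization terms explicit so it also works for unnormalized vectors (a generic sketch, not part of the TEI API):

```python
import math

# Cosine similarity between two embedding vectors of equal length.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
```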

Sentence Similarity

Additionally, Text Embeddings Inference (TEI) provides an endpoint to compute the embeddings and their similarities under the endpoint /similarity.

Send Request

curl <AZUREML_ENDPOINT_URL>/similarity \
    -X POST \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"inputs":{"sentences":["What is deep learning?","How many cats do you have?"],"source_sentence":"Do you like Deep Learning?"}}'
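The nested inputs object is easy to get wrong by hand, so as a sketch, the payload from the request above can be built programmatically:

```python
import json

# Build the /similarity request body: one source sentence compared
# against a list of candidate sentences; the endpoint returns one
# score per candidate.
def build_similarity_payload(source: str, sentences: list[str]) -> str:
    return json.dumps({
        "inputs": {
            "sentences": sentences,
            "source_sentence": source,
        }
    })

payload = build_similarity_payload(
    "Do you like Deep Learning?",
    ["What is deep learning?", "How many cats do you have?"],
)
print(json.loads(payload)["inputs"]["source_sentence"])  # Do you like Deep Learning?
```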

Supported Parameters

  • inputs (object):
    • sentences (array): List of strings which will be compared against the source_sentence.
    • source_sentence (string): String to compare sentences with. It can be a phrase, sentence, or longer passage, depending on the model being used.
  • parameters (object):
    • truncate (bool, optional): Whether to truncate inputs that are longer than the maximum sequence length supported by the model. Defaults to false.
    • truncation_direction ('Left' or 'Right', optional): Can either be "Left" or "Right". Truncating to the "Right" means that tokens are removed from the end of the sequence until the maximum supported size is matched, whilst truncating to the "Left" means from the beginning of the sequence. Defaults to "Right".
    • prompt_name (string, optional): The name of the prompt that should be used for encoding. If not set, no prompt will be applied. Must be a key in the sentence-transformers configuration prompts dictionary, which is either set in the constructor or loaded from the model configuration. For example, if prompt_name is "query" and the prompts dictionary is {"query": "query: ", ...}, then the sentence "What is the capital of France?" will be encoded as "query: What is the capital of France?" because the sentence is appended to the prompt. Defaults to null.
For more information, please check the Text Embeddings Inference OpenAPI Specification for the /similarity endpoint.
Model Specifications

  • License: Apache-2.0
  • Last Updated: December 2025
  • Provider: Hugging Face