qwen-qwen3-vl-235b-a22b-instruct-fp8
Version: 2
Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 powered by vLLM
Chat Completions API
Send Request
You can use cURL or any REST client to send a request to the Azure ML endpoint with your Azure ML token:

curl <AZUREML_ENDPOINT_URL> \
  -X POST \
  -H "Authorization: Bearer <AZUREML_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"model":"Qwen/Qwen3-VL-235B-A22B-Instruct-FP8","messages":[{"role":"user","content":"What is Deep Learning?"}]}'
Supported Parameters
The following are the only mandatory parameters to send in the HTTP POST request to v1/chat/completions.
- model (string): The model ID used to generate the response. Since only a single model is deployed on this endpoint, you can either set it to Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 or leave it blank.
- messages (array): A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, images, and audio.
The full list of supported parameters is exposed at /openapi.json on the current Azure ML endpoint.
Example payload
{
  "model": "Qwen/Qwen3-VL-235B-A22B-Instruct-FP8",
  "messages": [
    {"role": "user", "content": "What is Deep Learning?"}
  ],
  "max_completion_tokens": 256,
  "temperature": 0.6
}
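
Since vLLM serves an OpenAI-compatible Chat Completions API, the payload above can also be sent with the openai Python client. This is a sketch, not a confirmed setup: the /v1 route prefix is an assumption, so verify the exact path against the endpoint's /openapi.json.

# Sketch using the OpenAI Python client against the vLLM-backed endpoint.
# The "/v1" prefix is an assumption; check /openapi.json for the actual routes.
from openai import OpenAI

client = OpenAI(
    base_url="<AZUREML_ENDPOINT_URL>/v1",
    api_key="<AZUREML_TOKEN>",
)

completion = client.chat.completions.create(
    model="Qwen/Qwen3-VL-235B-A22B-Instruct-FP8",
    messages=[{"role": "user", "content": "What is Deep Learning?"}],
    max_completion_tokens=256,
    temperature=0.6,
)
print(completion.choices[0].message.content)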
Vision Support
This model supports vision capabilities. You can send images in your chat completion requests:

curl <AZUREML_ENDPOINT_URL> \
  -X POST \
  -H "Authorization: Bearer <AZUREML_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"model":"Qwen/Qwen3-VL-235B-A22B-Instruct-FP8","messages":[{"role":"user","content":[{"type":"text","text":"What do you see in this image?"},{"type":"image_url","image_url":{"url":"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png"}}]}],"temperature":0.7,"top_p":0.95,"max_tokens":128,"stream":false}'
Model Specifications
License: Apache-2.0
Last Updated: October 2025
Provider: Hugging Face