microsoft-phi-2
Version: 3
microsoft/phi-2 powered by Text Generation Inference (TGI)

Send Request

You can use cURL or any REST client to send a request to the AzureML endpoint with your AzureML token.
curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json"

Supported Parameters

  • inputs (string): Input prompt.
  • parameters (object):
    • best_of (integer): Generate best_of sequences and return the one with the highest token logprobs.
    • decoder_input_details (boolean): Whether to return decoder input token logprobs and ids.
    • details (boolean): Whether to return generation details.
    • do_sample (boolean): Activate logits sampling.
    • frequency_penalty (float): The parameter for frequency penalty. 1.0 means no penalty. Penalizes new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
    • grammar (object): One of the following (see the grammar example after this list):
      • #1 (object):
        • type (enum): Possible values: json.
        • value (string): A string that represents a JSON Schema. JSON Schema is a declarative language that allows you to annotate JSON documents with types and descriptions.
      • #2 (object):
        • type (enum): Possible values: regex.
        • value (string): The regular expression.
      • #3 (object):
        • type (enum): Possible values: json_schema.
        • value (object):
          • name (string): Optional name identifier for the schema.
          • schema (object): The actual JSON schema definition.
    • max_new_tokens (integer): Maximum number of tokens to generate.
    • repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty. See CTRL: A Conditional Transformer Language Model for Controllable Generation (Keskar et al., 2019) for more details.
    • return_full_text (boolean): Whether to prepend the prompt to the generated text.
    • seed (integer): Random sampling seed.
    • stop (string[]): Stop generating tokens if a member of stop is generated.
    • temperature (float): The value used to modulate the logits distribution.
    • top_k (integer): The number of highest probability vocabulary tokens to keep for top-k-filtering.
    • top_n_tokens (integer): The number of highest probability vocabulary tokens to keep for top-n-filtering.
    • top_p (float): Top-p value for nucleus sampling.
    • truncate (integer): Truncate input tokens to the given size.
    • typical_p (float): Typical Decoding mass. See Typical Decoding for Natural Language Generation for more information.
    • watermark (boolean): Whether to apply watermarking, as described in A Watermark for Large Language Models.
    • stream (boolean): Whether to stream the output tokens or not. Defaults to false (see the streaming example after the example payload).
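
For example, the grammar parameter can constrain the output to a JSON document matching a given schema. The payload below is a minimal sketch using the json_schema grammar type; the prompt and the schema are illustrative, not part of the endpoint specification.

{
  "inputs": "Extract the name and age from this sentence: David is 25 years old.",
  "parameters": {
    "max_new_tokens": 128,
    "grammar": {
      "type": "json_schema",
      "value": {
        "name": "person",
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "age": { "type": "integer" }
          },
          "required": ["name", "age"]
        }
      }
    }
  }
}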

Example payload

{
  "inputs": "What is Deep Learning?",
  "parameters": {
    "do_sample": true,
    "top_p": 0.95,
    "temperature": 0.2,
    "top_k": 50,
    "max_new_tokens": 256,
    "repetition_penalty": 1.03,
    "stop": ["\nUser:", "<|endoftext|>", "</s>"]
  }
}
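
Setting stream to true switches the response to server-sent events, so tokens arrive as they are generated instead of in a single JSON body. A minimal sketch with cURL, placing the flag alongside the other parameters as in the list above:

curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":64,"stream":true}}' \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json"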

OpenAI Chat Completion API compatibility

Additionally, Text Generation Inference (TGI) offers an OpenAI Chat Completion API compatible layer under the endpoint /v1/chat/completions; check the full specification in the OpenAI Chat Completion Create documentation.
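
As a sketch, assuming the route is appended to the same AzureML endpoint URL and authenticated the same way, a request against the compatibility layer looks like the following (the "tgi" model name follows TGI's own examples):

curl <AZUREML_ENDPOINT_URL>/v1/chat/completions \
    -X POST \
    -d '{"model":"tgi","messages":[{"role":"user","content":"What is Deep Learning?"}],"max_tokens":256,"stream":false}' \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json"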
Model Specifications

License: MIT
Last Updated: May 2025
Publisher: HuggingFace