microsoft-dialogpt-small
Version: 12

A State-of-the-Art Large-Scale Pretrained Response Generation Model (DialoGPT)

DialoGPT is a state-of-the-art large-scale pretrained dialogue response generation model for multi-turn conversations.
Human evaluation results indicate that responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
  • Multi-turn generation examples from an interactive environment:
User: Does money buy happiness?
Bot: Depends how much money you spend on it.
User: What is the best way to buy happiness?
Bot: You just have to be a millionaire by your early 20s, then you can be happy.
User: This is so difficult!
Bot: You have no idea how hard it is to be a millionaire and happy. There is a reason the rich have a lot of money.
Information about preprocessing, training, and full details of DialoGPT can be found in the original DialoGPT repository and the arXiv paper: https://arxiv.org/abs/1911.00536

How to use

Now we are ready to try out how the model works as a chatting partner!
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Let's chat for 5 lines
for step in range(5):
	# encode the new user input, add the eos_token, and return a tensor in PyTorch
	new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

	# append the new user input tokens to the chat history
	bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

	# generate a response while limiting the total chat history to 1000 tokens
	chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

	# pretty print last output tokens from bot
	print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.
Metric              | Value
--------------------|------
Avg.                | 25.02
ARC (25-shot)       | 25.77
HellaSwag (10-shot) | 25.79
MMLU (5-shot)       | 25.81
TruthfulQA (0-shot) | 47.49
Winogrande (5-shot) | 50.28
GSM8K (5-shot)      |  0.0
DROP (3-shot)       |  0.0

microsoft/DialoGPT-small powered by Text Generation Inference (TGI)

Send Request

You can use cURL or any REST Client to send a request to the AzureML endpoint with your AzureML token.
curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json"

Supported Parameters

  • inputs (string): Input prompt.
  • parameters (object):
    • best_of (integer): Generate best_of sequences and return the one with the highest token logprobs.
    • decoder_input_details (boolean): Whether to return decoder input token logprobs and ids.
    • details (boolean): Whether to return generation details.
    • do_sample (boolean): Activate logits sampling.
    • frequency_penalty (float): The parameter for frequency penalty. 1.0 means no penalty. Penalizes new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim.
    • grammar (object): One of the following (a grammar-constrained example payload is shown below):
      • #1 (object):
        • type (enum): Possible values: json.
        • value (string): A string that represents a JSON Schema. JSON Schema is a declarative language for annotating JSON documents with types and descriptions.
      • #2 (object):
        • type (enum): Possible values: regex.
        • value (string): The regular expression.
      • #3 (object):
        • type (enum): Possible values: json_schema.
        • value (object):
          • name (string): Optional name identifier for the schema.
          • schema (object): The actual JSON schema definition.
    • max_new_tokens (integer): Maximum number of tokens to generate.
    • repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty. See the CTRL paper (https://arxiv.org/abs/1909.05858) for more details.
    • return_full_text (boolean): Whether to prepend the prompt to the generated text.
    • seed (integer): Random sampling seed.
    • stop (string[]): Stop generating tokens if a member of stop is generated.
    • temperature (float): The value used to modulate the logits distribution.
    • top_k (integer): The number of highest probability vocabulary tokens to keep for top-k-filtering.
    • top_n_tokens (integer): The number of highest probability vocabulary tokens to keep for top-n-filtering.
    • top_p (float): Top-p value for nucleus sampling.
    • truncate (integer): Truncate inputs tokens to the given size.
    • typical_p (float): Typical decoding mass. See Typical Decoding for Natural Language Generation (https://arxiv.org/abs/2202.00666) for more information.
    • watermark (boolean): Watermarking with A Watermark for Large Language Models (https://arxiv.org/abs/2301.10226).
    • stream (boolean): Whether to stream the output tokens or not. Defaults to false.

Example payload

{
  "inputs": "What is Deep Learning?",
  "parameters": {
    "do_sample": true,
    "top_p": 0.95,
    "temperature": 0.2,
    "top_k": 50,
    "max_new_tokens": 256,
    "repetition_penalty": 1.03,
    "stop": ["\nUser:", "<|endoftext|>", "</s>"]
  }
}
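
The grammar parameter listed above can constrain the output format. The following is a hypothetical payload using a regex grammar; whether grammar support is enabled depends on the TGI deployment, and the pattern here is purely illustrative:

{
  "inputs": "What is the capital of France? Answer with one word:",
  "parameters": {
    "max_new_tokens": 10,
    "grammar": {
      "type": "regex",
      "value": "[A-Z][a-z]+"
    }
  }
}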

OpenAI Chat Completion API compatibility

Additionally, Text Generation Inference (TGI) offers an OpenAI Chat Completion API-compatible layer under the endpoint /v1/chat/completions;
see the full specification in the OpenAI Chat Completion Create documentation.
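
Below is a minimal sketch against that layer using the official openai Python client, assuming the deployment exposes the /v1 route and accepts the same bearer token; endpoint URL and token are placeholders as above, and the model field is informational for a single-model TGI deployment:

from openai import OpenAI

# Assumption: the TGI endpoint exposes /v1/chat/completions and accepts the AzureML token.
client = OpenAI(
    base_url="<AZUREML_ENDPOINT_URL>/v1",
    api_key="<AZUREML_TOKEN>",
)

completion = client.chat.completions.create(
    model="microsoft/DialoGPT-small",
    messages=[{"role": "user", "content": "What is Deep Learning?"}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
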
Model Specifications

License: MIT
Last Updated: July 2025
Provider: HuggingFace