microsoft-dialogpt-large
Version: 9
HuggingFace · Last updated July 2025

A State-of-the-Art Large-Scale Pretrained Response Generation Model (DialoGPT)

DialoGPT is a state-of-the-art (SOTA) large-scale pretrained dialogue response generation model for multi-turn conversations.
Human evaluation results indicate that responses generated by DialoGPT are comparable in quality to human responses under a single-turn conversation Turing test.
The model was trained on 147M multi-turn dialogues from Reddit discussion threads.
  • Multi-turn generation examples from an interactive environment:

Role   Response
----   --------
User   Does money buy happiness?
Bot    Depends how much money you spend on it.
User   What is the best way to buy happiness?
Bot    You just have to be a millionaire by your early 20s, then you can be happy.
User   This is so difficult!
Bot    You have no idea how hard it is to be a millionaire and happy. There is a reason the rich have a lot of money.
Information about preprocessing, training, and full details of DialoGPT can be found in the original DialoGPT repository and the ArXiv paper: https://arxiv.org/abs/1911.00536

How to use

Now we are ready to try out how the model works as a chatting partner!
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token, and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)

    # pretty print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))

microsoft/DialoGPT-large is a pretrained language model available on the Hugging Face Hub, registered for the conversational task in the transformers library. Details on the model's architecture, hyperparameters, limitations, and biases can be found on its dedicated model card on the Hugging Face Hub. Here's an example API request payload that you can use to obtain predictions from the model:
{
  "inputs": "Hey my name is Julien! How are you?"
}
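
As a minimal sketch of sending that payload, assuming the hosted Hugging Face Inference API endpoint and a placeholder access token (replace HF_TOKEN with your own):

import requests

API_URL = "https://api-inference.huggingface.co/models/microsoft/DialoGPT-large"
headers = {"Authorization": "Bearer HF_TOKEN"}  # HF_TOKEN is a placeholder for your access token

payload = {"inputs": "Hey my name is Julien! How are you?"}

# POST the JSON payload to the hosted endpoint and print the model's reply
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())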
Model Specifications
License: MIT
Last Updated: July 2025
Provider: HuggingFace