microsoft-deberta-v2-xxlarge-mnli
Version: 6
HuggingFace
Last updated July 2025

DeBERTa: Decoding-enhanced BERT with Disentangled Attention

DeBERTa improves on the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With 80GB of training data, it outperforms BERT and RoBERTa on the majority of NLU tasks. Please check the official repository for more details and updates. This is the DeBERTa V2 XXLarge model fine-tuned on the MNLI task, with 48 layers and a hidden size of 1536 (1.5B parameters in total).
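If you want to try the checkpoint locally rather than through the hosted endpoint described further below, a minimal sketch with the transformers pipeline API might look like the following. Local inference is an assumption here, not part of the original card, and this 1.5B-parameter model needs a correspondingly large GPU or plenty of CPU memory.

# A minimal local-inference sketch (an assumption; not part of the original card).
# The MNLI head scores a premise/hypothesis pair as contradiction, neutral, or entailment.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="microsoft/deberta-v2-xxlarge-mnli",
)
print(classifier({"text": "I like you.", "text_pair": "I love you."}))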

Fine-tuning on NLU tasks

We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) | SST-2 (Acc) | QNLI (Acc) | CoLA (MCC) | RTE (Acc) | MRPC (Acc/F1) | QQP (Acc/F1) | STS-B (P/S) |
|---|---|---|---|---|---|---|---|---|---|---|
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- | 90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- | 92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- | 92.5/- |
| DeBERTa-Large 1 | 95.5/90.1 | 90.7/88.0 | 91.3/91.1 | 96.5 | 95.3 | 69.5 | 91.0 | 92.6/94.6 | 92.3/- | 92.8/92.5 |
| DeBERTa-XLarge 1 | -/- | -/- | 91.5/91.2 | 97.0 | - | - | 93.1 | 92.1/94.3 | - | 92.9/92.7 |
| DeBERTa-V2-XLarge 1 | 95.8/90.8 | 91.4/88.9 | 91.7/91.6 | 97.5 | 95.8 | 71.1 | 93.9 | 92.0/94.2 | 92.3/89.8 | 92.9/92.9 |
| DeBERTa-V2-XXLarge 1,2 | 96.1/91.4 | 92.2/89.7 | 91.7/91.9 | 97.2 | 96.0 | 72.0 | 93.5 | 93.1/94.9 | 92.7/90.3 | 93.2/93.1 |

Notes.

Run with DeepSpeed:
pip install datasets
pip install deepspeed

# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli/resolve/main/ds_config.json -O ds_config.json

export TASK_NAME=rte
output_dir="ds_results"
num_gpus=8
batch_size=4
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 256 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 3e-6 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 10 \
  --logging_dir $output_dir \
  --deepspeed ds_config.json
You can also run with --sharded_ddp
cd transformers/examples/text-classification/
export TASK_NAME=rte
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge-mnli \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 4 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
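For reference, the command-line recipes above correspond roughly to the Trainer-based sketch below. This is a simplified, hypothetical stand-in for run_glue.py (not the official script), assuming the GLUE RTE split from the datasets library and the same hyperparameters as the commands above.

# Simplified sketch of what run_glue.py does for RTE; hypothetical, not the
# official script. Launch it with torch.distributed/DeepSpeed as shown above.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "microsoft/deberta-v2-xxlarge-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# RTE has 2 labels while the MNLI head has 3, so the classifier is re-initialized.
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=2, ignore_mismatched_sizes=True)

raw = load_dataset("glue", "rte")

def preprocess(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=256)

encoded = raw.map(preprocess, batched=True)

args = TrainingArguments(
    output_dir="ds_results",
    per_device_train_batch_size=4,
    learning_rate=3e-6,
    num_train_epochs=3,
    logging_steps=10,
    fp16=True,
    deepspeed="ds_config.json",  # the config downloaded above
)

Trainer(model=model, args=args,
        train_dataset=encoded["train"],
        eval_dataset=encoded["validation"],
        tokenizer=tokenizer).train()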

Citation

If you find DeBERTa useful for your work, please cite the following paper:
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}

microsoft/deberta-v2-xxlarge-mnli powered by Hugging Face Inference Toolkit

Send Request

You can use cURL or any REST client to send a request to the AzureML endpoint with your AzureML token.
curl <AZUREML_ENDPOINT_URL> \
    -X POST \
    -H "Authorization: Bearer <AZUREML_TOKEN>" \
    -H "Content-Type: application/json" \
    -d '{"inputs":"I like you. I love you"}'

Supported Parameters

  • inputs (string): The text to classify
  • parameters (object):
    • function_to_apply (enum): Possible values: sigmoid, softmax, none.
    • top_k (integer): When specified, limits the output to the top K most probable classes.
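For example, these optional parameters slot into the same request body as above; the payload below is a hypothetical illustration that asks for softmax scores over the top 2 classes.

# Hypothetical request body using the optional parameters described above.
payload = {
    "inputs": "I like you. I love you",
    "parameters": {"function_to_apply": "softmax", "top_k": 2},
}
# Pass this as `json=payload` in the request shown earlier.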
Check the full API specification in the Hugging Face Inference documentation.
Model Specifications
License: MIT
Last Updated: July 2025
Publisher: HuggingFace