microsoft/tapex-large

TAPEX (large-sized model)

TAPEX was proposed in TAPEX: Table Pre-training via Learning a Neural SQL Executor by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. The original repo can be found here.

Model description

TAPEX (Table Pre-training via Execution) is a conceptually simple and empirically powerful pre-training approach that equips existing models with table reasoning skills. TAPEX realizes table pre-training by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries. TAPEX is based on the BART architecture, a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder.
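
As a quick orientation, the checkpoint loads with the standard transformers classes. Below is a minimal sketch (assuming transformers, torch, and pandas are installed; the toy table and question are made up for illustration) showing how TAPEX flattens a table together with a question into a single input sequence. Note that it only prepares inputs; this checkpoint is not meant to execute SQL (see Intended Uses below).

import pandas as pd
from transformers import TapexTokenizer, BartForConditionalGeneration

# Load the tokenizer and the BART-based TAPEX model
tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large")

# Illustrative toy table; TAPEX linearizes it together with the question
data = {"year": [1896, 1900, 1904], "city": ["athens", "paris", "st. louis"]}
table = pd.DataFrame.from_dict(data)

encoding = tokenizer(
    table=table,
    query="in which year did paris host the games?",
    return_tensors="pt",
)
print(encoding.input_ids.shape)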

Intended Uses

⚠️ This model checkpoint is ONLY intended for fine-tuning on downstream tasks; you CANNOT use it to simulate neural SQL execution, i.e., to employ TAPEX to execute a SQL query on a given table. The checkpoint that can neurally execute SQL queries is available here.
This separation into two models for the two intended uses is due to a known issue in BART-large; we recommend readers see this comment for more details.

How to Fine-tune

Please find the fine-tuning script here.
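
For orientation only, a minimal single-step training sketch follows. It is not the official script linked above: the toy table, question, and answer are hypothetical, and the answer-only tokenizer call for label encoding is assumed to behave as described in the transformers documentation for TapexTokenizer.

import pandas as pd
import torch
from transformers import TapexTokenizer, BartForConditionalGeneration

tokenizer = TapexTokenizer.from_pretrained("microsoft/tapex-large")
model = BartForConditionalGeneration.from_pretrained("microsoft/tapex-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Hypothetical toy (table, question, answer) triple
table = pd.DataFrame.from_dict({"city": ["paris", "athens"], "year": [1900, 1896]})
encoding = tokenizer(table=table, query="which city hosted in 1900?", return_tensors="pt")
labels = tokenizer(answer="paris", return_tensors="pt").input_ids  # target-side encoding

# One optimization step on the seq2seq loss
model.train()
loss = model(**encoding, labels=labels).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()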

BibTeX entry and citation info

@inproceedings{liu2022tapex,
    title={{TAPEX}: Table Pre-training via Learning a Neural {SQL} Executor},
    author={Qian Liu and Bei Chen and Jiaqi Guo and Morteza Ziyadi and Zeqi Lin and Weizhu Chen and Jian-Guang Lou},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=O50443AsCP}
}

microsoft/tapex-large is a pre-trained language model available on the Hugging Face Hub, tagged for the table-question-answering task in the transformers library. Details on the model's architecture, hyperparameters, limitations, and biases can be found on its dedicated Model Card on the Hugging Face Hub. Here's an example API request payload; for the table-question-answering task, the inputs field carries both a query and a table:
{
  "inputs": {
    "query": "How many stars does the transformers repository have?",
    "table": {
      "Repository": ["Transformers", "Datasets", "Tokenizers"],
      "Stars": ["36542", "4512", "3934"]
    }
  }
}
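A short sketch of sending that payload to the hosted Inference API with Python's requests library. The endpoint URL follows the standard Hugging Face pattern, the token below is a placeholder, and the table contents are illustrative:

import requests

API_URL = "https://api-inference.huggingface.co/models/microsoft/tapex-large"
headers = {"Authorization": "Bearer <YOUR_HF_API_TOKEN>"}  # placeholder token

payload = {
    "inputs": {
        "query": "How many stars does the transformers repository have?",
        "table": {
            "Repository": ["Transformers", "Datasets", "Tokenizers"],
            "Stars": ["36542", "4512", "3934"],
        },
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())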
Model Specifications

License: MIT
Last Updated: July 2025
Provider: Hugging Face