microsoft-codebert-base-mlm
Version: 8
CodeBERT-base-mlm
Pretrained weights for CodeBERT: A Pre-Trained Model for Programming and Natural Languages.
Training Data
The model is trained on the code corpus of CodeSearchNet.
Training Objective
This model is initialized with RoBERTa-base and trained with a simple MLM (masked language modeling) objective.
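For illustration, here is a minimal sketch of what the MLM objective looks like in practice, using the generic transformers data collator. This is not the original training script, and the 15% masking probability is the standard BERT/RoBERTa default, assumed here.
from transformers import RobertaTokenizer, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm')
# Randomly replace ~15% of input tokens with <mask>; the labels keep the
# original token ids, so the model learns to reconstruct the masked code.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
batch = collator([tokenizer("def add(a, b): return a + b")])
print(tokenizer.decode(batch['input_ids'][0]))
Usage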
from transformers import RobertaTokenizer, RobertaForMaskedLM, pipeline

# Load the pretrained CodeBERT MLM checkpoint and its tokenizer
model = RobertaForMaskedLM.from_pretrained('microsoft/codebert-base-mlm')
tokenizer = RobertaTokenizer.from_pretrained('microsoft/codebert-base-mlm')

# <mask> marks the position the model should fill in
code_example = "if (x is not None) <mask> (x>1)"

fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
outputs = fill_mask(code_example)
print(outputs)
{'sequence': '<s> if (x is not None) and (x>1)</s>', 'score': 0.6049249172210693, 'token': 8}
{'sequence': '<s> if (x is not None) or (x>1)</s>', 'score': 0.30680200457572937, 'token': 50}
{'sequence': '<s> if (x is not None) if (x>1)</s>', 'score': 0.02133703976869583, 'token': 114}
{'sequence': '<s> if (x is not None) then (x>1)</s>', 'score': 0.018607674166560173, 'token': 172}
{'sequence': '<s> if (x is not None) AND (x>1)</s>', 'score': 0.007619690150022507, 'token': 4248}
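The pipeline also accepts optional keyword arguments. As a sketch, the snippet below restricts scoring to a candidate set; targets and top_k are standard FillMaskPipeline options in recent transformers versions.
# Score only the two candidate tokens and return both predictions.
# The leading space is needed because RoBERTa's byte-level BPE stores
# ' and' / ' or' (with the space) as single vocabulary entries.
outputs = fill_mask(code_example, targets=[' and', ' or'], top_k=2)
print(outputs)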
Reference
- Bimodal CodeBERT trained with MLM+RTD objective (suitable for code search and document generation)
- 🤗 Hugging Face's CodeBERTa (small size, 6 layers)
Citation
@misc{feng2020codebert,
title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
year={2020},
eprint={2002.08155},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
microsoft/codebert-base-mlm powered by Hugging Face Inference Toolkit
Send Request
You can use cURL or any REST client to send a request to the AzureML endpoint with your AzureML token.
curl <AZUREML_ENDPOINT_URL> \
-X POST \
-H "Authorization: Bearer <AZUREML_TOKEN>" \
-H "Content-Type: application/json" \
-d '{"inputs":"The answer to the universe is undefined."}'
Supported Parameters
- inputs (string): The text with masked tokens
- parameters (object):
- top_k (integer): When passed, overrides the number of predictions to return.
- targets (string[]): When passed, the model limits scoring to the given targets instead of searching the whole vocabulary. If a provided target is not in the model vocabulary, it is tokenized and the first resulting token is used (with a warning; this can be slower). An example request using these parameters follows below.
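For illustration, here is a minimal Python sketch of a request that sets both parameters. The placeholder URL and token are the same stand-ins as in the cURL example above, and the requests library is assumed to be available.
import requests

# Substitute your actual AzureML endpoint URL and token for the placeholders.
url = "<AZUREML_ENDPOINT_URL>"
headers = {
    "Authorization": "Bearer <AZUREML_TOKEN>",
    "Content-Type": "application/json",
}
payload = {
    "inputs": "if (x is not None) <mask> (x>1)",
    # Limit scoring to two candidate tokens and return both predictions.
    "parameters": {"top_k": 2, "targets": [" and", " or"]},
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())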
Model Specifications
License: Unknown
Last Updated: July 2025
Provider: HuggingFace