thunlp / TR-BERT
Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference"
☆48Updated 3 years ago
Alternatives and similar repositories for TR-BERT
Users interested in TR-BERT are comparing it to the repositories listed below
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning☆94Updated 3 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…☆62Updated 3 months ago
- [ACL'20] Highway Transformer: A Gated Transformer.☆33Updated 4 years ago
- Code for EMNLP 2020 paper CoDIR☆41Updated 3 years ago
- [NAACL'22-Findings] Dataset for "Retrieval-Augmented Multilingual Keyphrase Generation with Retriever-Generator Iterative Training"☆18Updated 3 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit".☆66Updated 4 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models"☆38Updated 3 years ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation"☆20Updated 4 years ago
- [ACL'21 Findings] Why Machine Reading Comprehension Models Learn Shortcuts?☆16Updated 2 years ago
- ☆116Updated 3 years ago
- Code for our EMNLP 2020 paper "AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network"☆19Updated 3 years ago
- Paradigm shift in natural language processing☆42Updated 3 years ago
- [EACL'21] Non-Autoregressive with Pretrained Language Model☆61Updated 3 years ago
- Code for the COLING 2022 paper "uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers"☆19Updated 3 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie…☆26Updated 3 years ago
- ☆22Updated 4 years ago
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi…☆49Updated 4 years ago
- Code for ACL 2021 paper: Accelerating BERT Inference for Sequence Labeling via Early-Exit☆28Updated 3 years ago
- Open-Retrieval Conversational Machine Reading: A new setting & OR-ShARC dataset☆13Updated 3 years ago
- EMNLP 2021: Single-dataset Experts for Multi-dataset Question-Answering☆68Updated 4 years ago
- ☆45Updated 4 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021.☆57Updated 3 years ago
- The source code of the DR-BERT model and baselines☆38Updated 4 years ago
- Code for EMNLP 2021 paper: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting☆17Updated 4 years ago
- ☆15Updated 4 years ago
- ☆67Updated 4 years ago
- ☆16Updated 4 years ago
- Source code for COLING 2022 paper "Automatic Label Sequence Generation for Prompting Sequence-to-sequence Models"☆24Updated 3 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li…☆21Updated 2 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding"☆18Updated 3 years ago