castorini / berxit
☆21 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for berxit
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆58 · Updated last year
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit". ☆64 · Updated 3 years ago
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆44 · Updated 2 years ago
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆100 · Updated 4 years ago
- ☆42 · Updated 4 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆51 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- ☆95 · Updated 2 years ago
- Pytorch implementation of paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆71 · Updated 2 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels. ☆75 · Updated 2 years ago
- Source code for paper: Knowledge Inheritance for Pre-trained Language Models ☆38 · Updated 2 years ago
- Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization (ACL 2021) ☆17 · Updated 3 years ago
- No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models (ICLR 2022) ☆29 · Updated 2 years ago
- Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning (ICLR 2021) ☆24 · Updated 3 years ago
- The implementation of "Neural Machine Translation without Embeddings", NAACL 2021 ☆33 · Updated 3 years ago
- DisCo Transformer for Non-autoregressive MT ☆78 · Updated 2 years ago
- Source code for <Sequence-Level Training for Non-Autoregressive Neural Machine Translation>. ☆23 · Updated 2 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆31 · Updated 2 years ago
- Source code for the EMNLP 2020 long paper <Token-level Adaptive Training for Neural Machine Translation>. ☆20 · Updated 2 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated last year
- ACL22 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost ☆39 · Updated 11 months ago
- Implementation of ICLR 2020 paper "Revisiting Self-Training for Neural Sequence Generation" ☆47 · Updated 2 years ago
- ☆20 · Updated 3 years ago
- Staged Training for Transformer Language Models ☆30 · Updated 2 years ago
- Code for ACL 2023 paper titled "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆95 · Updated last year
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 3 years ago