JetRunner / PABEE
Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit".
☆65 · Updated 4 years ago
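For orientation, the "patience" mechanism the paper's title alludes to works roughly like this: an internal classifier is attached to each transformer layer, and inference stops as soon as the intermediate prediction has stopped changing for a fixed number of consecutive layers. The snippet below is a minimal sketch of that idea only; the function name `patience_based_exit` and the toy logits are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch (not the repo's API) of patience-based early exit:
# exit once the layer-wise prediction is unchanged for `patience`
# consecutive layer-to-layer comparisons.
from typing import List, Tuple

import torch


def patience_based_exit(
    layer_logits: List[torch.Tensor], patience: int = 3
) -> Tuple[int, torch.Tensor]:
    """Return (exit_layer_index, logits) for one example.

    layer_logits: per-layer classifier outputs, each of shape (num_classes,).
    patience: how many consecutive unchanged predictions trigger an exit.
    """
    unchanged = 0
    prev_pred = None
    for i, logits in enumerate(layer_logits):
        pred = int(torch.argmax(logits))
        # Count consecutive layers whose prediction matches the previous one.
        unchanged = unchanged + 1 if pred == prev_pred else 0
        prev_pred = pred
        if unchanged >= patience:
            return i, logits  # prediction has stabilized: exit early
    # No early exit: fall through to the final layer.
    return len(layer_logits) - 1, layer_logits[-1]


# Toy usage: fake logits from a 12-layer model with 2 classes.
fake = [torch.randn(2) for _ in range(12)]
layer, logits = patience_based_exit(fake, patience=3)
print(f"exited at layer {layer} with logits {logits.tolist()}")
```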
Alternatives and similar repositories for PABEE
Users interested in PABEE are comparing it to the repositories listed below.
- Code for the EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- Code for the paper "True Few-Shot Learning in Language Models" (https://arxiv.org/abs/2105.11447) ☆145 · Updated 3 years ago
- Method to improve inference time for BERT. An implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated 3 months ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 3 years ago
- ☆48 · Updated 5 years ago
- Code for the AAAI 2021 paper "A Theoretical Analysis of the Repetition Problem in Text Generation" ☆54 · Updated 2 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆73 · Updated 3 years ago
- [NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall (https://arxiv.org/abs/2104.05240) ☆168 · Updated 2 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- EMNLP 2021 - CTC: A Unified Framework for Evaluating Natural Language Generation ☆97 · Updated 2 years ago
- ☆62 · Updated 3 years ago
- EMNLP BlackboxNLP 2020: Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples ☆24 · Updated 4 years ago
- ☆117 · Updated 3 years ago
- [NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining ☆117 · Updated 2 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆157 · Updated 3 years ago
- This repository contains the code for "How many data points is a prompt worth?" ☆48 · Updated 4 years ago
- Code and datasets for the EMNLP 2020 paper "Calibration of Pre-trained Transformers" ☆61 · Updated 2 years ago
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Ya… ☆141 · Updated 3 years ago
- Few-shot NLP benchmark for unified, rigorous evaluation ☆91 · Updated 3 years ago
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data ☆57 · Updated 4 years ago
- [EMNLP'21] Mirror-BERT: Converting Pretrained Language Models to universal text encoders without labels. ☆78 · Updated 2 years ago
- ☆66 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- Code and pre-trained models for the paper "Segatron: Segment-aware Transformer for Language Modeling and Understanding" ☆18 · Updated 2 years ago
- EMNLP 2021: Single-dataset Experts for Multi-dataset Question-Answering ☆69 · Updated 3 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 3 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 3 years ago
- Implementation of Mixout with PyTorch ☆75 · Updated 2 years ago
- Uncertainty-aware Self-training ☆121 · Updated last year
- [ACL'20] Highway Transformer: A Gated Transformer. ☆33 · Updated 3 years ago