castorini/DeeBERT
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference
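DeeBERT attaches a classifier ("off-ramp") after each transformer layer and, at inference time, exits at the first layer whose prediction entropy falls below a threshold. The PyTorch sketch below illustrates that entropy-based exit rule on a toy encoder; the layer sizes, module names, and threshold value are illustrative, not the repo's actual configuration.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Toy transformer stack with an internal classifier ("off-ramp")
    after every layer, in the spirit of DeeBERT. Shapes and the
    threshold are illustrative, not taken from this repository."""

    def __init__(self, hidden=128, num_layers=4, num_classes=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
            for _ in range(num_layers)
        )
        self.ramps = nn.ModuleList(
            nn.Linear(hidden, num_classes) for _ in range(num_layers)
        )

    def forward(self, x, entropy_threshold=0.3):
        for layer, ramp in zip(self.layers, self.ramps):
            x = layer(x)
            logits = ramp(x[:, 0])  # classify from the first token's state
            probs = torch.softmax(logits, dim=-1)
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1)
            if entropy.max() < entropy_threshold:  # confident enough: exit early
                return logits
        return logits  # fell through: used the full stack

model = EarlyExitEncoder()
model.eval()
tokens = torch.randn(1, 16, 128)  # (batch, seq_len, hidden) dummy input
with torch.no_grad():
    print(model(tokens, entropy_threshold=0.5).shape)  # torch.Size([1, 2])
```

Lowering the threshold trades speed for accuracy: a stricter (smaller) threshold forces more inputs through deeper layers, while a looser one exits more inputs at shallow layers.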
Related projects:
- Code for the paper "Are Sixteen Heads Really Better than One?"
- An implementation of "PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination", a method to reduce BERT inference time.
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit".☆63Updated 3 years ago
- A curated list of Early Exiting papers, benchmarks, and misc.
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408)
- Code for "SkipBERT: Efficient Inference with Shallow Layer Skipping" (ACL 2022).
- PyTorch implementation of "Patient Knowledge Distillation for BERT Model Compression".
- Must-read papers on improving efficiency for pre-trained language models.
- Code release for the arXiv paper "Revisiting Few-sample BERT Fine-tuning" (https://arxiv.org/abs/2006.05987).
- A pre-trained model with a multi-exit transformer architecture.
- [NeurIPS 2020] "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", Tianlong Chen et al.
- [KDD'22] Learned Token Pruning for Transformers
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021).
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting".
- A PyTorch package implementing "MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation" (NAACL 2022).
- Block-sparse movement pruning
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020).
- Code for the ACL 2019 paper "Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned".
- [ICLR 2019] Multilingual Neural Machine Translation with Knowledge Distillation
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers
- Source code for "Efficient Training of BERT by Progressively Stacking"
- Pretraining code for CPM-1
- Open Source Neural Machine Translation in PyTorch
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization
- Code for the ACL 2022 paper "Transkimmer: Transformer Learns to Layer-wise Skim".
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
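For contrast with DeeBERT's entropy rule, "BERT Loses Patience" (PABEE, linked above) exits once several consecutive internal classifiers agree on the predicted label. Below is a minimal sketch of that patience criterion, assuming per-layer logits have already been computed; the function name, patience value, and dummy logits are all illustrative.

```python
import torch

def patience_exit(ramp_logits, patience=2):
    """Patience-based early exit in the spirit of PABEE: stop once
    `patience` consecutive internal classifiers agree on the label.
    `ramp_logits` is a list of per-layer logit tensors (illustrative)."""
    streak, prev = 0, None
    for depth, logits in enumerate(ramp_logits, start=1):
        pred = logits.argmax(dim=-1)
        if prev is not None and torch.equal(pred, prev):
            streak += 1  # same prediction as the previous layer
        else:
            streak = 0   # prediction changed; reset the counter
        prev = pred
        if streak >= patience:
            return pred, depth  # exited early at this layer
    return prev, len(ramp_logits)  # used the full stack

# Dummy per-layer logits for one example over a 4-layer stack.
layers = [torch.tensor([[0.2, 0.8]]), torch.tensor([[0.1, 0.9]]),
          torch.tensor([[0.3, 0.7]]), torch.tensor([[0.4, 0.6]])]
print(patience_exit(layers, patience=2))  # predictions agree -> exits at layer 3
```

Unlike an entropy threshold, this criterion needs no calibrated confidence scores, at the cost of always running at least `patience + 1` layers.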