cheneydon / efficient-bert
This repository contains the code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation".
☆32 · Updated last year
Related projects
Alternatives and complementary repositories for efficient-bert
- [ACL-IJCNLP 2021] "EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets" by Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, … ☆17 · Updated 2 years ago
- Code for the EMNLP 2020 paper "CoDIR" ☆41 · Updated 2 years ago
- The official implementation of "You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu…" ☆48 · Updated 3 years ago
- ☆16 · Updated 3 years ago
- The implementation of the multi-branch attentive Transformer (MAT) ☆33 · Updated 4 years ago
- (ACL-IJCNLP 2021) "Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models" ☆21 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆44 · Updated 2 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆72 · Updated 3 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- Implementation of COCO-LM ("Correcting and Contrasting Text Sequences for Language Model Pretraining") in PyTorch ☆45 · Updated 3 years ago
- A small framework that mimics PyTorch using CuPy or NumPy ☆27 · Updated 2 years ago
- Supplementary code for "Editable Neural Networks", an ICLR 2020 submission ☆46 · Updated 4 years ago
- Block-sparse movement pruning ☆78 · Updated 3 years ago
- ☆22 · Updated 3 years ago
- Implementation of the retriever distillation procedure outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- A single-model, multi-scale VAE based on the Transformer ☆53 · Updated 3 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆59 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆15 · Updated last year
- Official implementation of the ICLR 2022 paper "Relational Surrogate Loss Learning" ☆37 · Updated 2 years ago
- Source code for the EMNLP 2020 long paper "Token-level Adaptive Training for Neural Machine Translation" ☆20 · Updated 2 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated last year
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- ☆21 · Updated last year
- ☆10 · Updated 3 years ago
- Role-Wise Data Augmentation for Knowledge Distillation ☆18 · Updated 2 years ago
- Staged Training for Transformer Language Models ☆30 · Updated 2 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆35 · Updated 3 years ago