cheneydon / efficient-bert
This repository contains the code for the Findings of EMNLP 2021 paper "EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation".
☆33 · Updated 2 years ago
Alternatives and similar repositories for efficient-bert
Users interested in efficient-bert are comparing it to the repositories listed below.
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. ☆21 · Updated 2 years ago
- Code for EMNLP 2020 paper CoDIR ☆41 · Updated 2 years ago
- [ACL-IJCNLP 2021] "EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets" by Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, … ☆18 · Updated 3 years ago
- LV-BERT: Exploiting Layer Variety for BERT (Findings of ACL 2021) ☆18 · Updated 2 years ago
- Code for the PAPA paper ☆27 · Updated 2 years ago
- ☆16 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆61 · Updated 3 years ago
- Parameter Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Implementation of the retriever distillation procedure as outlined in the paper "Distilling Knowledge from Reader to Retriever" ☆32 · Updated 4 years ago
- The official implementation of You Only Compress Once: Towards Effective and Elastic BERT Compression via Exploit-Explore Stochastic Natu… ☆48 · Updated 3 years ago
- Block Sparse movement pruning ☆80 · Updated 4 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- The implementation of multi-branch attentive Transformer (MAT). ☆33 · Updated 4 years ago
- ☆13 · Updated 3 years ago
- Code for "Visualizing and Understanding Object Detector" ☆20 · Updated 4 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆41 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch ☆46 · Updated 4 years ago
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 3 years ago
- Staged Training for Transformer Language Models ☆32 · Updated 3 years ago
- Code for the paper "Minimizing FLOPs to Learn Efficient Sparse Representations", published at ICLR 2020 ☆20 · Updated 5 years ago
- Supplementary code for Editable Neural Networks, an ICLR 2020 submission. ☆46 · Updated 5 years ago
- PyTorch implementation of Pay Attention to MLPs ☆40 · Updated 3 years ago
- Sparse Attention with Linear Units ☆17 · Updated 4 years ago
- Official implementation for the paper "Relational Surrogate Loss Learning" (ICLR 2022) ☆36 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆49 · Updated 4 years ago
- ☆57 · Updated 4 years ago
- ☆21 · Updated 2 years ago
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- ☆14 · Updated 2 years ago