TobiasLee / Awesome-Efficient-PLM
Must-read papers on improving efficiency for pre-trained language models.
☆102 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Efficient-PLM:
Users interested in Awesome-Efficient-PLM are comparing it to the repositories listed below.
- Notes from my introductory NLP course at Fudan University ☆37 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆192 · Updated last year
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆57 · Updated 3 years ago
- ☆78 · Updated 2 years ago
- A pre-trained model with a multi-exit transformer architecture ☆55 · Updated 2 years ago
- Domain adaptation in NLP ☆52 · Updated 3 years ago
- ☆65 · Updated 9 months ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- ☆95 · Updated 4 months ago
- A compilation of researchers and research institutions working on text generation, from industry and academia in China and abroad. In no particular order; still being updated, and contributions are welcome ☆48 · Updated 4 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆96 · Updated last year
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- ☆53 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆45 · Updated 2 years ago
- ☆116 · Updated 2 years ago
- Code for the ACL 2022 paper "SkipBERT: Efficient Inference with Shallow Layer Skipping" ☆16 · Updated 2 years ago
- Code for the RecAdam paper "Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting" ☆115 · Updated 4 years ago
- A paper list of pre-trained language models (PLMs) ☆80 · Updated 3 years ago
- Code for pre-training CPM-1 ☆51 · Updated 3 years ago
- ICML 2022: "Black-Box Tuning for Language-Model-as-a-Service" & EMNLP 2022: "BBTv2: Towards a Gradient-Free Future with Large Language Model…" ☆265 · Updated 2 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆84 · Updated 2 years ago
- reStructured Pre-training ☆98 · Updated 2 years ago
- Group meeting records for Baobao Chang's group at Peking University ☆25 · Updated 3 years ago
- Example code and baseline implementation for Arena Contest 3, the large-scale pre-training tuning competition ☆38 · Updated 2 years ago
- ☆39 · Updated last year