TobiasLee / Awesome-Efficient-PLM
Must-read papers on improving the efficiency of pre-trained language models.
☆103 · Updated 2 years ago
Alternatives and similar repositories for Awesome-Efficient-PLM:
Users interested in Awesome-Efficient-PLM are comparing it to the repositories listed below.
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆59 · Updated 3 years ago
- A paper list of pre-trained language models (PLMs). ☆80 · Updated 3 years ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models https://arxiv.org/abs/2204.00408 ☆196 · Updated last year
- Notes from my introduction to NLP at Fudan University ☆37 · Updated 3 years ago
- ☆33 · Updated 3 years ago
- A pre-trained model with a multi-exit transformer architecture. ☆55 · Updated 2 years ago
- ☆66 · Updated 10 months ago
- ☆78 · Updated 2 years ago
- ☆53 · Updated 2 years ago
- ☆116 · Updated 2 years ago
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆267 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Group meeting records for Baobao Chang's group at Peking University ☆26 · Updated 3 years ago
- reStructured Pre-training ☆98 · Updated 2 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 2 years ago
- Code associated with the ACL 2022 paper "SkipBERT: Efficient Inference with Shallow Layer Skipping" ☆16 · Updated 2 years ago
- Domain adaptation in NLP ☆53 · Updated 3 years ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- Paradigm shift in natural language processing ☆42 · Updated 2 years ago
- Calculating FLOPs of Pre-trained Models in NLP ☆18 · Updated 3 years ago
- Example code and baseline implementation for Arena Challenge 3, the large-scale pre-training tuning competition ☆38 · Updated 2 years ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆47 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- Pretrain CPM-1 ☆51 · Updated 3 years ago
- ☆61 · Updated 2 years ago
- [NAACL'22] TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning ☆93 · Updated 2 years ago