morningmoni / UniPELT
Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022)
☆61 · Updated 3 years ago
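UniPELT unifies several parameter-efficient tuning (PELT) submodules, such as adapters, prefix-tuning, and LoRA, and learns input-dependent gates that control each submodule's contribution. The sketch below is a minimal, hypothetical illustration of that gating idea for a single LoRA submodule only; the class name `GatedLoRALinear`, the rank, and the gate design are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch (assumed names, not the UniPELT repo's API): a frozen linear
# layer plus a LoRA update whose contribution is scaled by a learned sigmoid
# gate computed from the layer input, in the spirit of UniPELT-style gating.
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):  # hypothetical class name
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False              # backbone weights stay frozen
        d_in, d_out = base.in_features, base.out_features
        self.lora_a = nn.Linear(d_in, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.lora_b.weight)       # update starts at zero
        self.gate = nn.Linear(d_in, 1)           # per-token scalar gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(x))          # gate value in (0, 1)
        return self.base(x) + g * self.lora_b(self.lora_a(x))

# Usage: wrap an existing projection; only the gate and LoRA weights train.
layer = GatedLoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 16, 768)).shape)      # torch.Size([2, 16, 768])
```

The full method applies the same kind of gate to each submodule (adapter, prefix, LoRA), letting the model learn per input which mechanism to rely on.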
Alternatives and similar repositories for UniPELT
Users interested in UniPELT are comparing it to the repositories listed below.
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆102 · Updated 2 years ago
- ☆156 · Updated 3 years ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Released code for our ICLR 2023 paper. ☆65 · Updated 2 years ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆95 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆130 · Updated 2 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆64 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆47 · Updated 2 years ago
- Code for lifelong few-shot language learning ☆55 · Updated 3 years ago
- ☆129 · Updated 2 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆86 · Updated 2 years ago
- ☆35 · Updated last year
- Contrastive decoding ☆202 · Updated 2 years ago
- [NAACL 2022] "Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training", Yuanxin Liu, Fandong Meng, Zheng Lin, Pe… ☆15 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆108 · Updated 3 years ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆71 · Updated 8 months ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆41 · Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated 2 years ago
- Code for "Inducer-tuning: Connecting Prefix-tuning and Adapter-tuning" (EMNLP 2022) and "Empowering Parameter-Efficient Transfer Learning… ☆11 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners"