cambridgeltl / autopeft
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al.; TACL)
☆42 · Updated 7 months ago
Related projects
Alternatives and complementary repositories for autopeft
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆52 · Updated last month
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆97 · Updated last year
- Retrieval as Attention ☆82 · Updated last year
- Revisiting Efficient Training Algorithms for Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆133 · Updated 2 years ago
- Source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 7 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models ☆47 · Updated last month
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆27 · Updated 2 months ago
- Source code of the EMNLP 2023 main conference paper "Sparse Low-rank Adaptation of Pre-trained Language Models" ☆69 · Updated 8 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- Long Context Extension and Generalization in LLMs ☆39 · Updated last month
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆58 · Updated 2 years ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆24 · Updated 2 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆41 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆63 · Updated last year
- Repository for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆19 · Updated last year
- PyTorch implementation of MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022) ☆97 · Updated 2 years ago
- Official code of the paper "Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…" ☆68 · Updated 7 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆69 · Updated 8 months ago
- Code release for "Dataless Knowledge Fusion by Merging Weights of Language Models" (https://openreview.net/forum?id=FCnohuR6AnM) ☆84 · Updated last year
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆28 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆15 · Updated 5 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆25 · Updated 7 months ago