cambridgeltl / autopeft
AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al.; TACL 2024)
☆45 · Updated last year
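For context: AutoPEFT automatically searches over PEFT configuration choices (e.g., which modules to adapt and at what capacity) rather than fixing a single configuration by hand. As an illustration only, and not AutoPEFT's own API, the sketch below builds one hand-picked LoRA configuration with the separate Hugging Face `peft` library; the base model and hyperparameter values are arbitrary assumptions.

```python
# Illustrative sketch: one manually chosen PEFT configuration, built with the
# Hugging Face `peft` library (a separate project from AutoPEFT). AutoPEFT's
# contribution is to search over such configuration choices automatically.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# The base model and hyperparameters below are arbitrary assumptions.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

config = LoraConfig(
    task_type=TaskType.SEQ_CLS,  # sequence classification head
    r=8,                         # low-rank bottleneck dimension
    lora_alpha=16,               # scaling factor for the LoRA update
    lora_dropout=0.1,
)

peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable
```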
Alternatives and similar repositories for autopeft
Users interested in autopeft are comparing it to the libraries listed below.
- Official repository for "Parameter-Efficient Multi-task Tuning via Attentional Mixtures of Soft Prompts" (EMNLP 2022) ☆101 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆32 · Updated last year
- This package implements THOR: Transformer with Stochastic Experts. ☆63 · Updated 3 years ago
- ☆131 · Updated 10 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated last month
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆70 · Updated 6 months ago
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- Learning adapter weights from task descriptions ☆18 · Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆39 · Updated last year
- ☆35 · Updated last year
- ☆54 · Updated 2 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆59 · Updated 3 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆30 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆76 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 2 years ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated 11 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models https://arxiv.org/pdf/2411.02433 ☆25 · Updated 6 months ago
- Repo for ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆23 · Updated 2 years ago
- Lightweight Adapting for Black-Box Large Language Models ☆22 · Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 · Updated last year
- ☆179 · Updated last year
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- ☆67 · Updated 3 years ago
- Implementation of Gradient Information Optimization (GIO) for effective and scalable training data selection ☆14 · Updated last year
- Official PyTorch Implementation of EMoE: Unlocking Emergent Modularity in Large Language Models [main conference @ NAACL 2024] ☆31 · Updated last year
- Codebase for Hyperdecoders https://arxiv.org/abs/2203.08304 ☆11 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year