calpt / awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning
☆193 · Updated last year
Alternatives and similar repositories for awesome-adapter-resources
Users interested in awesome-adapter-resources are comparing it to the repositories listed below.
- ☆179 · Updated last year
- ☆259 · Updated last year
- Must-read Papers on Large Language Model (LLM) Continual Learning · ☆141 · Updated last year
- ☆181 · Updated 8 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆76 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆162 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models · ☆142 · Updated 2 years ago
- A curated list of Model Merging methods. · ☆92 · Updated 8 months ago
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. arXiv:2408.07666. · ☆417 · Updated this week
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch · ☆294 · Updated 2 months ago
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) · ☆529 · Updated 3 years ago
- A Survey on Data Selection for Language Models · ☆234 · Updated last month
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆123 · Updated last year
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. · ☆401 · Updated 8 months ago
- ☆138 · Updated 10 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". · ☆173 · Updated 2 months ago
- PyTorch code for "LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning" · ☆238 · Updated 2 years ago
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… · ☆300 · Updated last year
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" · ☆97 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) · ☆82 · Updated 7 months ago
- Awesome Learn From Model Beyond Fine-Tuning: A Survey · ☆63 · Updated 5 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind · ☆178 · Updated 8 months ago
- ☆128 · Updated 2 years ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… · ☆136 · Updated last year
- Awesome-Low-Rank-Adaptation · ☆102 · Updated 7 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation. · ☆95 · Updated 2 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria · ☆69 · Updated 7 months ago
- Residual Prompt Tuning: a method for faster and better prompt tuning. · ☆54 · Updated 2 years ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. · ☆417 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning · ☆30 · Updated 2 years ago
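To illustrate the adapter-style parameter-efficient fine-tuning idea that ties these resources together (LoRA, DoRA, SoRA, etc.), here is a minimal LoRA-style low-rank adapter sketch in PyTorch. It is not taken from any listed repository; the class name, rank, and scaling values are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = base(x) + (alpha / r) * x @ A^T @ B^T,
    where only A (r x in_features) and B (out_features x r) are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A gets a small random init; B starts at zero so the wrapped
        # layer initially behaves exactly like the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


layer = LoRALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(trainable, total)  # 1024 trainable of 5184 total parameters
```

The point of the pattern is the parameter count: only the two small low-rank matrices receive gradients, so fine-tuning touches roughly 20% of this toy layer's parameters (and far less at transformer scale), which is what makes the adapter methods surveyed above cheap to train and store.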