QingruZhang / AdaLoRA
AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
☆365 · Updated 2 years ago
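For readers who want to try the method, AdaLoRA is also available through the Hugging Face PEFT library. The sketch below assumes that integration; the model name, ranks, and schedule values are illustrative assumptions, not the repository's or paper's exact configuration.

```python
# Minimal sketch of applying AdaLoRA via Hugging Face PEFT
# (pip install peft transformers). All hyperparameters below are
# illustrative assumptions, not the paper's exact recipe.
from transformers import AutoModelForSequenceClassification
from peft import AdaLoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

config = AdaLoraConfig(
    init_r=12,        # initial rank of each adapted weight matrix
    target_r=4,       # average target rank after budget pruning
    tinit=200,        # warmup steps before rank reallocation begins
    tfinal=1000,      # steps over which the rank budget is annealed
    deltaT=10,        # interval (in steps) between budget reallocations
    lora_alpha=32,
    lora_dropout=0.1,
    total_step=3000,  # total training steps; assumed for this sketch
    target_modules=["query", "value"],  # RoBERTa attention projections
)

model = get_peft_model(model, config)
model.print_trainable_parameters()

# During training, AdaLoRA redistributes the rank budget according to
# importance scores; PEFT exposes this via update_and_allocate, which
# is called once per optimizer step:
#   model.base_model.update_and_allocate(global_step)
```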
Alternatives and similar repositories for AdaLoRA
Users interested in AdaLoRA are comparing it to the repositories listed below:
- ☆218 · Updated 2 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆395 · Updated last year
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆408 · Updated 7 months ago
- ☆175 · Updated last year
- ☆196 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆233 · Updated last year
- [SIGIR'24] The official implementation of MOELoRA. ☆186 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆512 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- ☆273 · Updated 2 years ago
- Must-read Papers on Parameter-Efficient Tuning (Delta Tuning) Methods for Pre-trained Models. ☆286 · Updated 2 years ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,004 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆542 · Updated 3 years ago
- ☆125 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆666 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆250 · Updated 10 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆169 · Updated this week
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities. ACM Computing Surveys, 2025. ☆659 · Updated this week
- ☆64 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆456 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆409 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆190 · Updated last year
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆249 · Updated 2 years ago
- ☆152 · Updated last year
- Rectified Rotary Position Embeddings ☆387 · Updated last year
- A curated reading list of research on Mixture-of-Experts (MoE). ☆659 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆639 · Updated last year
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆143 · Updated 9 months ago