PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight)
☆409 · Updated Jun 30, 2025
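As a quick orientation before the list of alternatives, here is a minimal sketch of the SVD-based initialization the title describes: the adapter factors are initialized from the principal singular values and vectors of the pretrained weight, and the remainder is kept as a frozen residual. Names and shapes are illustrative, not the repo's actual API.

```python
import torch

def pissa_init(weight: torch.Tensor, rank: int):
    """Illustrative PiSSA-style split: trainable principal low-rank
    factors plus a frozen residual (a sketch, not the repo's API)."""
    # SVD of the pretrained weight: W = U diag(S) V^T
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    sqrt_s = torch.sqrt(S[:rank])
    # The principal components initialize the trainable adapter factors.
    A = U[:, :rank] * sqrt_s                 # (out_features, rank)
    B = sqrt_s.unsqueeze(1) * Vh[:rank, :]   # (rank, in_features)
    # Frozen residual, so W_res + A @ B reproduces W exactly.
    W_res = weight - A @ B
    return A, B, W_res

# Fine-tuning then updates only A and B; W_res stays frozen.
W = torch.randn(768, 768)
A, B, W_res = pissa_init(W, rank=16)
assert torch.allclose(W_res + A @ B, W, atol=1e-5)
```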
Alternatives and similar repositories for PiSSA
Users interested in PiSSA are comparing it to the libraries listed below.
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆144 · Updated Apr 8, 2025
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (see the DoRA sketch after this list). ☆936 · Updated Oct 1, 2024
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆172 · Updated Jan 29, 2026
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024). ☆53 · Updated Jan 13, 2025
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆369 · Updated Jun 1, 2023
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the GaLore sketch after this list). ☆1,677 · Updated Oct 28, 2024
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning. ☆361 · Updated Aug 7, 2024
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates". ☆473 · Updated Apr 21, 2024
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793). ☆453 · Updated May 13, 2025
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models. ☆84 · Updated Mar 5, 2024
- Stanford NLP Python library for Representation Finetuning (ReFT). ☆1,558 · Updated Jan 14, 2026
- Codebase for Instruction Following without Instruction Tuning. ☆36 · Updated Sep 24, 2024
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely. ☆24 · Updated Jun 26, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… ☆31 · Updated Jan 28, 2026
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning (COLM 2024). ☆32 · Updated May 29, 2024
- GoldFinch and other hybrid transformer components. ☆45 · Updated Jul 20, 2024
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆203 · Updated Jul 17, 2024
- [ACL 2024] Progressive LLaMA with Block Expansion. ☆514 · Updated May 20, 2024
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆459 · Updated Apr 18, 2024
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight). ☆434 · Updated this week
- [NAACL 2025] MiLoRA: Harnessing Minor Singular Components for Parameter-Efficient LLM Finetuning. ☆19 · Updated May 31, 2025
- Official repository for ORPO. ☆471 · Updated May 31, 2024
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models. ☆125 · Updated Feb 15, 2026
- FuseAI Project. ☆590 · Updated Jan 25, 2025
- [NAACL 2024 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models. ☆39 · Updated Jan 9, 2025
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024). ☆67 · Updated Mar 27, 2025
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models. ☆286 · Updated Mar 15, 2025
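Two of the methods above are concrete enough to sketch. DoRA (the ICML 2024 entry) decomposes each weight into magnitude and direction: the low-rank update is merged into the direction, which is normalized column-wise and rescaled by a learned magnitude vector. A minimal sketch under those assumptions, with illustrative names and a PyTorch Linear weight of shape (out, in), not the official code:

```python
import torch

def dora_weight(W0, A, B, m):
    """Illustrative DoRA-style reparameterization (not the official code):
    W' = m * (W0 + B @ A) / ||W0 + B @ A||_col."""
    W = W0 + B @ A                           # direction with the low-rank update merged in
    col_norm = W.norm(dim=0, keepdim=True)   # per-column L2 norm, shape (1, in)
    return m * (W / col_norm)                # m: learned magnitude vector, shape (1, in)

# Rank-8 adapter on a 768x768 layer; B starts at zero so W' equals W0 initially.
W0 = torch.randn(768, 768)
A, B = torch.randn(8, 768), torch.zeros(768, 8)
m = W0.norm(dim=0, keepdim=True)             # magnitude initialized to pretrained column norms
merged = dora_weight(W0, A, B, m)
```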
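GaLore, by contrast, keeps the weights full-rank but projects the gradient into a low-rank subspace obtained from its SVD, runs the optimizer there, and projects the update back. A simplified sketch with plain SGD standing in for Adam (in the actual method the optimizer state lives in the projected space):

```python
import torch

def galore_step(weight, grad, state, rank=4, lr=1e-3, update_proj_gap=200):
    """Simplified GaLore-style update (illustrative; SGD stands in for Adam)."""
    # Periodically refresh the projector from the gradient's top singular vectors.
    if state["step"] % update_proj_gap == 0:
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        state["proj"] = U[:, :rank]          # (m, rank) projection matrix
    P = state["proj"]
    low_rank_grad = P.T @ grad               # (rank, n): optimizer state would live here
    weight -= lr * (P @ low_rank_grad)       # project the update back to full rank
    state["step"] += 1

state = {"step": 0, "proj": None}
W = torch.randn(256, 256)
galore_step(W, torch.randn(256, 256), state)
```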