LoRA-GA · ☆221 · Updated Nov 25, 2025
Alternatives and similar repositories for LoRA-GA
Users interested in LoRA-GA are comparing it to the libraries listed below; minimal sketches of a few of the core update rules follow the list.
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆148 · Updated Apr 8, 2025
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) · ☆423 · Updated Jun 30, 2025
- ☆233 · Updated Jun 24, 2024
- [ICML 2025 Oral] LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently · ☆31 · Updated Oct 22, 2025
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆86 · Updated Mar 5, 2024
- ☆44 · Updated Jul 22, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (sketched after this list) · ☆1,690 · Updated Oct 28, 2024
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · Updated May 19, 2025
- [ICML 2024 Oral] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (sketched after this list) · ☆966 · Updated Mar 24, 2026
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) · ☆381 · Updated Jun 1, 2023
- ☆126 · Updated Jul 6, 2024
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? · ☆118 · Updated Oct 21, 2024
- ☆36 · Updated Aug 23, 2023
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆474 · Updated Apr 21, 2024
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely · ☆24 · Updated Jun 26, 2024
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation · ☆52 · Updated Oct 20, 2025
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning · ☆181 · Updated Jan 29, 2026
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections · ☆21 · Updated Oct 15, 2024
- ☆17 · Updated Dec 7, 2025
- [NeurIPS 2024 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning · ☆238 · Updated Dec 3, 2024
- [NeurIPS 2024] "Mind the Gap between Prototypes and Images in Cross-domain Finetuning" · ☆11 · Updated Nov 15, 2024
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" · ☆35 · Updated Feb 19, 2025
- [NAACL 2024 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ☆39 · Updated Jan 9, 2025
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters · ☆56 · Updated Aug 2, 2025
- [AAAI 2024] FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning · ☆66 · Updated Jan 21, 2024
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning · ☆362 · Updated Aug 7, 2024
- ☆31 · Updated Jun 6, 2025
- The repo for the HiRA paper · ☆37 · Updated Jan 9, 2026
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) · ☆457 · Updated May 13, 2025
- ☆201 · Updated Jul 13, 2024
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models · ☆39 · Updated Feb 27, 2024
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" · ☆123 · Updated Apr 28, 2024
- ☆11 · Updated Sep 9, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… · ☆35 · Updated Mar 9, 2026
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆248 · Updated Jun 15, 2025
- Code for the AAAI 2024 paper "CR-SAM: Curvature Regularized Sharpness-Aware Minimization" · ☆12 · Updated Nov 29, 2024
- ☆22 · Updated Jan 23, 2024
- ☆26 · Updated Nov 23, 2023
- Official code for "Algorithmic Capabilities of Random Transformers" (NeurIPS 2024) · ☆16 · Updated Sep 28, 2024