[EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models
☆85 · Updated Mar 5, 2024
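SoRA is a LoRA-style adapter whose rank is pruned adaptively during training. As rough orientation for the comparisons below, here is a minimal sketch of that idea, assuming the gating scheme described in the paper: a learnable gate vector on the rank dimension of a LoRA pair, sparsified by a proximal (soft-thresholding) step for an L1 penalty. Module and method names are illustrative, not this repository's API.

```python
# Minimal sketch of a sparse low-rank adapter in the spirit of SoRA (illustrative, not the repo's API).
import torch
import torch.nn as nn


class SparseLowRankAdapter(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)   # A: down-projection
        self.up = nn.Linear(rank, out_features, bias=False)    # B: up-projection
        self.gate = nn.Parameter(torch.ones(rank))             # g: gate over ranks, sparsified in training
        nn.init.zeros_(self.up.weight)                          # adapter starts as a no-op update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Delta(x) = B diag(g) A x; the frozen base layer's output is added outside this module.
        return self.up(self.gate * self.down(x))

    @torch.no_grad()
    def prox_l1_(self, lr: float, lam: float) -> None:
        # Soft-threshold the gate (proximal step for an L1 penalty),
        # driving unneeded rank components exactly to zero.
        g = self.gate
        g.copy_(torch.sign(g) * torch.clamp(g.abs() - lr * lam, min=0.0))


# Usage sketch: apply the proximal step after each optimizer step.
adapter = SparseLowRankAdapter(768, 768, rank=8)
h = adapter(torch.randn(2, 16, 768))
adapter.prox_l1_(lr=1e-3, lam=0.1)
```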
Alternatives and similar repositories for SoRA
Users interested in SoRA are comparing it to the repositories listed below.
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) · ☆374 · Updated Jun 1, 2023
- [NAACL 24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ☆39 · Updated Jan 9, 2025
- ☆221 · Updated Nov 25, 2025
- ☆232 · Updated Jun 24, 2024
- ☆153 · Updated Sep 9, 2024
- ☆125 · Updated Jul 6, 2024
- Awesome-Low-Rank-Adaptation · ☆127 · Updated Oct 13, 2024
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) · ☆419 · Updated Jun 30, 2025
- ☆26 · Updated Nov 23, 2023
- ☆33 · Updated Jul 8, 2024
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning · ☆179 · Updated Jan 29, 2026
- Official PyTorch implementation of QA-LoRA · ☆146 · Updated Mar 13, 2024
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆472 · Updated Apr 21, 2024
- ☆27 · Updated Jun 29, 2025
- ICLR 2025 · ☆31 · Updated May 21, 2025
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation · ☆958 · Updated Mar 24, 2026
- ☆44 · Updated Jul 22, 2024
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ · ☆101 · Updated May 30, 2023
- Supporting code for the ReCEval paper · ☆32 · Updated Sep 14, 2024
- ☆22 · Updated Jun 11, 2024
- Official implementation of the ICML'24 paper "LQER: Low-Rank Quantization Error Reconstruction for LLMs" · ☆19 · Updated Jul 11, 2024
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning · ☆40 · Updated Jul 1, 2023
- ☆64 · Updated Oct 17, 2023
- Codes for Merging Large Language Models · ☆35 · Updated Aug 7, 2024
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) · ☆107 · Updated Jul 1, 2024
- Implementation of the paper "MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models" · ☆25 · Updated May 28, 2025
- ☆19 · Updated Nov 30, 2024
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning · ☆13 · Updated Sep 2, 2024
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model · ☆89 · Updated Nov 28, 2023
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" · ☆57 · Updated Aug 25, 2024
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated Sep 24, 2024
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [EMNLP 2023 Findings] · ☆24 · Updated Nov 18, 2023
- ☆131 · Updated Aug 18, 2022
- Official implementation of "Universal Deep Image Compression via Content-Adaptive Optimization with Adapters" presented at WACV 23 (https… · ☆31 · Updated Sep 17, 2023
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al.; TACL 2024) · ☆51 · Updated Mar 17, 2024
- ☆177 · Updated Jul 22, 2024
- MiSS is a novel PEFT method that features a low-rank structure but introduces a new update mechanism distinct from LoRA, achieving an exc… · ☆35 · Updated Mar 9, 2026
- Official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) · ☆26 · Updated Sep 10, 2024
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022: Findings) (stay tuned & more will be updated) · ☆22 · Updated Oct 17, 2022