Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
☆123 · Updated Apr 28, 2024
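For context on what the repositories below adapt or extend: DoRA decomposes a pretrained weight into a magnitude vector and a direction, applies the low-rank (LoRA-style) update only to the direction, and renormalizes. The following is a minimal NumPy sketch of that recomposition under assumed shapes; the function name `dora_merge` and the initialization are illustrative, not taken from the official implementation.

```python
import numpy as np

def dora_merge(W0, A, B, m):
    """Recompose the adapted weight: W' = m * (W0 + B@A) / ||W0 + B@A||_col.

    W0: pretrained weight (d_out, d_in); B@A: low-rank directional update;
    m: trainable magnitude vector (1, d_in), one entry per column.
    """
    V = W0 + B @ A                                  # direction with low-rank update
    col_norm = np.linalg.norm(V, axis=0, keepdims=True)
    return m * (V / col_norm)                       # rescale each column to magnitude m

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2
W0 = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                            # B = 0 at init, as in LoRA
m = np.linalg.norm(W0, axis=0, keepdims=True)       # init magnitude = column norms of W0

W_adapted = dora_merge(W0, A, B, m)
print(np.allclose(W_adapted, W0))                   # True: at init the merge recovers W0
```

With `B = 0` and `m` set to the column norms of `W0`, the merged weight reduces to `W0` itself, which is why DoRA (like LoRA) starts training from the pretrained model's behavior.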
Alternatives and similar repositories for DoRA
Users interested in DoRA are comparing it with the repositories listed below.
- Implementation of DoRA · ☆310 · Updated Jun 7, 2024
- [NAACL 2024 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models · ☆39 · Updated Jan 9, 2025
- ☆235 · Updated Jun 11, 2024
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆86 · Updated Mar 5, 2024
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation · ☆963 · Updated Mar 24, 2026
- ☆233 · Updated Jun 24, 2024
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) · ☆381 · Updated Jun 1, 2023
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM · ☆14 · Updated Dec 27, 2023
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" · ☆35 · Updated Feb 19, 2025
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" · ☆1,233 · Updated Mar 10, 2024
- Fixed official code for the paper "A Closer Look at Parameter-Efficient Tuning in Diffusion Models" · ☆43 · Updated Jun 5, 2023
- Repository for "PMET: Precise Model Editing in a Transformer" · ☆56 · Updated Sep 28, 2023
- Code for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts" · ☆19 · Updated Oct 30, 2024
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆39 · Updated Sep 24, 2024
- ☆204 · Updated Dec 5, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ☆1,689 · Updated Oct 28, 2024
- [ICLR 2024] Repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" · ☆102 · Updated Apr 10, 2024
- Modified beam search with periodic restarts · ☆12 · Updated Sep 12, 2024
- PB-LLM: Partially Binarized Large Language Models · ☆155 · Updated Nov 20, 2023
- A fork of the PEFT library supporting Robust Adaptation (RoSA) · ☆15 · Updated Aug 16, 2024
- LoRA and DoRA from Scratch Implementations · ☆220 · Updated Mar 5, 2024
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models · ☆126 · Updated Mar 6, 2026
- [SIGIR '24] Official implementation of MOELoRA · ☆192 · Updated Jul 22, 2024
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆474 · Updated Apr 21, 2024
- ☆44 · Updated Jul 22, 2024
- [ICLR 2024] Official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" · ☆39 · Updated Mar 11, 2024
- ☆277 · Updated Oct 31, 2023
- ☆221 · Updated Nov 25, 2025
- Repository for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs · ☆465 · Updated Apr 18, 2024
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops · ☆30 · Updated Mar 16, 2024
- WeGeFT: Weight-Generative Fine-Tuning for Multi-Faceted Efficient Adaptation of Large Models · ☆23 · Updated Jul 10, 2025
- Spherical merging of PyTorch/HF-format language models with minimal feature loss · ☆150 · Updated Sep 10, 2023
- ☆21 · Updated Sep 5, 2023
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) · ☆45 · Updated Feb 13, 2024
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models · ☆35 · Updated Jun 12, 2024
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model · ☆89 · Updated Nov 28, 2023
- ☆179 · Updated Jul 22, 2024
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" · ☆69 · Updated Mar 7, 2024
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? · ☆118 · Updated Oct 21, 2024