Guitaricet / relora
Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates"
☆473 · Updated Apr 21, 2024
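The core idea, per the paper: train a small low-rank (LoRA-style) update on top of frozen weights, periodically merge it into the base weights, and reinitialize it, so the sum of many low-rank updates becomes a high-rank change. Below is a minimal PyTorch sketch of that merge-and-reinit cycle, not this repo's actual API; `LoRALinear`, `merge_and_reinit`, and the `rank`/`alpha` defaults are illustrative names and values.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W + s * B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # only the low-rank factors train
        self.scale = alpha / rank
        # A small-random, B zero: the model starts exactly at the base weights.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    @torch.no_grad()
    def merge_and_reinit(self) -> None:
        # Fold the accumulated rank-r update into the frozen weight, then
        # restart from fresh factors. Repeated cycles stack independent
        # low-rank components, so the total update can be high-rank.
        self.base.weight += self.scale * (self.B @ self.A)
        self.A.normal_(std=0.01)
        self.B.zero_()
```

In the paper's full recipe, each merge is also paired with a partial optimizer-state reset and a jagged (warm-restart) learning-rate schedule, which the sketch above omits.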
Alternatives and similar repositories for relora
Users interested in relora are comparing it to the repositories listed below:
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆34 · Updated Mar 2, 2024
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,672 · Updated Oct 28, 2024
- Official PyTorch implementation of QA-LoRA ☆145 · Updated Mar 13, 2024
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,668 · Updated Apr 17, 2024
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,143 · Updated Jan 11, 2024
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆396 · Updated Feb 24, 2024
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆362 · Updated Aug 7, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆209 · Updated May 20, 2024
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆39 · Updated Nov 1, 2024
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,696 · Updated Aug 14, 2024
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Updated Jul 22, 2024
- Minimalistic large language model 3D-parallelism training ☆2,544 · Updated Dec 11, 2025
- ☆235 · Updated Jun 11, 2024
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · Updated Aug 10, 2024
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆457 · Updated Sep 6, 2023
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆409 · Updated Jun 30, 2025
- ☆231 · Updated Jun 24, 2024
- ☆34 · Updated Aug 23, 2023
- Tools for merging pretrained large language models. ☆6,783 · Updated Jan 26, 2026
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆426 · Updated Dec 20, 2023
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated Jan 21, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Updated Jun 10, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆7,939 · Updated Jan 22, 2026
- Salesforce open-source LLMs with 8k sequence length. ☆723 · Updated Jan 31, 2025
- LOMO: LOw-Memory Optimization ☆987 · Updated Jul 2, 2024
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆409 · Updated May 17, 2024
- ☆273 · Updated Oct 31, 2023
- [ICLR 2024 Spotlight] OmniQuant is a simple and powerful quantization technique for LLMs. ☆887 · Updated Nov 26, 2025
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆366 · Updated Jun 1, 2023
- ☆26 · Updated Nov 23, 2023
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated Dec 28, 2024
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answer… ☆159 · Updated Feb 9, 2024
- Scaling Data-Constrained Language Models ☆341 · Updated Jun 28, 2025
- Batched LoRAs ☆349 · Updated Sep 6, 2023
- Serving multiple LoRA-finetuned LLMs as one ☆1,139 · Updated May 8, 2024
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,555 · Updated Jan 14, 2026
- Official repository for the ICML 2024 paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" ☆106 · Updated Jul 1, 2024
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Updated Mar 14, 2024
- ☆82 · Updated Apr 16, 2024