Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates"
☆474 · Apr 21, 2024 · Updated last year
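For readers comparing the alternatives below: ReLoRA trains LoRA-style low-rank factors, periodically merges them into the otherwise frozen base weights, and restarts with fresh factors, so a sequence of rank-r updates can accumulate into a high-rank change. The following is a minimal sketch of that merge-and-restart step, assuming a PyTorch LoRA-style linear layer; the class and method names are illustrative, not this repository's actual API.

```python
# Minimal sketch of ReLoRA's merge-and-restart idea (illustrative names,
# not the repository's API).
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear weight plus a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)  # frozen base weight
        nn.init.kaiming_uniform_(self.weight, a=math.sqrt(5))
        self.lora_A = nn.Parameter(torch.empty(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))  # zero init: update starts as a no-op
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight + self.scaling * self.lora_B @ self.lora_A).T

    @torch.no_grad()
    def merge_and_restart(self) -> None:
        # Fold the accumulated rank-r update into the base weights, then
        # reinitialize A and zero B so the next cycle learns a new rank-r
        # direction. Repeated restarts let the total update exceed rank r.
        self.weight += self.scaling * self.lora_B @ self.lora_A
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.lora_B.zero_()
```

In the paper, each restart is also paired with a partial optimizer-state reset and a short learning-rate re-warmup; both are omitted here for brevity.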
Alternatives and similar repositories for relora
Users interested in relora are comparing it to the repositories listed below.
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite ☆34 · Mar 2, 2024 · Updated 2 years ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (see the sketch after this list) ☆1,681 · Oct 28, 2024 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,686 · Apr 17, 2024 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Mar 13, 2024 · Updated 2 years ago
- ☆34 · Aug 23, 2023 · Updated 2 years ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,154 · Jan 11, 2024 · Updated 2 years ago
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) ☆39 · Nov 1, 2024 · Updated last year
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆361 · Aug 7, 2024 · Updated last year
- ☆233 · Jun 24, 2024 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Jul 22, 2024 · Updated last year
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆206 · Aug 10, 2024 · Updated last year
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆372 · Jun 1, 2023 · Updated 2 years ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆415 · Jun 30, 2025 · Updated 8 months ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,697 · Aug 14, 2024 · Updated last year
- ☆235 · Jun 11, 2024 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,858 · Jun 10, 2024 · Updated last year
- Tools for merging pretrained large language models ☆6,895 · Mar 15, 2026 · Updated last week
- ☆26 · Nov 23, 2023 · Updated 2 years ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆456 · Sep 6, 2023 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆426 · Dec 20, 2023 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,903 · Jan 21, 2024 · Updated 2 years ago
- ☆219 · Nov 25, 2025 · Updated 4 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Mar 5, 2024 · Updated 2 years ago
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) ☆106 · Jul 1, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch ☆8,078 · Updated this week
- LOMO: LOw-Memory Optimization ☆989 · Jul 2, 2024 · Updated last year
- ☆71 · Jul 11, 2024 · Updated last year
- Salesforce open-source LLMs with 8k sequence length ☆726 · Jan 31, 2025 · Updated last year
- ☆274 · Oct 31, 2023 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer… ☆159 · Feb 9, 2024 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and with 1.2M parameters ☆5,932 · Mar 14, 2024 · Updated 2 years ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆412 · May 17, 2024 · Updated last year
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆104 · Jun 20, 2025 · Updated 9 months ago
- Batched LoRAs ☆351 · Sep 6, 2023 · Updated 2 years ago
- ☆19 · Jan 3, 2025 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆563 · Dec 28, 2024 · Updated last year
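In contrast to the weight-side factorization used by ReLoRA and most LoRA variants above, GaLore (listed near the top) compresses on the gradient side: each weight gradient is projected onto its top singular directions, optimizer state is kept in that rank-r space, and the update is projected back to full size. The sketch below is a hypothetical illustration of that idea using momentum SGD for brevity (GaLore itself targets Adam); the function and state-key names are assumptions, not the repository's API.

```python
# Hypothetical sketch of gradient low-rank projection (the idea behind GaLore),
# not the repository's actual API. The memory saving comes from keeping
# optimizer state at rank-r size instead of full size.
import torch

def projected_momentum_step(weight, grad, state, rank=4, lr=1e-3, beta=0.9,
                            refresh_every=200):
    """One momentum-SGD step with the gradient and optimizer state compressed to rank r."""
    step = state.get("step", 0)
    if step % refresh_every == 0:
        # Refresh the projector from the top-r left singular vectors of the gradient.
        U, _, _ = torch.linalg.svd(grad, full_matrices=False)
        state["P"] = U[:, :rank]                        # (out_features, rank)
        # Momentum is simply reset when the projector changes in this sketch.
        state["m"] = torch.zeros(rank, grad.shape[1])   # state lives in rank-r space
    P = state["P"]
    state["m"] = beta * state["m"] + P.T @ grad         # compressed gradient accumulation
    weight -= lr * (P @ state["m"])                     # decompress and apply the update
    state["step"] = step + 1

# Example usage on a random weight matrix:
W = torch.randn(64, 32)
g = torch.randn(64, 32)
opt_state = {}
projected_momentum_step(W, g, opt_state)
```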