GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
☆ 1,678 · Updated Oct 28, 2024
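For orientation, GaLore is used as a drop-in optimizer that projects each matrix gradient into a low-rank subspace before the Adam-style update, which is what saves optimizer-state memory. Below is a minimal sketch, assuming the `galore-torch` package and its `GaLoreAdamW` optimizer with per-group `rank`, `update_proj_gap`, `scale`, and `proj_type` settings (names taken from the upstream README; treat the exact API as an assumption rather than a guarantee):

```python
import torch
# Assumes `pip install galore-torch`; GaLoreAdamW is the optimizer exposed by the GaLore repo.
from galore_torch import GaLoreAdamW

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
)

# Low-rank gradient projection is applied only to 2-D (matrix) weights;
# biases and other 1-D parameters go in a plain parameter group.
galore_params = [p for p in model.parameters() if p.dim() == 2]
regular_params = [p for p in model.parameters() if p.dim() != 2]

param_groups = [
    {"params": regular_params},
    {"params": galore_params,
     "rank": 128,              # rank of the gradient projection subspace
     "update_proj_gap": 200,   # steps between projector (SVD) refreshes
     "scale": 0.25,            # scaling factor applied to the projected update
     "proj_type": "std"},      # projection variant (assumed default from the README)
]
optimizer = GaLoreAdamW(param_groups, lr=1e-2)

# Standard training step: backward as usual, the optimizer handles the projection.
x = torch.randn(8, 512)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```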
Alternatives and similar repositories for GaLore
Users interested in GaLore are comparing it to the libraries listed below.
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆ 203 · Updated Jul 17, 2024
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆ 473 · Updated Apr 21, 2024
- Tools for merging pretrained large language models. ☆ 6,826 · Updated this week
- Training LLMs with QLoRA + FSDP ☆ 1,537 · Updated Nov 9, 2024
- Efficient Triton Kernels for LLM Training ☆ 6,162 · Updated Feb 27, 2026
- Official implementation of Half-Quadratic Quantization (HQQ) ☆ 915 · Updated Feb 26, 2026
- Minimalistic large language model 3D-parallelism training ☆ 2,579 · Updated Feb 19, 2026
- Accessible large language models via k-bit quantization for PyTorch. ☆ 7,997 · Updated Feb 26, 2026
- Robust recipes to align language models with human and AI preferences ☆ 5,510 · Updated Sep 8, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆ 1,315 · Updated Mar 6, 2025
- Schedule-Free Optimization in PyTorch ☆ 2,257 · Updated May 21, 2025
- Fast and memory-efficient exact attention ☆ 22,460 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆ 3,443 · Updated Jul 17, 2025
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆ 453 · Updated May 13, 2025
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆ 2,693 · Updated Aug 14, 2024
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆ 1,663 · Updated Mar 8, 2024
- Reaching LLaMA2 Performance with 0.1M Dollars ☆ 988 · Updated Jul 23, 2024
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆ 1,403 · Updated Nov 29, 2024
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆ 1,560 · Updated Jan 14, 2026
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes (https://arxiv.org/abs/2305.17333) ☆ 1,149 · Updated Jan 11, 2024
- PyTorch native post-training library ☆ 5,691 · Updated Feb 27, 2026
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆ 1,899 · Updated Jan 21, 2024
- Official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024) ☆ 106 · Updated Jul 1, 2024
- LOMO: LOw-Memory Optimization ☆ 988 · Updated Jul 2, 2024
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆ 983 · Updated Jan 30, 2024
- YaRN: Efficient Context Window Extension of Large Language Models ☆ 1,673 · Updated Apr 17, 2024
- Mamba SSM architecture ☆ 17,257 · Updated Feb 18, 2026
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆ 7,196 · Updated Jul 11, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 10,843 · Updated Jun 10, 2024
- Go ahead and axolotl questions ☆ 11,395 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆ 4,474 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆ 2,710 · Updated Jun 25, 2024
- Microsoft Automatic Mixed Precision Library ☆ 636 · Updated Dec 1, 2025
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆ 409 · Updated Jun 30, 2025
- A PyTorch native platform for training generative AI models ☆ 5,098 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" ☆ 2,261 · Updated Mar 27, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆ 1,144 · Updated May 8, 2024
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆ 8,896 · Updated May 3, 2024
- AllenAI's post-training codebase ☆ 3,605 · Updated this week