Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793
☆453 · May 13, 2025 · Updated 9 months ago
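Adam-mini is pitched as a drop-in replacement for AdamW in PyTorch that keeps one learning rate (one second-moment value) per parameter block rather than per coordinate, which is how it cuts optimizer memory. A minimal sketch of the training-loop swap it targets is below; the `torch.optim.AdamW` usage is standard PyTorch, while the commented-out `Adam_mini` import and constructor arguments are assumptions about this repository's interface, not verified usage.

```python
# Minimal sketch: where an Adam-family optimizer plugs into a PyTorch training loop.
# The AdamW call below is standard; the commented-out Adam_mini lines are an
# assumption about this repository's interface, not verified usage.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))

optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.1
)
# Assumed drop-in swap (names taken to mirror the repo's README; unverified):
# from adam_mini import Adam_mini
# optimizer = Adam_mini(named_parameters=model.named_parameters(),
#                       lr=1e-3, weight_decay=0.1)

for step in range(10):
    x = torch.randn(8, 512)
    loss = (model(x) - x).pow(2).mean()  # toy reconstruction loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```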
Alternatives and similar repositories for Adam-mini
Users that are interested in Adam-mini are comparing it to the libraries listed below
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective ☆63 · Mar 11, 2025 · Updated 11 months ago
- Muon is an optimizer for hidden layers in neural networks ☆2,329 · Jan 19, 2026 · Updated last month
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,678 · Oct 28, 2024 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆203 · Jul 17, 2024 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,257 · May 21, 2025 · Updated 9 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Oct 30, 2024 · Updated last year
- ☆252 · Dec 2, 2024 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 10 months ago
- ☆585 · Sep 23, 2025 · Updated 5 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆409 · Jun 30, 2025 · Updated 8 months ago
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆981 · Jan 30, 2024 · Updated 2 years ago
- ☆138 · Aug 19, 2024 · Updated last year
- ☆67 · Mar 21, 2025 · Updated 11 months ago
- GoldFinch and other hybrid transformer components ☆45 · Jul 20, 2024 · Updated last year
- ☆91 · Aug 18, 2024 · Updated last year
- A PyTorch native platform for training generative AI models ☆5,098 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆913 · Dec 18, 2025 · Updated 2 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆119 · Oct 21, 2024 · Updated last year
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) ☆15 · Aug 16, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆266 · Oct 3, 2025 · Updated 5 months ago
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆588 · Feb 11, 2025 · Updated last year
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆260 · Aug 9, 2025 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆198 · Dec 2, 2024 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆237 · Jun 15, 2025 · Updated 8 months ago
- Muon is Scalable for LLM Training ☆1,440 · Aug 3, 2025 · Updated 7 months ago
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆361 · Aug 7, 2024 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆404 · Sep 26, 2025 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆2,579 · Feb 19, 2026 · Updated last week
- The Prodigy optimizer and its variants for training neural networks. ☆450 · Jan 16, 2025 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆125 · Dec 29, 2025 · Updated 2 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Feb 23, 2025 · Updated last year
- Efficient optimizers ☆285 · Dec 20, 2025 · Updated 2 months ago
- Efficient Triton Kernels for LLM Training ☆6,162 · Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Sep 12, 2024 · Updated last year
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Nov 26, 2025 · Updated 3 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆144 · Apr 8, 2025 · Updated 10 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆156 · Apr 7, 2025 · Updated 10 months ago
- Official PyTorch Implementation for Paper "No More Adam: Learning Rate Scaling at Initialization is All You Need" ☆56 · Jan 27, 2025 · Updated last year