Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793
☆453 · May 13, 2025 · Updated 10 months ago
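The paper's title refers to Adam-mini's core idea: instead of Adam's per-parameter second-moment estimate, keep a single second-moment scalar per parameter block (the running mean of the squared gradients over that block), so far fewer learning rates are stored. Below is a minimal, illustrative sketch of that idea, not the official implementation; the block partition here is a simplifying assumption (one block per parameter tensor), and `adam_mini_step` is a hypothetical helper name.

```python
import numpy as np

def adam_mini_step(params, grads, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One optimizer step keeping a single second-moment scalar per block.

    Assumes each entry of `params` is one block (a simplification of the
    paper's Hessian-structure-based partition).
    """
    state["t"] += 1
    t = state["t"]
    for name, g in grads.items():
        # First moment: standard Adam momentum, per parameter.
        state["m"][name] = beta1 * state["m"][name] + (1 - beta1) * g
        # Key idea: ONE running scalar per block -- the mean of g^2 over the
        # whole block -- instead of Adam's per-parameter v.
        state["v"][name] = beta2 * state["v"][name] + (1 - beta2) * float(np.mean(g * g))
        # Bias-corrected update, shared denominator for the entire block.
        m_hat = state["m"][name] / (1 - beta1 ** t)
        v_hat = state["v"][name] / (1 - beta2 ** t)
        params[name] = params[name] - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params

# Toy usage: minimize the quadratic 0.5 * ||w||^2, whose gradient is w.
params = {"w": np.array([1.0, -2.0, 3.0])}
state = {"t": 0, "m": {"w": np.zeros(3)}, "v": {"w": 0.0}}
for _ in range(100):
    grads = {"w": params["w"].copy()}
    params = adam_mini_step(params, grads, state, lr=0.1)
```

The memory saving comes from `state["v"]` holding one float per block rather than one per parameter, which is where most of Adam's optimizer-state footprint lives.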
Alternatives and similar repositories for Adam-mini
Users interested in Adam-mini are comparing it to the libraries listed below.
- Code for the paper: Why Transformers Need Adam: A Hessian Perspective ☆63 · Mar 11, 2025 · Updated last year
- Muon is an optimizer for hidden layers in neural networks ☆2,398 · Jan 19, 2026 · Updated 2 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆204 · Jul 17, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,681 · Oct 28, 2024 · Updated last year
- ☆256 · Dec 2, 2024 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,265 · May 21, 2025 · Updated 10 months ago
- ☆18 · Oct 30, 2025 · Updated 4 months ago
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆986 · Jan 30, 2024 · Updated 2 years ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Oct 30, 2024 · Updated last year
- [NeurIPS 2024] BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models ☆285 · Mar 15, 2025 · Updated last year
- ☆591 · Sep 23, 2025 · Updated 6 months ago
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) ☆15 · Aug 16, 2024 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Jul 20, 2024 · Updated last year
- ☆91 · Aug 18, 2024 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 11 months ago
- Muon is Scalable for LLM Training ☆1,446 · Aug 3, 2025 · Updated 7 months ago
- A PyTorch native platform for training generative AI models ☆5,162 · Updated this week
- ☆138 · Aug 19, 2024 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆201 · Dec 16, 2023 · Updated 2 years ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆242 · Jun 15, 2025 · Updated 9 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆415 · Jun 30, 2025 · Updated 8 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆262 · Aug 9, 2025 · Updated 7 months ago
- ☆68 · Mar 21, 2025 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆270 · Oct 3, 2025 · Updated 5 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Feb 26, 2026 · Updated 3 weeks ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆144 · Apr 8, 2025 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training ☆2,617 · Feb 19, 2026 · Updated last month
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Oct 30, 2025 · Updated 4 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Mar 31, 2024 · Updated last year
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆118 · Oct 21, 2024 · Updated last year
- Efficient optimizers ☆294 · Mar 16, 2026 · Updated last week
- APOLLO: SGD-like Memory, AdamW-level Performance; MLSys'25 Outstanding Paper Honorable Mention ☆271 · Nov 29, 2025 · Updated 3 months ago
- [ICLR 2026] When it comes to optimizers, it's always better to be safe than sorry ☆408 · Sep 26, 2025 · Updated 5 months ago
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,154 · Jan 11, 2024 · Updated 2 years ago
- DeMo: Decoupled Momentum Optimization ☆198 · Dec 2, 2024 · Updated last year
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆589 · Feb 11, 2025 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆989 · Jul 23, 2024 · Updated last year
- The Prodigy optimizer and its variants for training neural networks ☆453 · Jan 16, 2025 · Updated last year