SOAP (☆252, updated Dec 2, 2024)
Alternatives and similar repositories for SOAP
Users interested in SOAP are comparing it to the libraries listed below.
- Efficient optimizers (☆285, updated Dec 20, 2025)
- ☆70, updated Nov 15, 2024
- For optimization algorithm research and development (☆557, updated this week)
- WIP (☆94, updated Aug 13, 2024)
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioners, low-rank approximation precondition…) (☆190, updated Jan 11, 2026)
- Unofficial JAX implementation of the SOAP optimizer (https://arxiv.org/abs/2409.11321) (☆24, updated Jan 9, 2026)
- An implementation of the PSGD Kron second-order optimizer for PyTorch (☆98, updated Jul 24, 2025)
- ☆10, updated Jun 27, 2024
- Schedule-Free Optimization in PyTorch (☆2,257, updated May 21, 2025)
- ☆29, updated Mar 14, 2025
- Focused on fast experimentation and simplicity (☆80, updated Dec 24, 2024)
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" (☆92, updated Oct 30, 2024)
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code (☆10, updated Aug 29, 2023)
- ☆15, updated Mar 2, 2025
- ☆55, updated Feb 24, 2026
- A library for unit scaling in PyTorch (☆133, updated Jul 11, 2025)
- Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop) (☆17, updated Mar 6, 2025)
- Muon is an optimizer for hidden layers in neural networks (☆2,350, updated Jan 19, 2026)
- ☆63, updated Oct 3, 2024
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) (☆453, updated May 13, 2025)
- ☆22, updated Nov 9, 2024
- 🧱 Modula software package (☆323, updated Aug 18, 2025)
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers (☆18, updated Jul 24, 2025)
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" (☆435, updated Dec 12, 2024)
- ☆53, updated May 20, 2024
- Supporting code for the blog post on modular manifolds (☆117, updated Sep 26, 2025)
- Experiments on the impact of depth in transformers and SSMs (☆40, updated Oct 23, 2025)
- DeMo: Decoupled Momentum Optimization (☆198, updated Dec 2, 2024)
- ☆67, updated Mar 21, 2025
- Understand and test language model architectures on synthetic tasks (☆257, updated Feb 24, 2026)
- ☆19, updated Dec 4, 2025
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆188, updated Jan 19, 2026)
- Supporting PyTorch FSDP for optimizers (☆84, updated Dec 8, 2024)
- ☆34, updated Sep 10, 2024
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… (☆66, updated Nov 18, 2025)
- Code for "What really matters in matrix-whitening optimizers?" (☆22, updated Oct 31, 2025)
- Approximating the joint distribution of language models via MCTS (☆22, updated Nov 3, 2024)
- Code for the paper "Function-Space Learning Rates" (☆25, updated Jun 3, 2025)
- 📄 Small Batch Size Training for Language Models (☆80, updated Oct 4, 2025)
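Many of the optimizers listed above (SOAP, PSGD Kron, Muon, Adam-mini) are variations on one update pattern: transform the raw gradient before applying the step. A minimal, dependency-free sketch of that pattern, using a plain identity transform (i.e. SGD) as an illustrative stand-in for the preconditioners these libraries actually implement:

```python
# Shared shape of the update rule: params <- params - lr * precondition(grad).
# Plain SGD uses the identity transform; SOAP/PSGD-style methods replace
# `precondition` with an (approximate) gradient-whitening step. This is an
# illustrative sketch, not the API of any repository listed above.

def precondition(grad):
    # Identity preconditioner (SGD). Second-order methods would multiply
    # the gradient by an approximate inverse-curvature factor here.
    return grad

def step(params, grads, lr=0.1):
    # One optimizer step over a list of scalar parameters.
    return [p - lr * precondition(g) for p, g in zip(params, grads)]

# One step on f(p) = p^2 (gradient 2p): the parameter moves toward 0.
params = [1.0]
grads = [2.0 * params[0]]
params = step(params, grads)  # -> [0.8]
```

Each library then differs mainly in how `precondition` is built and updated (Kronecker factors, eigenbasis projections, momentum orthogonalization, and so on).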