☆267 · Dec 2, 2024 · Updated last year
Alternatives and similar repositories for SOAP
Users interested in SOAP are comparing it to the libraries listed below.
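For context, here is a minimal sketch of what dropping SOAP into a standard PyTorch training loop looks like. The `SOAP` class and its constructor arguments (`lr`, `betas`, `weight_decay`, `precondition_frequency`) are assumed to follow the repository's README; treat the exact names and defaults as assumptions rather than a spec.

```python
# Minimal sketch: SOAP as a drop-in replacement for AdamW in a PyTorch loop.
# Constructor arguments are assumed from the SOAP repository's README.
import torch
from soap import SOAP  # assumed import path from the SOAP repository

model = torch.nn.Linear(128, 10)
optimizer = SOAP(
    model.parameters(),
    lr=3e-3,
    betas=(0.95, 0.95),
    weight_decay=0.01,
    precondition_frequency=10,  # how often the preconditioner's eigenbasis is refreshed
)

for step in range(100):
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```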
- Efficient optimizers ☆314 · Apr 27, 2026 · Updated last week
- ☆71 · Nov 15, 2024 · Updated last year
- For optimization algorithm research and development. ☆565 · Apr 10, 2026 · Updated 3 weeks ago
- ☆32 · Mar 14, 2025 · Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆195 · Apr 3, 2026 · Updated last month
- WIP ☆95 · Aug 13, 2024 · Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ☆100 · Jul 24, 2025 · Updated 9 months ago
- ☆10 · Jun 27, 2024 · Updated last year
- Schedule-Free Optimization in PyTorch (see the usage sketch after this list) ☆2,277 · May 21, 2025 · Updated 11 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆457 · May 13, 2025 · Updated 11 months ago
- Unofficial JAX implementation of the SOAP optimizer (https://arxiv.org/abs/2409.11321) ☆25 · Jan 9, 2026 · Updated 3 months ago
- Combining SOAP and Muon ☆20 · Feb 11, 2025 · Updated last year
- ☆62 · Apr 8, 2026 · Updated 3 weeks ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆92 · Oct 30, 2024 · Updated last year
- Grams: Gradient Descent with Adaptive Momentum Scaling (ICLR 2025 Workshop) ☆17 · Mar 6, 2025 · Updated last year
- Muon is an optimizer for the hidden layers of neural networks (see the usage sketch after this list) ☆2,544 · Jan 19, 2026 · Updated 3 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆19 · Jul 24, 2025 · Updated 9 months ago
- ☆15 · Mar 2, 2025 · Updated last year
- Source code for the paper "Positional Attention: Expressivity and Learnability of Algorithmic Computation" ☆14 · May 26, 2025 · Updated 11 months ago
- Simple (fast) transformer inference in PyTorch with torch.compile + lit-llama code ☆10 · Aug 29, 2023 · Updated 2 years ago
- Code for the paper "Why Transformers Need Adam: A Hessian Perspective" ☆65 · Mar 11, 2025 · Updated last year
- ☆19 · Dec 4, 2025 · Updated 5 months ago
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆437 · Dec 12, 2024 · Updated last year
- 🧱 Modula software package ☆327 · Aug 18, 2025 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆201 · Dec 2, 2024 · Updated last year
- ☆54 · May 20, 2024 · Updated last year
- ☆63 · Oct 3, 2024 · Updated last year
- ☆22 · Nov 9, 2024 · Updated last year
- A library for unit scaling in PyTorch ☆133 · Jul 11, 2025 · Updated 9 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Nov 18, 2025 · Updated 5 months ago
- ☆13 · Apr 27, 2026 · Updated last week
- ☆183 · Updated this week
- Utilities for PyTorch distributed ☆25 · Feb 27, 2025 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Nov 28, 2025 · Updated 5 months ago
- Code for "What really matters in matrix-whitening optimizers?" ☆23 · Oct 31, 2025 · Updated 6 months ago
- [NeurIPS 2025] Official PyTorch implementation of "The Curse of Depth in Large Language Models" by Wenfang Sun, Xinyuan Song, Pengxiang L… ☆70 · Mar 3, 2026 · Updated 2 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆133 · Jun 24, 2025 · Updated 10 months ago
- ☆125 · Jun 11, 2025 · Updated 10 months ago
- Experiments on the impact of depth in transformers and SSMs. ☆40 · Oct 23, 2025 · Updated 6 months ago
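As referenced above, a minimal sketch of the Schedule-Free optimizer (the `schedulefree` package). The `optimizer.train()` / `optimizer.eval()` switch is the documented quirk of this optimizer: it interpolates between averaged and current iterates, so it must be told which set of weights the model should use. The learning-rate value here is illustrative only.

```python
# Minimal sketch: schedule-free AdamW, which needs no learning-rate schedule.
import torch
import schedulefree

model = torch.nn.Linear(128, 10)
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=2.5e-3)

optimizer.train()  # put the optimizer (and effectively the weights) in training mode
for step in range(100):
    x = torch.randn(32, 128)
    y = torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

optimizer.eval()  # switch to the averaged weights before evaluation or checkpointing
```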
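And a hedged sketch of the parameter split that Muon requires: it preconditions only the 2-D hidden-layer weight matrices, while embeddings, output head, biases, and gains stay on a standard optimizer such as AdamW. The `Muon` constructor signature has changed across releases (recent versions also assume a distributed setup), so the import path and the `lr`/`momentum` argument names below are assumptions, not the definitive API.

```python
# Hedged sketch: Muon on >=2-D weight matrices, AdamW on everything else.
# Constructor names are assumptions; consult the Muon repository for the real API.
import torch
from muon import Muon  # assumed import path

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
matrix_params = [p for p in model.parameters() if p.ndim >= 2]
other_params = [p for p in model.parameters() if p.ndim < 2]

optimizers = [
    Muon(matrix_params, lr=0.02, momentum=0.95),  # orthogonalized momentum updates
    torch.optim.AdamW(other_params, lr=3e-4),     # scalars/vectors stay on AdamW
]

for step in range(100):
    loss = model(torch.randn(32, 128)).square().mean()  # dummy loss for illustration
    loss.backward()
    for opt in optimizers:
        opt.step()
        opt.zero_grad()
```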