HomebrewML / HeavyBall
Efficient optimizers
☆261 · Updated last month
Alternatives and similar repositories for HeavyBall
Users interested in HeavyBall are comparing it to the libraries listed below (a minimal optimizer-usage sketch follows the list).
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 9 months ago
- ☆208 · Updated 9 months ago
- 🧱 Modula software package ☆233 · Updated 3 weeks ago
- Accelerated First Order Parallel Associative Scan ☆187 · Updated last year
- ☆279 · Updated last year
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆291 · Updated last month
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆96 · Updated last month
- For optimization algorithm research and development. ☆534 · Updated last week
- WIP ☆94 · Updated last year
- ☆87 · Updated last year
- A repository for log-time feedforward networks ☆223 · Updated last year
- ☆307 · Updated last year
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆408 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆157 · Updated 2 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆290 · Updated 3 months ago
- Dion optimizer algorithm ☆338 · Updated last week
- Fast, Modern, and Low Precision PyTorch Optimizers ☆109 · Updated last week
- Scalable and Performant Data Loading ☆296 · Updated last week
- Getting crystal-like representations with harmonic loss ☆194 · Updated 5 months ago
- ☆65 · Updated 9 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 9 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Load compute kernels from the Hub ☆271 · Updated this week
- Simple implementation of muP, based on Spectral Condition for Feature Learning. The implementation is SGD-only; don't use it for Adam. ☆85 · Updated last year
- seqax = sequence modeling + JAX ☆166 · Updated last month
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆61 · Updated 6 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 8 months ago
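Many of the optimizer libraries above are built as drop-in replacements for `torch.optim` classes, so trying one typically only means swapping the constructor while the training loop stays the same. The sketch below shows that pattern with `torch.optim.AdamW` as a stand-in; the commented-out `heavyball.ForeachSOAP` line is an assumption about HeavyBall's class name, not a confirmed API.

```python
import torch
import torch.nn as nn

# Toy model and data, just to exercise the optimizer interface.
model = nn.Linear(16, 1)
x, y = torch.randn(64, 16), torch.randn(64, 1)

# Stand-in optimizer; most libraries listed above follow the same
# torch.optim.Optimizer constructor-and-step contract, e.g. (assumed
# class name, check the repo): heavyball.ForeachSOAP(model.parameters(), lr=1e-3)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)

for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```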