nanowell / AdEMAMix-Optimizer-Pytorch
The AdEMAMix Optimizer: Better, Faster, Older.
☆178 · Updated 4 months ago
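The repo implements AdEMAMix, which augments Adam with a second, slow-decaying EMA of the gradient ("Older") that is mixed into the numerator of the update. A minimal scalar sketch of that update rule, assuming paper-style hyperparameters (β1=0.9, β2=0.999, β3=0.9999, α=5) and omitting weight decay; the function and state names here are illustrative, not this repository's API:

```python
import math

def ademamix_step(theta, g, state, lr=0.01, b1=0.9, b2=0.999, b3=0.9999,
                  alpha=5.0, eps=1e-8):
    """One AdEMAMix update for a single scalar parameter (sketch).

    Two EMAs of the gradient are kept: a fast one (b1) and a slow one (b3).
    The slow EMA is mixed in with weight alpha; only the fast EMA and the
    second moment are bias-corrected.
    """
    state["t"] += 1
    t = state["t"]
    state["m1"] = b1 * state["m1"] + (1 - b1) * g        # fast gradient EMA
    state["m2"] = b3 * state["m2"] + (1 - b3) * g        # slow gradient EMA
    state["nu"] = b2 * state["nu"] + (1 - b2) * g * g    # second moment
    m1_hat = state["m1"] / (1 - b1 ** t)
    nu_hat = state["nu"] / (1 - b2 ** t)
    return theta - lr * (m1_hat + alpha * state["m2"]) / (math.sqrt(nu_hat) + eps)

# Toy usage: minimize f(x) = (x - 3)^2 starting from x = 0.
x = 0.0
st = {"t": 0, "m1": 0.0, "m2": 0.0, "nu": 0.0}
for _ in range(2000):
    x = ademamix_step(x, 2 * (x - 3), st)  # drive x toward the minimum at 3
```

The slow EMA lets the optimizer keep exploiting very old gradients without making the fast momentum sluggish, which is the paper's headline idea.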
Alternatives and similar repositories for AdEMAMix-Optimizer-Pytorch:
Users interested in AdEMAMix-Optimizer-Pytorch are comparing it to the libraries listed below.
- ☆146 · Updated last month
- Efficient optimizers ☆144 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆75 · Updated last month
- 94% on CIFAR-10 in 2.6 seconds 💨 96% in 27 seconds ☆195 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NVIDIA AI ☆270 · Updated 2 months ago
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆210 · Updated last week
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆408 · Updated last month
- A repository for log-time feedforward networks ☆217 · Updated 9 months ago
- For optimization algorithm research and development ☆484 · Updated this week
- Implementation of Diffusion Transformer (DiT) in JAX ☆261 · Updated 7 months ago
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch ☆98 · Updated last month
- ☆296 · Updated 6 months ago
- ☆240 · Updated 4 months ago
- Annotated version of the Mamba paper ☆469 · Updated 10 months ago
- Scalable and performant data loading ☆207 · Updated this week
- ☆149 · Updated 5 months ago
- Normalized Transformer (nGPT) ☆145 · Updated last month
- Accelerated first-order parallel associative scan ☆169 · Updated 4 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆95 · Updated 3 weeks ago
- Miscellaneous utility functions / decorators / modules for PyTorch and Accelerate to help speed up implementation of new… ☆119 · Updated 5 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆157 · Updated this week
- ☆53 · Updated 11 months ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence) ☆104 · Updated 3 months ago
- ☆78 · Updated 9 months ago
- 🧱 Modula software package ☆132 · Updated this week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆292 · Updated 2 weeks ago
- ☆52 · Updated 2 months ago
- DeMo: Decoupled Momentum Optimization ☆170 · Updated last month
- Library for Jacobian descent with PyTorch, enabling optimization of neural networks with multiple losses (e.g. multi-task learning) ☆184 · Updated last week
- Fast, modern, memory-efficient, and low-precision PyTorch optimizers ☆77 · Updated 6 months ago