ironjr / grokfast
Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆555 · Updated 9 months ago
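The title refers to amplifying the slow-varying component of the gradients during training. As a rough illustration only (not the repository's exact API), the idea can be sketched as keeping an exponential moving average of past gradients and adding a scaled copy of it back to each parameter's gradient before the optimizer step; the function and parameter names (`amplify_slow_gradients`, `alpha`, `lamb`) below are illustrative assumptions.

```python
# Hypothetical sketch of EMA-based slow-gradient amplification; see the paper/repo for the real method.
import torch

def amplify_slow_gradients(model, ema_grads, alpha=0.98, lamb=2.0):
    """Call between loss.backward() and optimizer.step().

    Maintains an EMA of each parameter's gradients and adds a scaled copy of it
    to the current gradient, boosting the slow (low-frequency) component.
    """
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        if name not in ema_grads:
            ema_grads[name] = p.grad.detach().clone()
        else:
            ema_grads[name].mul_(alpha).add_(p.grad.detach(), alpha=1 - alpha)
        p.grad.add_(ema_grads[name], alpha=lamb)  # amplify the slow component
    return ema_grads

# Usage sketch:
# ema = {}
# loss.backward()
# ema = amplify_slow_gradients(model, ema, alpha=0.98, lamb=2.0)
# optimizer.step()
```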
Alternatives and similar repositories for grokfast:
Users interested in grokfast are comparing it to the libraries listed below.
- Annotated version of the Mamba paper ☆481 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆279 · Updated last month
- DeMo: Decoupled Momentum Optimization ☆186 · Updated 4 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆862 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆168 · Updated 5 months ago
- Efficient optimizers ☆189 · Updated this week
- Muon optimizer: +>30% sample efficiency with <3% wallclock overhead ☆575 · Updated 3 weeks ago
- For optimization algorithm research and development. ☆507 · Updated this week
- Getting crystal-like representations with harmonic loss ☆182 · Updated 2 weeks ago
- Simple, minimal implementation of the Mamba SSM in one PyTorch file. Using logcumsumexp (Heisen sequence). ☆112 · Updated 6 months ago
- ☆93 · Updated 3 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆180 · Updated 7 months ago
- 🧱 Modula software package ☆188 · Updated 3 weeks ago
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆548 · Updated 2 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆270 · Updated 10 months ago
- ☆173 · Updated 4 months ago
- Implementation of https://srush.github.io/annotated-s4 ☆489 · Updated 2 years ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆116 · Updated 10 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆405 · Updated this week
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆424 · Updated 4 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆304 · Updated 5 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆510 · Updated 5 months ago
- ☆108 · Updated 3 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆566 · Updated this week
- ☆215 · Updated 9 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆636 · Updated 2 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆342 · Updated 8 months ago
- Helpful tools and examples for working with flex-attention ☆720 · Updated last week
- ☆302 · Updated 9 months ago
- UNet diffusion model in pure CUDA ☆601 · Updated 9 months ago