ironjr / grokfast
Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆554 · Updated 11 months ago
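The paper's title points to amplifying the slow (low-frequency) component of the gradients. Below is a minimal sketch of that idea, assuming an exponential-moving-average low-pass filter whose output is added back to the gradients between `backward()` and the optimizer step; the function name and the `alpha`/`lamb` parameters are illustrative assumptions, not necessarily the repository's API.

```python
import torch

def ema_gradfilter(model, ema_grads, alpha=0.98, lamb=2.0):
    """Amplify the slow component of gradients via an EMA low-pass filter.

    ema_grads: dict mapping parameter name -> running EMA tensor (pass None on the first call).
    alpha: EMA decay; lamb: amplification strength. Both values are illustrative.
    """
    if ema_grads is None:
        ema_grads = {}
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        g = p.grad.detach()
        if name not in ema_grads:
            ema_grads[name] = g.clone()
        else:
            # Low-pass filter: keep a running EMA of the raw gradient.
            ema_grads[name].mul_(alpha).add_(g, alpha=1.0 - alpha)
        # Add the filtered (slow) gradient back, scaled by lamb.
        p.grad.add_(ema_grads[name], alpha=lamb)
    return ema_grads

# Usage inside a training loop (sketch):
# ema = None
# loss.backward()
# ema = ema_gradfilter(model, ema)   # call between backward() and optimizer.step()
# optimizer.step(); optimizer.zero_grad()
```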
Alternatives and similar repositories for grokfast
Users interested in grokfast are comparing it to the libraries listed below.
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 8 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆282 · Updated 2 months ago
- Efficient optimizers ☆208 · Updated this week
- Annotated version of the Mamba paper ☆482 · Updated last year
- ☆95 · Updated 4 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆876 · Updated last month
- Muon: An optimizer for hidden layers in neural networks ☆678 · Updated last week
- [ICLR 2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters ☆559 · Updated 3 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆276 · Updated 11 months ago
- A repository for log-time feedforward networks ☆222 · Updated last year
- ☆303 · Updated 11 months ago
- For optimization algorithm research and development. ☆518 · Updated this week
- ☆185 · Updated 6 months ago
- Pretraining code for a large-scale depth-recurrent language model ☆770 · Updated last week
- The repository for the code of the UltraFastBERT paper ☆514 · Updated last year
- Code repository for Black Mamba ☆246 · Updated last year
- ☆111 · Updated 5 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆117 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆550 · Updated 5 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆239 · Updated 3 months ago
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆423 · Updated 5 months ago
- Official JAX implementation of xLSTM including fast and efficient training and inference code. 7B model available at https://huggingface.… ☆91 · Updated 4 months ago
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆118 · Updated 7 months ago
- ☆556 · Updated last month
- Open weights language model from Google DeepMind, based on Griffin. ☆639 · Updated last week
- The boundary of neural network trainability is fractal ☆204 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆417 · Updated 3 weeks ago
- ☆267 · Updated 10 months ago
- Getting crystal-like representations with harmonic loss ☆187 · Updated 2 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆310 · Updated 7 months ago