ironjr / grokfast
Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆543 · Updated 8 months ago
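The paper's core idea is to amplify the slow-varying component of the gradients during training, for example by adding a scaled moving average of past gradients onto the current gradient before each optimizer step. Below is a minimal PyTorch sketch of that idea; the helper name `ema_gradfilter` and the `alpha`/`lamb` parameters are illustrative assumptions, not the repository's exact API.

```python
# Minimal sketch of the "amplify slow gradients" idea (not the repo's exact API):
# keep an exponential moving average (EMA) of each parameter's gradient and add
# a scaled copy of it back onto the current gradient before the optimizer step.
import torch

def ema_gradfilter(model, ema_grads=None, alpha=0.98, lamb=2.0):
    """Hypothetical helper: alpha is the EMA decay, lamb scales the slow component."""
    if ema_grads is None:
        # Initialize the EMA with the first observed gradients.
        ema_grads = {n: p.grad.detach().clone()
                     for n, p in model.named_parameters() if p.grad is not None}
    for n, p in model.named_parameters():
        if p.grad is None:
            continue
        # Update the slow (low-frequency) gradient estimate.
        ema_grads[n] = alpha * ema_grads[n] + (1 - alpha) * p.grad.detach()
        # Amplify the current gradient with the slow component.
        p.grad = p.grad + lamb * ema_grads[n]
    return ema_grads

# Usage inside a standard training loop (model, optimizer, loss_fn, loader are placeholders):
# ema_grads = None
# for x, y in loader:
#     optimizer.zero_grad()
#     loss_fn(model(x), y).backward()
#     ema_grads = ema_gradfilter(model, ema_grads)  # filter gradients in place
#     optimizer.step()
```

The filter sits between `backward()` and `optimizer.step()`, so it composes with any standard optimizer.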
Alternatives and similar repositories for grokfast:
Users interested in grokfast are comparing it to the repositories listed below.
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆210 · Updated this week
- The AdEMAMix Optimizer: Better, Faster, Older. ☆178 · Updated 5 months ago
- Annotated version of the Mamba paper ☆474 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆273 · Updated 3 months ago
- Efficient optimizers ☆177 · Updated this week
- A repository for log-time feedforward networks ☆220 · Updated 10 months ago
- ☆161 · Updated 3 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆849 · Updated last week
- For optimization algorithm research and development. ☆497 · Updated this week
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead ☆434 · Updated this week
- Official JAX implementation of xLSTM including fast and efficient training and inference code. 7B model available at https://huggingface.… ☆84 · Updated last month
- Implementation of Diffusion Transformer (DiT) in JAX ☆265 · Updated 8 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆622 · Updated last week
- Sparsify transformers with SAEs and transcoders ☆476 · Updated this week
- ☆212 · Updated 7 months ago
- Normalized Transformer (nGPT) ☆156 · Updated 3 months ago
- Supporting PyTorch FSDP for optimizers ☆77 · Updated 2 months ago
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆246 · Updated 4 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆294 · Updated 4 months ago
- ☆100 · Updated 2 months ago
- Schedule-Free Optimization in PyTorch ☆2,105 · Updated this week
- Draw more samples ☆186 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆181 · Updated 3 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆548 · Updated 2 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆114 · Updated 9 months ago
- Code repository for Black Mamba ☆239 · Updated last year
- The boundary of neural network trainability is fractal ☆195 · Updated last year
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆417 · Updated 2 months ago