ironjr / grokfast
Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆563 · Updated last year
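For context on what the repository implements: the paper's core idea is to low-pass filter parameter gradients with an exponential moving average and amplify that slow component before each optimizer step, which is claimed to accelerate grokking. Below is a minimal PyTorch sketch of that idea, assuming a standard training loop; the function and hyperparameter names (`ema_grad_filter`, `alpha`, `lamb`) are illustrative and not the repository's actual API.

```python
import torch

def ema_grad_filter(model, ema_grads, alpha=0.98, lamb=2.0):
    """Sketch of slow-gradient amplification: keep an EMA of each
    parameter's gradient and add a scaled copy of it back onto the
    raw gradient before the optimizer step."""
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        if name not in ema_grads:
            ema_grads[name] = torch.zeros_like(p.grad)
        # Update the slow (low-pass filtered) gradient component.
        ema_grads[name].mul_(alpha).add_(p.grad, alpha=1.0 - alpha)
        # Amplify the slow component on top of the instantaneous gradient.
        p.grad.add_(ema_grads[name], alpha=lamb)
    return ema_grads

# Usage inside a training loop (sketch):
#   loss.backward()
#   ema_grads = ema_grad_filter(model, ema_grads)
#   optimizer.step()
#   optimizer.zero_grad()
```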
Alternatives and similar repositories for grokfast
Users interested in grokfast are comparing it to the libraries listed below.
- The AdEMAMix Optimizer: Better, Faster, Older. ☆186 · Updated last year
- Efficient optimizers ☆261 · Updated last month
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆290 · Updated 3 months ago
- Annotated version of the Mamba paper ☆489 · Updated last year
- Getting crystal-like representations with harmonic loss ☆194 · Updated 5 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆909 · Updated 4 months ago
- ☆102 · Updated last month
- ☆210 · Updated 9 months ago
- A repository for log-time feedforward networks ☆223 · Updated last year
- Official JAX implementation of xLSTM including fast and efficient training and inference code. 7B model available at https://huggingface.… ☆102 · Updated 8 months ago
- DeMo: Decoupled Momentum Optimization ☆190 · Updated 9 months ago
- ☆120 · Updated 8 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆293 · Updated 2 months ago
- Normalized Transformer (nGPT) ☆188 · Updated 9 months ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆322 · Updated 10 months ago
- Open weights language model from Google DeepMind, based on Griffin. ☆651 · Updated 3 months ago
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆427 · Updated 9 months ago
- For optimization algorithm research and development. ☆536 · Updated this week
- Pretraining and inference code for a large-scale depth-recurrent language model ☆827 · Updated last week
- Our solution for the arc challenge 2024 ☆176 · Updated 3 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆272 · Updated 10 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆559 · Updated 8 months ago
- 🧱 Modula software package ☆237 · Updated last month
- PyTorch Code for Energy-Based Transformers paper -- generalizable reasoning and scalable learning ☆492 · Updated 2 weeks ago
- H-Net: Hierarchical Network with Dynamic Chunking ☆713 · Updated last month
- Reverse Engineering the Abstraction and Reasoning Corpus ☆304 · Updated 6 months ago
- The boundary of neural network trainability is fractal ☆215 · Updated last year
- ☆279 · Updated last year