ironjr / grokfast
Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆555 · Updated last year
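For context, the paper's title refers to a gradient-filtering idea: amplify the slow-moving (low-frequency) component of parameter gradients so the delayed "grokking" generalization phase arrives sooner. Below is a minimal, hedged sketch of that idea using an exponential moving average of gradients; the function name `gradfilter_ema` and the hyperparameters `alpha` and `lamb` are illustrative assumptions, not necessarily the repository's actual API.

```python
# Hypothetical sketch of "amplifying slow gradients": keep an EMA of each
# parameter's gradient and add a scaled copy back before the optimizer step.
# Names (gradfilter_ema, alpha, lamb) are illustrative, not the repo's API.
import torch

def gradfilter_ema(model, ema_grads, alpha=0.98, lamb=2.0):
    """Low-pass filter gradients with an EMA and amplify the slow component."""
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        if name not in ema_grads:
            ema_grads[name] = param.grad.detach().clone()
        else:
            # EMA update: h <- alpha * h + (1 - alpha) * g
            ema_grads[name].mul_(alpha).add_(param.grad.detach(), alpha=1 - alpha)
        # Amplify the slow component: g_hat = g + lamb * EMA(g)
        param.grad.add_(ema_grads[name], alpha=lamb)
    return ema_grads

# Usage inside a training loop (after loss.backward(), before optimizer.step()):
#   ema = {}
#   loss.backward()
#   ema = gradfilter_ema(model, ema)
#   optimizer.step()
```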
Alternatives and similar repositories for grokfast
Users interested in grokfast are comparing it to the libraries listed below.
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 10 months ago
- Efficient optimizers ☆234 · Updated this week
- Annotated version of the Mamba paper ☆486 · Updated last year
- Getting crystal-like representations with harmonic loss ☆191 · Updated 3 months ago
- ☆197 · Updated 7 months ago
- A repository for log-time feedforward networks ☆222 · Updated last year
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆287 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆888 · Updated 2 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793 ☆429 · Updated 2 months ago
- For optimization algorithm research and development. ☆521 · Updated this week
- Open weights language model from Google DeepMind, based on Griffin. ☆644 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆555 · Updated 6 months ago
- Official JAX implementation of xLSTM including fast and efficient training and inference code. 7B model available at https://huggingface.… ☆97 · Updated 6 months ago
- Simple, minimal implementation of the Mamba SSM in one pytorch file. Using logcumsumexp (Heisen sequence). ☆120 · Updated 8 months ago
- ☆116 · Updated 6 months ago
- ☆273 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆190 · Updated last year
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆316 · Updated 8 months ago
- ☆98 · Updated 5 months ago
- ☆304 · Updated last year
- Pretraining code for a large-scale depth-recurrent language model ☆801 · Updated this week
- The repository for the code of the UltraFastBERT paper ☆516 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆189 · Updated 7 months ago
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆161 · Updated 6 months ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆348 · Updated 11 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆120 · Updated last year
- 🧱 Modula software package ☆204 · Updated 3 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆279 · Updated last year
- Reasoning Computers. Lambda Calculus, Fully Differentiable. Also Neural Stacks, Queues, Arrays, Lists, Trees, and Latches. ☆264 · Updated 8 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆263 · Updated 4 months ago