Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆578 · updated Jun 28, 2024
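The technique named in the paper title can be sketched as a low-pass gradient filter: keep an exponential moving average (EMA) of each parameter's gradient and add the amplified slow component back before the optimizer step. The sketch below is a minimal, framework-free illustration assuming the EMA variant; the function and parameter names (`gradfilter_ema`, `alpha`, `lamb`) follow the paper's conventions but the dict-based interface here is hypothetical, not the repository's actual API.

```python
# Hedged sketch of a Grokfast-style EMA gradient filter: amplify the
# slow (low-frequency) gradient component by adding lamb * ema back.

def gradfilter_ema(grads, ema, alpha=0.98, lamb=2.0):
    """grads/ema: dicts mapping parameter name -> gradient value.
    Returns (filtered_grads, updated_ema)."""
    new_ema = {}
    filtered = {}
    for name, g in grads.items():
        # EMA tracks the slow-moving part of the gradient signal.
        h = alpha * ema.get(name, 0.0) + (1 - alpha) * g
        new_ema[name] = h
        # Amplified slow component is added back to the raw gradient.
        filtered[name] = g + lamb * h
    return filtered, new_ema

# Usage: call once per training step, before the optimizer update.
ema = {}
g1, ema = gradfilter_ema({"w": 1.0}, ema)  # first step seeds the EMA
g2, ema = gradfilter_ema({"w": 1.0}, ema)  # slow component grows over steps
```

Each filtered gradient then replaces the raw gradient in whatever optimizer is used; the filter is optimizer-agnostic by design.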
Alternatives and similar repositories for grokfast
Users interested in grokfast are comparing it to the libraries listed below.
- ☆138 · updated Aug 19, 2024
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" (☆234 · updated Jul 19, 2025)
- ☆113 · updated Jul 23, 2025
- Deep Networks Grok All the Time and Here is Why (☆38 · updated May 18, 2024)
- Implementation for MatMul-free LM (☆3,056 · updated Dec 2, 2025)
- The AdEMAMix Optimizer: Better, Faster, Older (☆186 · updated Sep 12, 2024)
- Pretraining and inference code for a large-scale depth-recurrent language model (☆865 · updated Dec 29, 2025)
- Omnigrok: Grokking Beyond Algorithmic Data (☆63 · updated Feb 24, 2023)
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling (☆952 · updated Nov 16, 2025)
- Schedule-Free Optimization in PyTorch (☆2,262 · updated May 21, 2025)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (☆1,678 · updated Oct 28, 2024)
- ☆316 · updated Jun 21, 2024
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) (☆453 · updated May 13, 2025)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆199 · updated May 28, 2024)
- Entropy-Based Sampling and Parallel CoT Decoding (☆3,432 · updated Nov 13, 2024)
- DeMo: Decoupled Momentum Optimization (☆198 · updated Dec 2, 2024)
- Combining SOAP and MUON (☆19 · updated Feb 11, 2025)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆915 · updated Feb 26, 2026)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆373 · updated Dec 12, 2024)
- Official code repository for the paper "Key-value memory in the brain" (☆31 · updated Feb 25, 2025)
- NanoGPT (124M) in 2 minutes (☆4,734 · updated Feb 27, 2026)
- Convolutions for Sequence Modeling (☆913 · updated Jun 13, 2024)
- Distributed Training Over-The-Internet (☆980 · updated Oct 14, 2025)
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" (☆13 · updated Jul 18, 2024)
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆473 · updated Apr 21, 2024)
- ☆1,033 · updated Dec 17, 2024
- Stanford NLP Python library for Representation Finetuning (ReFT) (☆1,560 · updated Jan 14, 2026)
- Code for the BLT research paper (☆2,029 · updated Nov 3, 2025)
- A pure and fast NumPy implementation of Mamba with cache support (☆18 · updated Jun 16, 2024)
- Tools for merging pretrained large language models (☆6,842 · updated Feb 28, 2026)
- [ICLR 2025 Spotlight🔥] Official implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters (☆588 · updated Feb 11, 2025)
- Kolmogorov-Arnold Networks (☆16,187 · updated Jan 19, 2025)
- Tile primitives for speedy kernels (☆3,202 · updated Feb 24, 2026)
- Normalized Transformer (nGPT) (☆198 · updated Nov 19, 2024)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters (☆133 · updated Dec 3, 2024)
- ☆27 · updated Feb 1, 2023
- Distilabel is a framework for synthetic data and AI feedback, for engineers who need fast, reliable, and scalable pipelines based on verifi… (☆3,114 · updated Mar 2, 2026)
- Repository for Sparse Universal Transformers (☆20 · updated Oct 23, 2023)
- Efficient Triton Kernels for LLM Training (☆6,189 · updated this week)