Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients"
☆579, updated Jun 28, 2024
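For context on what many of the listed repositories build on: Grokfast accelerates grokking by low-pass filtering each parameter's gradient with an exponential moving average (EMA) and adding the amplified slow component back before the optimizer step. Below is a minimal NumPy sketch of the EMA variant; the function name and the `alpha`/`lamb` defaults are illustrative, loosely following the paper's hyperparameters rather than the official code.

```python
import numpy as np

def gradfilter_ema(grad, ema_state, alpha=0.98, lamb=2.0):
    """One step of a Grokfast-style EMA gradient filter (illustrative sketch).

    The slow component of the gradient is tracked with an exponential
    moving average and added back to the raw gradient, amplified by `lamb`.
    """
    ema_state = alpha * ema_state + (1.0 - alpha) * grad
    return grad + lamb * ema_state, ema_state

# Usage: for a steady (purely slow) gradient, the filtered gradient
# approaches (1 + lamb) times the raw gradient as the EMA converges.
ema = np.zeros(3)
g = np.ones(3)
for _ in range(500):
    filtered, ema = gradfilter_ema(g, ema)
```

In a training loop, the filter would be applied per parameter between `loss.backward()` and `optimizer.step()`, with one EMA buffer kept per parameter tensor.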
Alternatives and similar repositories for grokfast
Users interested in grokfast are comparing it to the repositories listed below.
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" (☆104, updated Dec 22, 2024)
- ☆138, updated Aug 19, 2024
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" (☆238, updated Jul 19, 2025)
- ☆118, updated Jul 23, 2025
- Deep Networks Grok All the Time and Here is Why (☆39, updated Apr 20, 2026)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆201, updated May 28, 2024)
- Omnigrok: Grokking Beyond Algorithmic Data (☆64, updated Feb 24, 2023)
- Pretraining and inference code for a large-scale depth-recurrent language model (☆883, updated Dec 29, 2025)
- The AdEMAMix Optimizer: Better, Faster, Older (☆188, updated Sep 12, 2024)
- Implementation of OpenAI's "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" paper (☆43, updated May 2, 2026)
- Implementation of the MatMul-free LM (☆3,057, updated Dec 2, 2025)
- ☆28, updated Feb 1, 2023
- Schedule-Free Optimization in PyTorch (☆2,276, updated May 21, 2025)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (☆1,690, updated Oct 28, 2024)
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI (☆293, updated Jun 3, 2025)
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling (☆958, updated Nov 16, 2025)
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) (☆458, updated May 13, 2025)
- ☆317, updated Jun 21, 2024
- Entropy Based Sampling and Parallel CoT Decoding (☆3,431, updated Nov 13, 2024)
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" (☆13, updated Jul 18, 2024)
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters (☆133, updated Dec 3, 2024)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆376, updated Dec 12, 2024)
- Joint use of the CPO and SimPO methods for better reference-free preference learning (☆57, updated Aug 13, 2024)
- Official implementation of Half-Quadratic Quantization (HQQ) (☆933, updated Feb 26, 2026)
- DeMo: Decoupled Momentum Optimization (☆201, updated Dec 2, 2024)
- Convolutions for Sequence Modeling (☆912, updated Jun 13, 2024)
- A pure and fast NumPy implementation of Mamba with cache support (☆18, updated Jun 16, 2024)
- BitLinear implementation (☆35, updated this week)
- Combining SOAP and Muon (☆20, updated Feb 11, 2025)
- Code for the BLT research paper (☆2,036, updated Nov 3, 2025)
- smol models are fun too (☆93, updated Nov 9, 2024)
- Stanford NLP Python library for Representation Finetuning (ReFT) (☆1,566, updated Mar 5, 2026)
- NanoGPT (124M) in 90 seconds (☆5,200, updated this week)
- Repository for Sparse Universal Transformers (☆20, updated Oct 23, 2023)
- The Gaussian Histogram Loss (HL-Gauss) proposed by Imani et al., with a few convenient wrappers for regression, in PyTorch (☆79, updated Apr 3, 2026)
- Normalized Transformer (nGPT) (☆204, updated Nov 19, 2024)
- Official repository for the paper "Automating Continual Learning" (☆18, updated Jun 11, 2025)
- Muon is an optimizer for the hidden layers of neural networks (☆2,544, updated Jan 19, 2026)
- Kolmogorov-Arnold Networks (☆16,266, updated Jan 19, 2025)