danielmamay / grokking
Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper.
☆39 · Updated 2 years ago
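The paper's "small algorithmic datasets" are exhaustive binary-operation tables, such as addition modulo a prime, with a fixed fraction of the table held out for validation. The sketch below is an illustrative reconstruction of that setup, not code from this repository; the function name and default values are assumptions.

```python
# Illustrative sketch only (not this repository's code): the grokking paper
# trains a small transformer on exhaustive binary-operation tables such as
# modular addition, splitting the table into train and validation halves.
import itertools
import random

def modular_addition_dataset(p=97, train_fraction=0.5, seed=0):
    """Enumerate all (a, b, (a + b) mod p) triples and split them into train/val."""
    triples = [(a, b, (a + b) % p) for a, b in itertools.product(range(p), repeat=2)]
    random.Random(seed).shuffle(triples)
    cut = int(train_fraction * len(triples))
    return triples[:cut], triples[cut:]

train, val = modular_addition_dataset()
print(len(train), len(val))  # 4704 4705 for p=97 and a 50/50 split
```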
Alternatives and similar repositories for grokking
Users interested in grokking are comparing it to the libraries listed below:
- unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- nanoGPT-like codebase for LLM training ☆109 · Updated last week
- ☆45 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆193 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data ☆62 · Updated 2 years ago
- A centralized place for deep thinking code and experiments ☆87 · Updated 2 years ago
- Official repository for our paper, Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode… ☆19 · Updated 11 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆133 · Updated 10 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆85 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- ☆81 · Updated last year
- ☆166 · Updated 2 years ago
- ☆33 · Updated 10 months ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- ☆185 · Updated last year
- ☆32 · Updated 7 months ago
- ☆56 · Updated last year
- Fluid Language Model Benchmarking ☆20 · Updated 2 months ago
- Universal Neurons in GPT2 Language Models ☆31 · Updated last year
- Evaluation of neuro-symbolic engines ☆39 · Updated last year
- ☆11 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆172 · Updated 4 months ago
- Sparse Autoencoder Training Library ☆55 · Updated 6 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆20 · Updated 3 weeks ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated 2 years ago
- ☆53 · Updated last year
- ☆83 · Updated 2 years ago
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆233 · Updated 3 months ago