Sea-Snell / grokking
unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets"
☆78 · Updated 3 years ago
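For context, a minimal sketch (not taken from this repository; the function name and split fraction are illustrative assumptions) of the kind of small algorithmic dataset the grokking paper studies: all pairs (a, b) labeled with (a + b) mod p, with only a fraction used for training.

```python
# Minimal sketch of a modular-addition dataset in the style of the grokking
# paper (hypothetical helper, not part of Sea-Snell/grokking itself).
import itertools
import random

import torch


def modular_addition_dataset(p=97, train_frac=0.3, seed=0):
    """Build token sequences [a, op, b, eq] with target (a + b) % p."""
    OP, EQ = p, p + 1  # extra token ids standing in for "+" and "="
    pairs = list(itertools.product(range(p), repeat=2))
    random.Random(seed).shuffle(pairs)

    inputs = torch.tensor([[a, OP, b, EQ] for a, b in pairs])
    targets = torch.tensor([(a + b) % p for a, b in pairs])

    n_train = int(train_frac * len(pairs))
    return (inputs[:n_train], targets[:n_train]), (inputs[n_train:], targets[n_train:])


if __name__ == "__main__":
    (x_tr, y_tr), (x_va, y_va) = modular_addition_dataset()
    print(x_tr.shape, y_tr.shape, x_va.shape, y_va.shape)
    # A small transformer trained on the training split with AdamW and weight
    # decay first memorizes it, then much later generalizes to the held-out
    # pairs -- the "grokking" phenomenon the paper describes.
```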
Alternatives and similar repositories for grokking
Users interested in grokking are comparing it to the libraries listed below.
- Implementation of OpenAI's 'Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets' paper. ☆38 · Updated last year
- nanoGPT-like codebase for LLM training ☆102 · Updated 2 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆186 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆127 · Updated 2 years ago
- ☆184 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Scaling scaling laws with board games. ☆51 · Updated 2 years ago
- A centralized place for deep thinking code and experiments ☆85 · Updated last year
- Neural Networks and the Chomsky Hierarchy ☆207 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- ☆83 · Updated last year
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- ☆166 · Updated 2 years ago
- Omnigrok: Grokking Beyond Algorithmic Data ☆60 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆123 · Updated 7 months ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- ☆68 · Updated 2 years ago
- ☆53 · Updated last year
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆108 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- ☆28 · Updated last year
- Interpreting how transformers simulate agents performing RL tasks ☆87 · Updated last year
- Train very large language models in Jax. ☆206 · Updated last year
- ☆234 · Updated last year
- ☆124 · Updated last year
- Code associated to papers on superposition (in ML interpretability) ☆29 · Updated 2 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month