neelnanda-io / GrokkingLinks
A Mechanistic Interpretability Analysis of Grokking
☆24 · Updated 3 years ago
Alternatives and similar repositories for Grokking
Users interested in Grokking are comparing it to the repositories listed below.
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆132 · Updated 3 years ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Open source interpretability artefacts for R1 ☆165 · Updated 8 months ago
- ☆64 · Updated 3 weeks ago
- Learning Universal Predictors ☆81 · Updated last year
- Accompanying codebase for neuroscope.io, a website for displaying max-activating dataset examples for language model neurons ☆13 · Updated 2 years ago
- ☆150 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- ☆31 · Updated 9 months ago
- ☆132 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training" ☆123 · Updated last year
- ☆112 · Updated 10 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable ☆174 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆197 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆81 · Updated 3 years ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆135 · Updated last year
- Tools for studying developmental interpretability in neural networks ☆119 · Updated last week
- ☆127 · Updated 2 months ago
- we got you bro ☆36 · Updated last year
- ☆24 · Updated 8 months ago
- ☆17 · Updated 3 weeks ago
- AlgoTune is a NeurIPS 2025 benchmark made up of 154 math, physics, and computer science problems. The goal is to write code that solves each… ☆80 · Updated this week
- ☆54 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) ☆236 · Updated last year
- Harmonic Datasets ☆52 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- Applying SAEs for fine-grained control ☆25 · Updated last year