neelnanda-io / Grokking
A Mechanistic Interpretability Analysis of Grokking
☆21, updated 2 years ago
Alternatives and similar repositories for Grokking
Users interested in Grokking are comparing it to the repositories listed below.
- Code for reproducing our paper "Not All Language Model Features Are Linear" (☆74, updated 5 months ago)
- 🧠 Starter templates for doing interpretability research (☆70, updated last year)
- Accompanying codebase for neuroscope.io, a website for displaying max activating dataset examples for language model neurons (☆12, updated 2 years ago)
- (☆27, updated last year)
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (☆78, updated 2 years ago)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆188, updated 11 months ago)
- we got you bro (☆35, updated 9 months ago)
- (☆121, updated last year)
- Mechanistic Interpretability for Transformer Models (☆50, updated 2 years ago)
- (☆26, updated 2 years ago)
- Harmonic Datasets (☆39, updated 10 months ago)
- A library for bridging Python and HTML/JavaScript (via Svelte) for creating interactive visualizations (☆14, updated last year)
- A MAD laboratory to improve AI architecture designs 🧪 (☆115, updated 5 months ago)
- (☆54, updated 7 months ago)
- Open source interpretability artefacts for R1 (☆131, updated 3 weeks ago)
- Evaluation of neuro-symbolic engines (☆35, updated 9 months ago)
- (☆29, updated 3 months ago)
- (☆94, updated 3 months ago)
- Implementation of OpenAI's "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" paper (☆36, updated last year)
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… (☆26, updated 11 months ago)
- Universal Neurons in GPT2 Language Models (☆29, updated 11 months ago)
- (☆129, updated last month)
- Code associated with papers on superposition (in ML interpretability) (☆28, updated 2 years ago)
- Sparse Autoencoder Training Library (☆49, updated 2 weeks ago)
- The simplest, fastest repository for training/finetuning medium-sized GPTs (☆120, updated last week)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) (☆201, updated 5 months ago)
- (☆38, updated 2 weeks ago)
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper (☆122, updated 2 years ago)
- (☆13, updated 6 months ago)
- Tools for studying developmental interpretability in neural networks (☆89, updated 3 months ago)