neelnanda-io / Grokking
A Mechanistic Interpretability Analysis of Grokking
☆26 · Updated 3 years ago
Alternatives and similar repositories for Grokking
Users interested in Grokking are comparing it to the repositories listed below.
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆133 · Updated 3 years ago
- ☆152 · Updated 4 months ago
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆199 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆83 · Updated 3 years ago
- we got you bro ☆37 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- Learning Universal Predictors ☆81 · Updated last year
- Open source interpretability artefacts for R1. ☆169 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ☆124 · Updated last month
- Accompanying codebase for neuroscope.io, a website for displaying max activating dataset examples for language model neurons ☆13 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- ☆132 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆112 · Updated 11 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆238 · Updated last year
- ☆65 · Updated last week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- ☆36 · Updated last year
- Sparse Autoencoder Training Library ☆56 · Updated 8 months ago
- Attribution-based Parameter Decomposition ☆33 · Updated 7 months ago
- ☆29 · Updated last year
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆107 · Updated 2 months ago
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆174 · Updated 2 years ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆138 · Updated 11 months ago
- ☆31 · Updated 10 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Code associated with papers on superposition (in ML interpretability) ☆35 · Updated 3 years ago
- Materials for ConceptARC paper ☆112 · Updated last year