anthropics / toy-models-of-superposition
Notebooks accompanying Anthropic's "Toy Models of Superposition" paper
☆125 · Updated 2 years ago
Alternatives and similar repositories for toy-models-of-superposition
Users interested in toy-models-of-superposition are comparing it to the libraries listed below.
- ☆121 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆251 · Updated 5 months ago
- ☆222 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆180 · Updated this week
- ☆116 · Updated 9 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆200 · Updated 5 months ago
- Sparse Autoencoder Training Library ☆50 · Updated last month
- Tools for studying developmental interpretability in neural networks. ☆90 · Updated 4 months ago
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- ☆120 · Updated 6 months ago
- ☆96 · Updated 3 months ago
- ☆39 · Updated 3 weeks ago
- ☆66 · Updated 2 years ago
- Open-source replication of Anthropic's Crosscoders for model diffing ☆55 · Updated 7 months ago
- ☆27 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- METR Task Standard ☆147 · Updated 3 months ago
- Emergent world representations: exploring a sequence model trained on a synthetic task ☆181 · Updated last year
- ☆96 · Updated last month
- Using sparse coding to find distributed representations used by neural networks. ☆247 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- Sparse Autoencoder for Mechanistic Interpretability ☆248 · Updated 10 months ago
- ☆130 · Updated 2 months ago
- ☆93 · Updated 3 months ago
- ☆31 · Updated last year
- ☆75 · Updated 3 months ago
- ☆42 · Updated 6 months ago
- ☆170 · Updated last month
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆213 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated last year