anthropics / toy-models-of-superposition
Notebooks accompanying Anthropic's "Toy Models of Superposition" paper
☆126 · Updated 2 years ago
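The paper behind these notebooks studies how a model with fewer hidden dimensions than input features can still represent sparse features by storing them in overlapping directions ("superposition"). A minimal NumPy sketch in the general shape of that setup, a tied-weight ReLU model x̂ = ReLU(WᵀWx + b); the feature counts, sparsity, and learning rate here are illustrative assumptions, not the paper's exact values:

```python
import numpy as np

# Hypothetical toy setup: n_feat sparse features squeezed into n_hid < n_feat
# hidden dimensions, reconstructed with tied weights and a ReLU output.
rng = np.random.default_rng(0)
n_feat, n_hid = 5, 2

def batch(n, p_present=0.1):
    """Sparse data: each feature is active with probability p_present, else 0."""
    x = rng.uniform(size=(n, n_feat))
    return x * (rng.uniform(size=(n, n_feat)) < p_present)

def forward(x, W, b):
    h = x @ W.T                        # encode: (n, n_hid)
    pre = h @ W + b                    # decode with the transposed weights
    return h, pre, np.maximum(0.0, pre)

def mse(x, W, b):
    return ((forward(x, W, b)[2] - x) ** 2).mean()

W = 0.1 * rng.standard_normal((n_hid, n_feat))
b = np.zeros(n_feat)

x_eval = batch(4096)                   # fixed held-out batch for evaluation
loss_before = mse(x_eval, W, b)

lr = 1.0
for _ in range(2000):                  # plain SGD on squared reconstruction error
    x = batch(256)
    h, pre, x_hat = forward(x, W, b)
    g = 2.0 * (x_hat - x) * (pre > 0) / x.size   # dL/d(pre-activation)
    grad_W = h.T @ g + (g @ W.T).T @ x           # W appears in encode and decode
    W -= lr * grad_W
    b -= lr * g.sum(axis=0)

loss_after = mse(x_eval, W, b)
print(loss_after < loss_before)        # reconstruction improves on held-out data
```

With more features than hidden dimensions and enough sparsity, the learned columns of W typically end up sharing directions rather than each feature getting an orthogonal axis, which is the phenomenon the repo's notebooks explore.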
Alternatives and similar repositories for toy-models-of-superposition
Users interested in toy-models-of-superposition are comparing it to the libraries listed below.
- ☆121 · Updated last year
- ☆226 · Updated 8 months ago
- Mechanistic Interpretability Visualizations using React ☆255 · Updated 6 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- ☆134 · Updated 2 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆202 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆184 · Updated last week
- A library for efficient patching and automatic circuit discovery. ☆66 · Updated 2 months ago
- Tools for studying developmental interpretability in neural networks. ☆94 · Updated 4 months ago
- ☆119 · Updated 10 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- ☆57 · Updated last week
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- ☆67 · Updated 2 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆216 · Updated last year
- Sparse Autoencoder for Mechanistic Interpretability ☆250 · Updated 11 months ago
- ☆28 · Updated last year
- Attribution-based Parameter Decomposition ☆25 · Updated last week
- ☆44 · Updated 7 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆253 · Updated last year
- ☆101 · Updated 2 weeks ago
- ☆26 · Updated 2 years ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆106 · Updated 4 months ago
- ☆97 · Updated 4 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆206 · Updated last week
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆181 · Updated last year
- ☆95 · Updated 4 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- ☆129 · Updated 7 months ago