anthropics / toy-models-of-superposition
Notebooks accompanying Anthropic's "Toy Models of Superposition" paper
☆133 · Updated 3 years ago
Alternatives and similar repositories for toy-models-of-superposition
Users interested in toy-models-of-superposition are also comparing it to the repositories listed below.
- ☆132 · Updated 2 years ago
- ☆76 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Sparse Autoencoder Training Library ☆56 · Updated 9 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆200 · Updated 2 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆238 · Updated last year
- ☆152 · Updated 4 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆238 · Updated last week
- ☆137 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆242 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆88 · Updated last month
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- Attribution-based Parameter Decomposition ☆33 · Updated 7 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- ☆265 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆318 · Updated last year
- ☆112 · Updated 11 months ago
- Materials for ConceptARC paper ☆112 · Updated last year
- ☆29 · Updated last year
- ☆86 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- Applying SAEs for fine-grained control ☆25 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆124 · Updated last month
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆217 · Updated last week
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆198 · Updated last year
- Utilities for the HuggingFace transformers library ☆74 · Updated 3 years ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆115 · Updated 11 months ago
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago