likenneth / othello_world
Emergent world representations: Exploring a sequence model trained on a synthetic task
☆ 170 · Updated last year
Related projects
Alternatives and complementary repositories for othello_world
- Mechanistic Interpretability Visualizations using React (☆ 200, updated 4 months ago)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) (☆ 158, updated last month)
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper (☆ 97, updated 2 years ago)
- Mechanistic Interpretability for Transformer Models (☆ 49, updated 2 years ago)
- Using sparse coding to find distributed representations used by neural networks (☆ 188, updated last year)
- Steering vectors for transformer language models in Pytorch / Huggingface (☆ 65, updated this week)
- A repository for transformer critique learning and generation (☆ 86, updated 11 months ago)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆ 186, updated this week)
- Tools for understanding how transformer predictions are built layer-by-layer (☆ 432, updated 5 months ago)
- Inspecting and Editing Knowledge Representations in Language Models (☆ 108, updated last year)
- [NeurIPS 2023] Learning Transformer Programs (☆ 157, updated 6 months ago)
- Code for reproducing the paper "Not All Language Model Features Are Linear" (☆ 61, updated last week)
- A library for efficient patching and automatic circuit discovery (☆ 31, updated last month)
- Tools for studying developmental interpretability in neural networks (☆ 77, updated last week)
- Sparse and discrete interpretability tool for neural networks (☆ 55, updated 9 months ago)