timaeus-research / devinterp
Tools for studying developmental interpretability in neural networks.
☆126 · Updated last month
Alternatives and similar repositories for devinterp
Users interested in devinterp are comparing it to the libraries listed below.
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆238 · Updated 5 months ago
- Mechanistic Interpretability Visualizations using React ☆320 · Updated last year
- ☆132 · Updated 2 years ago
- ☆267 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆135 · Updated 3 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆238 · Updated last year
- we got you bro ☆37 · Updated last year
- ☆70 · Updated last month
- ☆36 · Updated last year
- Attribution-based Parameter Decomposition ☆33 · Updated 7 months ago
- (Model-written) LLM evals library ☆18 · Updated last year
- Stochastic Parameter Decomposition ☆63 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆290 · Updated last year
- ☆29 · Updated last year
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆153 · Updated this week
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆201 · Updated 2 years ago
- METR Task Standard ☆173 · Updated last year
- A library for bridging Python and HTML/JavaScript (via Svelte) for creating interactive visualizations ☆14 · Updated last year
- Sparse Autoencoder Training Library ☆56 · Updated 9 months ago
- Erasing concepts from neural representations with provable guarantees ☆243 · Updated last year
- ☆21 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp ☆81 · Updated 3 years ago
- Redwood Research's transformer interpretability tools ☆15 · Updated 3 years ago
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- ☆284 · Updated last year
- ☆66 · Updated 2 years ago
- ☆389 · Updated 5 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆217 · Updated last week
- ☆152 · Updated 5 months ago