TransformerLensOrg / CircuitsVis
Mechanistic Interpretability Visualizations using React
☆ 239 · Updated 3 months ago
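As a quick orientation, CircuitsVis also ships Python bindings that render its React components from a notebook. Below is a minimal sketch, assuming the `circuitsvis` package and the `attention_patterns` helper keep the interface shown in the TransformerLens demo notebooks; the tokens and attention tensor here are dummy data, not from any real model.

```python
# Minimal sketch: rendering an attention pattern with the circuitsvis Python
# bindings. Assumes cv.attention.attention_patterns(tokens=..., attention=...)
# as used in the TransformerLens demo notebooks; the data below is synthetic.
import torch
import circuitsvis as cv

tokens = ["The", " cat", " sat", " on", " the", " mat"]  # one string per position
n_heads, seq_len = 4, len(tokens)

# Dummy attention of shape [n_heads, dest_pos, src_pos], rows normalised like softmax output.
attention = torch.rand(n_heads, seq_len, seq_len)
attention = attention / attention.sum(dim=-1, keepdim=True)

# Returns a renderable HTML object; in Jupyter the last expression displays inline.
html = cv.attention.attention_patterns(tokens=tokens, attention=attention)
print(html)  # outside a notebook, the rendered HTML can be written to a file instead
```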
Alternatives and similar repositories for CircuitsVis:
Users interested in CircuitsVis are comparing it to the libraries listed below.
- ☆ 217 · Updated 6 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆ 194 · Updated 4 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆ 239 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆ 166 · Updated this week
- ☆ 121 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆ 230 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆ 209 · Updated last year
- ☆ 270 · Updated 2 months ago
- ☆ 83 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learning models (see the sketch after this list). ☆ 539 · Updated last week
- Sparsify transformers with SAEs and transcoders ☆ 511 · Updated last week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆ 118 · Updated 2 years ago
- Training Sparse Autoencoders on Language Models ☆ 724 · Updated this week
- ☆ 91 · Updated 4 months ago
- ☆ 157 · Updated last week
- ☆ 115 · Updated 8 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆ 485 · Updated 10 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆ 94 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆ 70 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆ 87 · Updated 2 months ago
- A library for efficient patching and automatic circuit discovery. ☆ 62 · Updated 2 months ago
- Erasing concepts from neural representations with provable guarantees ☆ 228 · Updated 2 months ago
- ☆ 70 · Updated last month
- Steering Llama 2 with Contrastive Activation Addition ☆ 137 · Updated 10 months ago
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆ 92 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper, "Attribution Patching Outperforms Automated Circuit Discovery" ☆ 30 · Updated 10 months ago
- Open-source replication of Anthropic's Crosscoders for Model Diffing ☆ 49 · Updated 5 months ago
- ☆ 35 · Updated last month
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆ 198 · Updated last week
- ☆ 26 · Updated last year
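The nnsight entry above describes a tracing API for reading model internals. Here is a minimal sketch of that pattern, assuming nnsight's `LanguageModel` wrapper and `trace` context behave as in its own documentation; the model name, module path, and layer index are illustrative choices, not taken from this listing.

```python
# Minimal sketch of the nnsight tracing pattern referenced in the list above.
# Assumes nnsight's LanguageModel wrapper and .trace() context; the model name,
# module path, and layer index are illustrative.
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # Save the residual-stream output of block 5 and the final logits.
    hidden = model.transformer.h[5].output[0].save()
    logits = model.lm_head.output.save()

# After the trace exits, the saved proxies hold concrete tensors
# (older nnsight versions expose them via .value instead).
print(hidden.shape, logits.shape)
```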