TransformerLensOrg / CircuitsVis
Mechanistic Interpretability Visualizations using React
☆272 · Updated 7 months ago
Alternatives and similar repositories for CircuitsVis
Users interested in CircuitsVis are comparing it to the libraries listed below.
- ☆233 · Updated 10 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆619 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆207 · Updated 7 months ago
- ☆123 · Updated last year
- ☆320 · Updated 2 weeks ago
- Sparse Autoencoder for Mechanistic Interpretability ☆257 · Updated last year
- Sparsify transformers with SAEs and transcoders ☆595 · Updated this week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆219 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- Training Sparse Autoencoders on Language Models ☆895 · Updated this week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆200 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆261 · Updated last year
- ☆107 · Updated 2 weeks ago
- Tools for studying developmental interpretability in neural networks. ☆100 · Updated last month
- ☆154 · Updated 8 months ago
- ☆50 · Updated 8 months ago
- ☆183 · Updated 2 weeks ago
- ☆274 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆119 · Updated 5 months ago
- ☆121 · Updated 11 months ago
- ☆81 · Updated 5 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆195 · Updated 8 months ago
- Decoder-only transformer, built from scratch with PyTorch ☆30 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆127 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆231 · Updated 6 months ago
- Unified access to Large Language Model modules using NNsight ☆32 · Updated last week
- METR Task Standard ☆154 · Updated 5 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated last week
- Steering Llama 2 with Contrastive Activation Addition ☆167 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆57 · Updated 9 months ago