AlignmentResearch / tuned-lens
Tools for understanding how transformer predictions are built layer-by-layer
☆549 · Updated 3 months ago
Alternatives and similar repositories for tuned-lens
Users interested in tuned-lens are comparing it to the repositories listed below.
- ☆255 · Updated last year
- ☆283 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆302 · Updated 11 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆231 · Updated 11 months ago
- Sparsify transformers with SAEs and transcoders ☆665 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆834 · Updated last month
- Using sparse coding to find distributed representations used by neural networks. ☆286 · Updated 2 years ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆228 · Updated this week
- ☆189 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆239 · Updated 10 months ago
- ☆132 · Updated 2 years ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆707 · Updated this week
- ☆238 · Updated last year
- ☆549 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆214 · Updated this week
- ☆367 · Updated 3 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆284 · Updated last year
- ☆130 · Updated last year
- ☆136 · Updated 2 weeks ago
- Steering Llama 2 with Contrastive Activation Addition ☆195 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆385 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆130 · Updated 9 months ago
- Editing Models with Task Arithmetic ☆515 · Updated last year
- Training Sparse Autoencoders on Language Models ☆1,075 · Updated last week
- ☆196 · Updated last month
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆532 · Updated last year
- ☆83 · Updated 9 months ago
- Interpretability for sequence generation models 🐛 🔍 ☆447 · Updated last month
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆556 · Updated 10 months ago