AlignmentResearch / tuned-lens
Tools for understanding how transformer predictions are built layer-by-layer
☆567 · Updated 6 months ago
Alternatives and similar repositories for tuned-lens
Users interested in tuned-lens are comparing it to the libraries listed below.
- Sparsify transformers with SAEs and transcoders ☆688 · Updated 2 weeks ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆857 · Updated last week
- ☆267 · Updated last year
- ☆284 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆320 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆238 · Updated last year
- ☆132 · Updated 2 years ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆241 · Updated 2 weeks ago
- ☆245 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆243 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆293 · Updated 2 years ago
- ☆389 · Updated 5 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆290 · Updated last year
- ☆197 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆391 · Updated last year
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆800 · Updated this week
- ☆567 · Updated last year
- ☆138 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆207 · Updated last year
- Training Sparse Autoencoders on Language Models ☆1,193 · Updated this week
- Editing Models with Task Arithmetic ☆529 · Updated 2 years ago
- ☆143 · Updated last month
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆570 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆535 · Updated 2 years ago
- ☆206 · Updated 3 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆140 · Updated 11 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆945 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆217 · Updated 2 weeks ago
- ☆304 · Updated 2 years ago
- Interpretability for sequence generation models 🐛 🔍 ☆453 · Updated last week