AlignmentResearch / tuned-lens
Tools for understanding how transformer predictions are built layer-by-layer
☆430 · Updated 5 months ago
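For context on "layer-by-layer": the sketch below is a minimal logit-lens-style decoding loop, which reads each layer's hidden state through GPT-2's own unembedding matrix. It is an illustrative assumption, not code from the tuned-lens repo — tuned-lens itself goes further and trains a learned affine translator per layer — and the model name and prompt are placeholders.

```python
# Hypothetical logit-lens-style sketch (not the tuned-lens API itself).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any GPT-2-style causal LM with transformer.ln_f
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

unembed = model.get_output_embeddings().weight  # (vocab, hidden)
ln_f = model.transformer.ln_f                   # GPT-2's final layer norm

# Decode the last position of every layer's residual stream into a token guess.
for layer, h in enumerate(out.hidden_states):
    logits = ln_f(h[0, -1]) @ unembed.T
    print(f"layer {layer:2d}: {tokenizer.decode(logits.argmax().item())!r}")
```

Printing the top token per layer shows how the prediction sharpens as depth increases; tuned-lens replaces the raw unembedding here with per-layer trained lenses to make the intermediate readouts less biased toward the final layer.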
Related projects
Alternatives and complementary repositories for tuned-lens
- Mechanistic Interpretability Visualizations using React ☆198 · Updated 4 months ago
- Erasing concepts from neural representations with provable guarantees ☆209 · Updated last week
- Training Sparse Autoencoders on Language Models ☆469 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆157 · Updated last month
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆186 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆184 · Updated last year
- Sparse autoencoders ☆342 · Updated last week
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆438 · Updated 9 months ago
- Scaling Data-Constrained Language Models ☆321 · Updated last month
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆402 · Updated this week
- Extract full next-token probabilities via language model APIs ☆229 · Updated 8 months ago
- Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions ☆641 · Updated 2 weeks ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆169 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆252 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆97 · Updated 5 months ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆465 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆178 · Updated 5 months ago
- Understand and test language model architectures on synthetic tasks. ☆162 · Updated 6 months ago
- Interpretability for sequence generation models 🐛 🔍 ☆377 · Updated last week
- RuLES: a benchmark for evaluating rule-following in language models ☆211 · Updated last month
- This repository collects all relevant resources about interpretability in LLMs ☆288 · Updated 2 weeks ago