AlignmentResearch / tuned-lens
Tools for understanding how transformer predictions are built layer-by-layer
☆475 · Updated 8 months ago
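A minimal sketch of the idea the repository is built around, assuming a plain Hugging Face GPT-2 model: decode each layer's hidden state through the model's final layer norm and unembedding to watch the next-token prediction form layer by layer. This is the simpler "logit lens" readout, not the tuned-lens API itself; the tuned lens additionally learns a small per-layer affine translator before decoding.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; any causal LM with an accessible final norm
# and unembedding can be read out the same way.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states holds the embedding output plus one tensor per block,
# each of shape [batch, seq, d_model].
for layer, h in enumerate(out.hidden_states):
    normed = model.transformer.ln_f(h[0, -1])   # final layer norm
    logits = model.lm_head(normed)              # unembedding
    top_id = int(logits.argmax())
    print(f"layer {layer:2d}: top next-token = {tokenizer.decode(top_id)!r}")
```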
Alternatives and similar repositories for tuned-lens:
Users interested in tuned-lens are comparing it to the libraries listed below.
- Using sparse coding to find distributed representations used by neural networks. ☆213 · Updated last year
- Sparsify transformers with SAEs and transcoders ☆461 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆232 · Updated 2 months ago
- Erasing concepts from neural representations with provable guarantees ☆222 · Updated 3 weeks ago
- Training Sparse Autoencoders on Language Models ☆619 · Updated this week
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆498 · Updated 3 weeks ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆182 · Updated 2 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆698 · Updated this week
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆462 · Updated last year
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆604 · Updated 10 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆216 · Updated 7 months ago
- Editing Models with Task Arithmetic ☆451 · Updated last year
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆175 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆123 · Updated 8 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆138 · Updated 4 months ago
- Extract full next-token probabilities via language model APIs ☆229 · Updated 11 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆195 · Updated last week
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆185 · Updated 8 months ago
- Scaling Data-Constrained Language Models ☆333 · Updated 4 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face (a minimal sketch of the underlying idea appears after this list) ☆88 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs ☆321 · Updated 3 months ago
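Several entries above (Contrastive Activation Addition, the steering-vectors library) revolve around the same move: add a direction to the residual stream at inference time. A minimal sketch under assumed choices (GPT-2, layer 6, scale 4.0, a sentiment contrast pair), not any particular repository's API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
LAYER, SCALE = 6, 4.0  # arbitrary illustrative choices

def resid_after_layer(prompt: str) -> torch.Tensor:
    """Residual-stream activation at the last token, after block LAYER."""
    ids = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states
    return hs[LAYER + 1][0, -1]  # hidden_states[0] is the embedding output

# Contrastive pair: the activation difference is the steering direction.
steer = resid_after_layer("I love this") - resid_after_layer("I hate this")

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] is the residual-stream tensor.
    return (output[0] + SCALE * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tokenizer("The movie was", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
handle.remove()
```

The contrast pair isolates the direction of interest; the layer index and the scale are the main knobs in this kind of intervention.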