TransformerLens: a library for mechanistic interpretability of GPT-style language models (☆3,112, updated this week)
Alternatives and similar repositories for TransformerLens
Users interested in TransformerLens are comparing it to the libraries listed below:
- Training Sparse Autoencoders on Language Models (☆1,219, updated this week)
- Mechanistic Interpretability Visualizations using React (☆326, updated Dec 18, 2024)
- The nnsight package enables interpreting and manipulating the internals of deep learning models. (☆825, updated this week)
- (unnamed repository) (☆271, updated Oct 1, 2024)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). (☆245, updated Dec 16, 2024)
- Sparsify transformers with SAEs and transcoders (☆696, updated this week)
- (unnamed repository) (☆944, updated this week)
- (unnamed repository) (☆395, updated Aug 21, 2025)
- Tools for understanding how transformer predictions are built layer-by-layer (☆567, updated Aug 7, 2025)
- Using sparse coding to find distributed representations used by neural networks. (☆297, updated Nov 10, 2023)
- (unnamed repository) (☆209, updated Oct 14, 2025)
- Stanford NLP Python library for understanding and improving PyTorch models via interventions (☆863, updated Jan 29, 2026)
- Sparse Autoencoder for Mechanistic Interpretability (☆292, updated Jul 20, 2024)
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … (☆243, updated this week)
- (unnamed repository) (☆140, updated Aug 4, 2024)
- (unnamed repository) (☆571, updated Jul 19, 2024)
- (unnamed repository) (☆132, updated Oct 28, 2023)
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). (☆340, updated Jul 23, 2025)
- (unnamed repository) (☆199, updated Nov 17, 2024)
- (unnamed repository) (☆150, updated Dec 30, 2025)
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. (☆238, updated Aug 11, 2025)
- Representation Engineering: A Top-Down Approach to AI Transparency (☆951, updated Aug 14, 2024)
- (unnamed repository) (☆1,071, updated Mar 6, 2024)
- Steering Llama 2 with Contrastive Activation Addition (☆212, updated May 23, 2024)
- Locating and editing factual associations in GPT (NeurIPS 2022) (☆728, updated Apr 20, 2024)
- This repository collects all relevant resources about interpretability in LLMs (☆390, updated Nov 1, 2024)
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" (☆351, updated Jun 13, 2025)
- A curated list of Large Language Model (LLM) Interpretability resources (☆1,473, updated Jun 22, 2025)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆217, updated Feb 16, 2026)
- (unnamed repository) (☆4,110, updated Jun 4, 2024)
- A framework for few-shot evaluation of language models (☆11,478, updated Feb 15, 2026)
- The hub for EleutherAI's work on interpretability and learning dynamics (☆2,740, updated Nov 15, 2025)
- open source interpretability platform 🧠 (☆729, updated this week)
- Tools for studying developmental interpretability in neural networks (☆125, updated Dec 30, 2025)
- Performant framework for training, analyzing and visualizing Sparse Autoencoders (SAEs) and their frontier variants (☆194, updated Feb 18, 2026)
- A library for efficient patching and automatic circuit discovery (☆90, updated Dec 31, 2025)
- Emergent world representations: Exploring a sequence model trained on a synthetic task (☆202, updated Jul 12, 2023)
- (unnamed repository) (☆2,618, updated this week)
- Erasing concepts from neural representations with provable guarantees (☆243, updated Jan 27, 2025)