TransformerLensOrg / TransformerLens
A library for mechanistic interpretability of GPT-style language models
☆2,413 · Updated this week
Alternatives and similar repositories for TransformerLens
Users interested in TransformerLens are comparing it to the libraries listed below.
- Training Sparse Autoencoders on Language Models ☆895 · Updated this week
- ☆634 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆783 · Updated last week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆619 · Updated this week
- Sparsify transformers with SAEs and transcoders ☆595 · Updated this week
- Representation Engineering: A Top-Down Approach to AI Transparency ☆852 · Updated 11 months ago
- ☆503 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆219 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆272 · Updated 7 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- ☆320 · Updated 2 weeks ago
- A bibliography and survey of the papers surrounding o1 ☆1,207 · Updated 8 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆257 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,500 · Updated 5 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆366 · Updated 9 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆261 · Updated last year
- System 2 Reasoning Link Collection ☆848 · Updated 4 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆653 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆809 · Updated this week
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆912 · Updated 2 months ago
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆289 · Updated last week
- ☆233 · Updated 10 months ago
- Language model alignment-focused deep learning curriculum ☆1,433 · Updated 11 months ago
- The hub for EleutherAI's work on interpretability and learning dynamics ☆2,575 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆207 · Updated 7 months ago
- A curated list of Large Language Model (LLM) Interpretability resources. ☆1,385 · Updated last month
- Procedural reasoning datasets ☆998 · Updated this week
- What would you do with 1000 H100s... ☆1,079 · Updated last year
- ☆177 · Updated 8 months ago
- ☆1,027 · Updated last year