jbloomAus / SAELens
Training Sparse Autoencoders on Language Models
☆900 · Updated last week
Alternatives and similar repositories for SAELens
Users interested in SAELens are comparing it to the libraries listed below.
- Sparsify transformers with SAEs and transcoders ☆598 · Updated last week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆622 · Updated last week
- ☆326 · Updated 3 weeks ago
- Mechanistic Interpretability Visualizations using React ☆273 · Updated 7 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆257 · Updated last year
- ☆507 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆261 · Updated last year
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆786 · Updated this week
- ☆234 · Updated 10 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆210 · Updated 7 months ago
- A library for mechanistic interpretability of GPT-style language models ☆2,437 · Updated this week
- ☆646 · Updated last week
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- ☆109 · Updated 3 weeks ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆221 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆368 · Updated 9 months ago
- ☆157 · Updated 8 months ago
- ☆125 · Updated last year
- ☆183 · Updated 3 weeks ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆202 · Updated this week
- ☆51 · Updated 8 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency ☆855 · Updated 11 months ago
- ☆180 · Updated 8 months ago
- ☆122 · Updated last year
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆292 · Updated 2 weeks ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆251 · Updated last month
- Decoder-only transformer, built from scratch with PyTorch ☆31 · Updated last year
- ☆274 · Updated last year
- An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning" ☆48 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆170 · Updated last year
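Several of the repositories above (SAELens, the dictionary-learning projects, the "Towards Monosemanticity" implementation) train sparse autoencoders on model activations. As a library-agnostic sketch of the core computation these tools share (all names here, such as `W_enc` and `l1_coeff`, are illustrative assumptions, not any listed project's API):

```python
import numpy as np

# Conceptual sketch of a sparse autoencoder (SAE) forward pass and loss,
# in plain NumPy. Not the API of any library listed above.
rng = np.random.default_rng(0)

d_model, d_sae = 8, 32            # activation dim; overcomplete dictionary size
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into non-negative features, then linearly reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU encoder output
    x_hat = f @ W_dec + b_dec                # decode from the feature dictionary
    return f, x_hat

x = rng.normal(size=(4, d_model))            # a batch of stand-in activations
f, x_hat = sae_forward(x)

# Training minimizes reconstruction error plus an L1 sparsity penalty on f.
l1_coeff = 1e-3
loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).mean()
```

In practice the listed libraries run this over activations captured from a real transformer (e.g. the residual stream) and optimize with gradient descent; the L1 term is what pushes each feature to fire on only a narrow slice of inputs.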