jbloomAus / SAELens
Training Sparse Autoencoders on Language Models
☆935 · Updated this week
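For orientation, here is a minimal sketch of loading a pretrained SAE with SAELens and encoding model activations into its feature space. It assumes an older (pre-v6) API where `SAE.from_pretrained` returns a `(sae, cfg_dict, sparsity)` tuple; newer releases may differ, and the `release`/`sae_id` names are one example from the pretrained registry.

```python
# A minimal sketch, assuming a pre-v6 SAELens API where
# SAE.from_pretrained returns (sae, cfg_dict, sparsity).
import torch
from sae_lens import SAE, HookedSAETransformer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load a pretrained SAE trained on GPT-2 small's layer-8 residual stream.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",       # checkpoint family in the registry
    sae_id="blocks.8.hook_resid_pre",  # which hook point's SAE to load
    device=device,
)

# Run the base model, cache the hook point's activations, and encode
# them into the SAE's much wider, sparse feature space.
model = HookedSAETransformer.from_pretrained("gpt2", device=device)
_, cache = model.run_with_cache("The quick brown fox")
feature_acts = sae.encode(cache[sae.cfg.hook_name])
print(feature_acts.shape)  # [batch, seq, d_sae]
```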
Alternatives and similar repositories for SAELens
Users interested in SAELens are comparing it to the libraries listed below.
- Sparsify transformers with SAEs and transcoders ☆613 · Updated last week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆646 · Updated this week
- ☆335 · Updated 2 weeks ago
- Sparse Autoencoder for Mechanistic Interpretability ☆260 · Updated last year
- ☆515 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆265 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆282 · Updated 8 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆800 · Updated this week
- ☆238 · Updated 11 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆214 · Updated 8 months ago
- A library for mechanistic interpretability of GPT-style language models (see the TransformerLens sketch after this list) ☆2,529 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆521 · Updated 3 weeks ago
- ☆116 · Updated last month
- ☆165 · Updated 9 months ago
- ☆185 · Updated last month
- This repository collects all relevant resources about interpretability in LLMs ☆370 · Updated 10 months ago
- ☆685 · Updated this week
- ☆126 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆207 · Updated this week
- Representation Engineering: A Top-Down Approach to AI Transparency ☆866 · Updated last year
- ☆191 · Updated 9 months ago
- ☆226 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆225 · Updated 3 weeks ago
- ☆53 · Updated 9 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆178 · Updated last year
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆301 · Updated last month
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆262 · Updated 2 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆146 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆198 · Updated 9 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) ☆660 · Updated last year
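As a usage sketch for the GPT-style interpretability library listed above (TransformerLens), the snippet below caches activations on a forward pass and reads out the layer-8 residual stream, the same kind of hook point SAEs are commonly trained on. The prompt and layer choice are illustrative, and details can vary by version.

```python
# A minimal sketch of TransformerLens's HookedTransformer API;
# the prompt and layer choice here are illustrative.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

# Run the model while caching every intermediate activation.
logits, cache = model.run_with_cache("Mechanistic interpretability is")

# Read out the layer-8 residual stream.
resid = cache["resid_pre", 8]  # [batch, seq, d_model]
print(resid.shape)

# Greedy next-token readout from the final position.
print(model.tokenizer.decode(logits[0, -1].argmax().item()))
```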