jbloomAus / SAELens
Training Sparse Autoencoders on Language Models
☆637 · Updated this week
Alternatives and similar repositories for SAELens:
Users interested in SAELens are comparing it to the libraries listed below.
- Sparsify transformers with SAEs and transcoders ☆476 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆503 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆235 · Updated 2 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆217 · Updated 7 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆183 · Updated 2 months ago
- ☆429 · Updated 7 months ago
- ☆246 · Updated 2 weeks ago
- Using sparse coding to find distributed representations used by neural networks. ☆217 · Updated last year
- ☆207 · Updated 5 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆477 · Updated 9 months ago
- ☆153 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs ☆322 · Updated 4 months ago
- ☆120 · Updated last year
- ☆144 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆706 · Updated last week
- ☆110 · Updated 6 months ago
- Extract full next-token probabilities via language model APIs ☆231 · Updated last year
- ☆468 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆160 · Updated 3 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆207 · Updated last year
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆94 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆124 · Updated 9 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆88 · Updated last week
- A library for mechanistic interpretability of GPT-style language models ☆1,901 · Updated this week
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆186 · Updated 5 months ago
- ☆191 · Updated last year
- Representation Engineering: A Top-Down Approach to AI Transparency ☆795 · Updated 6 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆187 · Updated 5 months ago
- ☆262 · Updated last year
- ☆58 · Updated this week