jbloomAus / SAELens
Training Sparse Autoencoders on Language Models
☆761 · Updated this week
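SAELens and most of the libraries below train sparse autoencoders on a model's internal activations: encode into an overcomplete feature dictionary, decode back, and trade off reconstruction error against a sparsity penalty. As a minimal illustrative sketch (plain NumPy, not SAELens's actual API; all names here are made up for the example), the core forward pass and loss look like:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32  # activation dim, (overcomplete) dictionary size
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x, l1_coeff=1e-3):
    """Encode activations into sparse features, decode, return loss."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU feature activations
    x_hat = f @ W_dec + b_dec               # reconstruction of the input
    mse = np.mean((x - x_hat) ** 2)         # reconstruction loss
    l1 = l1_coeff * np.abs(f).sum(axis=-1).mean()  # sparsity penalty
    return x_hat, f, mse + l1

x = rng.normal(size=(4, d_model))  # batch of stand-in activations
x_hat, feats, loss = sae_forward(x)
print(x_hat.shape, feats.shape)    # (4, 8) (4, 32)
```

In practice the libraries listed here add the pieces this sketch omits: streaming real activations from a hooked language model, decoder-weight normalization, dead-feature resampling, and gradient-based training of the dictionary.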
Alternatives and similar repositories for SAELens
Users interested in SAELens are comparing it to the libraries listed below.
- The nnsight package enables interpreting and manipulating the internals of deep learned models. · ☆563 · Updated this week
- Sparsify transformers with SAEs and transcoders · ☆526 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability · ☆246 · Updated 9 months ago
- ☆290 · Updated this week
- Mechanistic Interpretability Visualizations using React · ☆245 · Updated 4 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆199 · Updated 4 months ago
- Using sparse coding to find distributed representations used by neural networks. · ☆242 · Updated last year
- ☆462 · Updated 9 months ago
- A library for mechanistic interpretability of GPT-style language models · ☆2,148 · Updated this week
- ☆223 · Updated 7 months ago
- Tools for understanding how transformer predictions are built layer-by-layer · ☆490 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ☆172 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs · ☆343 · Updated 6 months ago
- ☆167 · Updated last month
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ☆742 · Updated 2 weeks ago
- ☆529 · Updated this week
- ☆93 · Updated last month
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. · ☆211 · Updated last year
- ☆114 · Updated 9 months ago
- ☆121 · Updated last year
- ☆112 · Updated 5 months ago
- Steering Llama 2 with Contrastive Activation Addition · ☆151 · Updated 11 months ago
- Representation Engineering: A Top-Down Approach to AI Transparency · ☆829 · Updated 9 months ago
- ☆206 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". · ☆217 · Updated 7 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. · ☆113 · Updated this week
- ☆265 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. · ☆182 · Updated 6 months ago
- ☆152 · Updated 5 months ago
- Locating and editing factual associations in GPT (NeurIPS 2022) · ☆633 · Updated last year