EleutherAI / sparsify
Sparsify transformers with SAEs and transcoders
☆515 · Updated this week
Alternatives and similar repositories for sparsify:
Users interested in sparsify are comparing it to the libraries listed below.
- Training Sparse Autoencoders on Language Models ☆724 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- ☆274 · Updated 2 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆239 · Updated 9 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆547 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆241 · Updated 4 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆233 · Updated last year
- ☆451 · Updated 9 months ago
- ☆217 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆167 · Updated this week
- ☆121 · Updated last year
- ☆83 · Updated last week
- This repository collects all relevant resources about interpretability in LLMs ☆339 · Updated 5 months ago
- ☆159 · Updated last week
- Tools for understanding how transformer predictions are built layer-by-layer ☆485 · Updated 10 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆732 · Updated last week
- ☆114 · Updated 8 months ago
- ☆96 · Updated 5 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆94 · Updated last month
- Extract full next-token probabilities via language model APIs ☆241 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆143 · Updated 10 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆208 · Updated 6 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆108 · Updated last week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆176 · Updated 5 months ago
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs). ☆218 · Updated this week
- ☆264 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆209 · Updated last year
- ☆36 · Updated 5 months ago
- ☆202 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆227 · Updated 2 months ago