EleutherAI / sparsify
Sparsify transformers with SAEs and transcoders
☆568 · Updated this week
Alternatives and similar repositories for sparsify
Users interested in sparsify are comparing it to the repositories listed below:
- Training Sparse Autoencoders on Language Models ☆837 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆250 · Updated 11 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆202 · Updated 6 months ago
- ☆490 · Updated 11 months ago
- ☆307 · Updated last month
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆593 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆253 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆255 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆184 · Updated last week
- ☆226 · Updated 8 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆358 · Updated 7 months ago
- ☆129 · Updated 7 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆754 · Updated 2 weeks ago
- ☆101 · Updated 2 weeks ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆500 · Updated last year
- ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs). ☆272 · Updated last week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆216 · Updated last year
- ☆173 · Updated 2 months ago
- ☆121 · Updated last year
- ☆119 · Updated 10 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆106 · Updated 4 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆231 · Updated last week
- ☆163 · Updated 7 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆121 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆158 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. ☆190 · Updated 7 months ago
- ☆134 · Updated 2 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆212 · Updated last year
- Open source interpretability artefacts for R1. ☆149 · Updated 2 months ago
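The SAEs that sparsify and many of the repositories above revolve around are sparse autoencoders: small networks trained to rewrite a model's internal activations as a sparse, non-negative feature code whose decoder directions are hoped to be interpretable. Below is a minimal NumPy sketch of the forward pass and training objective; all names, dimensions, and the random weights are hypothetical stand-ins (a real SAE is trained on activations captured from a transformer, and libraries like sparsify use their own architectures and APIs):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64  # hypothetical sizes; real SAEs are usually much wider

# Hypothetical encoder/decoder weights, random for illustration only
W_enc = rng.normal(scale=0.1, size=(d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(scale=0.1, size=(d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into a sparse feature code, then reconstruct them."""
    feats = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU -> sparse, non-negative code
    x_hat = feats @ W_dec + b_dec               # linear reconstruction from features
    return x_hat, feats

acts = rng.normal(size=(8, d_model))  # stand-in for residual-stream activations
recon, feats = sae_forward(acts)

# Typical SAE loss: reconstruction error plus an L1 sparsity penalty on the code
mse = np.mean((recon - acts) ** 2)
l1 = np.abs(feats).mean()
loss = mse + 1e-3 * l1
```

Transcoders follow the same recipe but reconstruct the *output* of a component (e.g. an MLP layer) from its input rather than autoencoding a single activation, so the sketch differs only in what `acts` and the reconstruction target are.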