neelnanda-io / 1L-Sparse-Autoencoder
☆132 · Updated 2 years ago
Alternatives and similar repositories for 1L-Sparse-Autoencoder
Users interested in 1L-Sparse-Autoencoder are also comparing it to the repositories listed below.
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆228 · Updated 11 months ago
- ☆255 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆302 · Updated 11 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆228 · Updated this week
- ☆367 · Updated 3 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆286 · Updated 2 years ago
- ☆136 · Updated 2 weeks ago
- Sparse Autoencoder for Mechanistic Interpretability ☆284 · Updated last year
- ☆189 · Updated last year
- ☆196 · Updated last month
- ☆130 · Updated last year
- ☆57 · Updated last year
- Sparse Autoencoder Training Library ☆55 · Updated 7 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆62 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆130 · Updated 3 years ago
- A library for efficient patching and automatic circuit discovery. ☆80 · Updated 4 months ago
- Attribution-based Parameter Decomposition ☆32 · Updated 5 months ago
- Sparsify transformers with SAEs and transcoders ☆665 · Updated this week
- Sparse probing paper full code. ☆65 · Updated last year
- ☆283 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆195 · Updated last year
- ☆29 · Updated last year
- ☆110 · Updated 9 months ago
- ☆83 · Updated 9 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆130 · Updated 9 months ago
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" ☆42 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆239 · Updated 10 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆707 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆549 · Updated 3 months ago
- ☆36 · Updated last year