ndif-team / nnsight
The nnsight package enables interpreting and manipulating the internals of deep learning models.
☆ 490 · Updated this week
Alternatives and similar repositories for nnsight:
Users interested in nnsight are comparing it to the libraries listed below.
- Mechanistic Interpretability Visualizations using React ☆ 232 · Updated 2 months ago
- Training Sparse Autoencoders on Language Models ☆ 619 · Updated this week
- ☆ 243 · Updated last week
- Sparse Autoencoder for Mechanistic Interpretability ☆ 216 · Updated 7 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆ 182 · Updated 2 months ago
- ☆ 203 · Updated 4 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆ 213 · Updated last year
- Sparsify transformers with SAEs and transcoders ☆ 461 · Updated this week
- ☆ 116 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆ 208 · Updated last year
- ☆ 421 · Updated 7 months ago
- ☆ 151 · Updated this week
- ☆ 109 · Updated 6 months ago
- ☆ 456 · Updated this week
- ☆ 142 · Updated 3 weeks ago
- This repository collects all relevant resources about interpretability in LLMs ☆ 321 · Updated 3 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆ 698 · Updated this week
- ☆ 262 · Updated 11 months ago
- Steering Llama 2 with Contrastive Activation Addition ☆ 123 · Updated 8 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆ 112 · Updated 2 years ago
- A library for mechanistic interpretability of GPT-style language models ☆ 1,868 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆ 88 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆ 475 · Updated 8 months ago
- ☆ 190 · Updated 11 months ago
- Tools for studying developmental interpretability in neural networks. ☆ 84 · Updated 3 weeks ago
- ☆ 55 · Updated 3 months ago
- ☆ 52 · Updated this week
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs). ☆ 204 · Updated this week
- Extract full next-token probabilities via language model APIs ☆ 229 · Updated 11 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆ 92 · Updated this week