ndif-team / nnsight
The nnsight package enables interpreting and manipulating the internals of deep learning models.
☆458 · Updated this week
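As quick orientation, a minimal sketch of the kind of intervention nnsight enables is below; the checkpoint name, layer index, and module path are illustrative assumptions for a GPT-2 model, not part of this listing.

```python
# Minimal nnsight sketch: trace a forward pass and save an internal activation.
# Assumes nnsight >= 0.2; the module path (transformer.h[...]) is GPT-2 specific.
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # GPT-2 blocks return a tuple; index 0 holds the residual-stream states.
    hidden = model.transformer.h[5].output[0].save()

# The saved proxy resolves once the trace exits (older versions: hidden.value).
print(hidden.shape)
```

The same proxy mechanism also supports in-place edits to activations inside the trace, which is what makes the library useful for the ablation and steering experiments several repositories below target.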
Alternatives and similar repositories for nnsight:
Users interested in nnsight are comparing it to the libraries listed below.
- Mechanistic Interpretability Visualizations using React ☆219 · Updated 3 weeks ago
- Training Sparse Autoencoders on Language Models ☆573 · Updated this week
- ☆201 · Updated 3 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆209 · Updated 5 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆207 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆176 · Updated last month
- ☆179 · Updated this week
- Sparse autoencoders ☆407 · Updated this week
- ☆114 · Updated last year
- ☆404 · Updated 5 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆206 · Updated 11 months ago
- ☆412 · Updated this week
- ☆131 · Updated 3 months ago
- ☆106 · Updated 5 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆78 · Updated last month
- ☆135 · Updated this week
- Steering Llama 2 with Contrastive Activation Addition ☆113 · Updated 7 months ago
- Tools for understanding how transformer predictions are built layer by layer ☆459 · Updated 7 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆305 · Updated 2 months ago
- ☆184 · Updated 10 months ago
- Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions ☆677 · Updated 2 weeks ago
- ☆258 · Updated 10 months ago
- ☆53 · Updated 2 months ago
- METR Task Standard ☆135 · Updated 2 weeks ago
- A library for mechanistic interpretability of GPT-style language models (see the sketch after this list) ☆1,751 · Updated this week
- Decoder-only transformer, built from scratch with PyTorch ☆26 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆82 · Updated 3 weeks ago
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆82 · Updated this week
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆13 · Updated last month
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs). ☆194 · Updated this week
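For comparison with nnsight above, the highest-starred entry here (the library for mechanistic interpretability of GPT-style language models, which by its description appears to be TransformerLens) exposes internals through an activation cache rather than a tracing context. A hedged sketch, assuming the `transformer_lens` package and a GPT-2 checkpoint; the hook name is illustrative:

```python
# TransformerLens-style sketch: run a prompt and cache every activation.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

# run_with_cache returns the final logits plus a cache keyed by hook name.
logits, cache = model.run_with_cache("The Eiffel Tower is in the city of")
print(logits.shape)
print(cache["blocks.5.hook_resid_post"].shape)  # residual stream after block 5
```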