Prisma-Multimodal / ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
☆218 · Updated this week
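For context, ViT-Prisma follows the TransformerLens pattern of wrapping vision models with activation hooks so intermediate activations can be cached and inspected. The sketch below is illustrative only: the import path (`vit_prisma.models.base_vit`), the `HookedViT.from_pretrained` and `run_with_cache` entry points, the model identifier, and the cache key are assumptions to verify against the repository's README.

```python
# Minimal illustrative sketch: load a hooked ViT and cache its activations.
# ASSUMPTIONS: the import path, model identifier, and hook-point names below are
# guesses based on ViT-Prisma's TransformerLens-style design; check the repo's
# README for the actual entry points.
import torch
from vit_prisma.models.base_vit import HookedViT  # assumed module path

# Load a pretrained ViT wrapped with activation hooks (model name is illustrative).
model = HookedViT.from_pretrained("vit_base_patch16_224")

# Run a dummy image batch and cache all intermediate activations.
images = torch.randn(1, 3, 224, 224)
logits, cache = model.run_with_cache(images)

# Inspect one cached activation, e.g. the residual stream after the first block
# (TransformerLens-style cache key naming is assumed here).
print(cache["blocks.0.hook_resid_post"].shape)
```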
Alternatives and similar repositories for ViT-Prisma:
Users interested in ViT-Prisma are comparing it to the libraries listed below.
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆239 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆166 · Updated this week
- Sparsify transformers with SAEs and transcoders ☆511 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆239 · Updated 3 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆230 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆209 · Updated last year
- WIP ☆93 · Updated 8 months ago
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆78 · Updated last month
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆539 · Updated last week
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- Sparse Autoencoder Training Library ☆47 · Updated 5 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆118 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆228 · Updated 2 months ago
- Editing Models with Task Arithmetic ☆464 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆485 · Updated 10 months ago
- Sparse and discrete interpretability tool for neural networks ☆62 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆103 · Updated 4 months ago
- Training Sparse Autoencoders on Language Models ☆724 · Updated this week
- 👋 Overcomplete is a Vision-based SAE Toolbox ☆51 · Updated 3 weeks ago
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆89 · Updated this week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆49 · Updated 5 months ago