Prisma-Multimodal / ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
☆231 · Updated this week
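For readers unfamiliar with the term, mechanistic interpretability work on ViTs typically means capturing and analyzing internal activations (attention patterns, MLP outputs, the residual stream). The sketch below is a minimal, library-agnostic illustration of that pattern using plain PyTorch forward hooks on a torchvision ViT; it is not ViT-Prisma's own API, and the model choice and hook names are illustrative only.

```python
# Library-agnostic sketch: cache per-block activations of a ViT with forward hooks.
# Illustrative only -- plain PyTorch/torchvision, not the ViT-Prisma API.
import torch
from torchvision.models import vit_b_16

# weights=None keeps the example offline; swap in pretrained weights
# (ViT_B_16_Weights.DEFAULT) for real analysis.
model = vit_b_16(weights=None).eval()

cache = {}

def save_activation(name):
    def hook(module, inputs, output):
        cache[name] = output.detach()  # [batch, tokens, hidden] after each encoder block
    return hook

# Hook every encoder block to capture its output (the residual stream after that layer).
for i, block in enumerate(model.encoder.layers):
    block.register_forward_hook(save_activation(f"block_{i}"))

with torch.no_grad():
    images = torch.randn(1, 3, 224, 224)  # stand-in for a real image batch
    logits = model(images)

for name, act in cache.items():
    print(name, tuple(act.shape), f"norm={act.norm():.1f}")
```

Interpretability libraries wrap this kind of activation caching in higher-level utilities for visualization and intervention.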
Alternatives and similar repositories for ViT-Prisma:
Users interested in ViT-Prisma are comparing it to the libraries listed below.
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆199 · Updated 4 months ago
- Sparse Autoencoder for Mechanistic Interpretability (a minimal SAE sketch follows this list). ☆243 · Updated 9 months ago
- Mechanistic Interpretability Visualizations using React ☆242 · Updated 4 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆240 · Updated last year
- Sparsify transformers with SAEs and transcoders ☆524 · Updated this week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆170 · Updated last week
- WIP ☆93 · Updated 8 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆490 · Updated 11 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆211 · Updated last year
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent ☆80 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆105 · Updated this week
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" ☆101 · Updated 3 weeks ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models. ☆555 · Updated this week
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated 11 months ago
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆123 · Updated last year
- https://transformer-circuits.pub/2025/attribution-graphs/methods.html ☆43 · Updated last month
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆120 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. ☆195 · Updated 2 months ago
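Several of the entries above center on sparse autoencoders (SAEs), which are trained on cached model activations to decompose them into a larger set of sparsely active features. As a rough reference for the technique, here is a minimal sketch of a vanilla SAE: a linear encoder with ReLU, a linear decoder, and a reconstruction loss plus an L1 sparsity penalty. The architecture, hyperparameters, and names are illustrative and not taken from any of the listed implementations.

```python
# Minimal sketch of a vanilla sparse autoencoder (SAE) over model activations.
# Architecture and hyperparameters are illustrative, not from any listed repo.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)  # overcomplete: d_hidden >> d_model
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        features = torch.relu(self.encoder(x))       # sparse, non-negative feature activations
        recon = self.decoder(features)
        return recon, features

def sae_loss(x, recon, features, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features to zero.
    return (recon - x).pow(2).mean() + l1_coeff * features.abs().sum(dim=-1).mean()

# Toy training loop over random tensors standing in for cached model activations.
sae = SparseAutoencoder(d_model=768, d_hidden=8 * 768)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
for step in range(100):
    acts = torch.randn(256, 768)                     # replace with real cached activations
    recon, feats = sae(acts)
    loss = sae_loss(acts, recon, feats)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In practice the activations come from a hooked model run (as in the earlier sketch) rather than random noise, and the learned features are then analyzed by inspecting which inputs most strongly activate them.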