Prisma-Multimodal / ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs).
☆246 · Updated last week
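Mechanistic interpretability libraries like ViT-Prisma work by exposing a model's intermediate activations for inspection. The underlying pattern can be sketched with plain PyTorch forward hooks; the toy model and names below are illustrative assumptions, not ViT-Prisma's actual API:

```python
import torch
import torch.nn as nn

# Minimal sketch of hook-based activation capture, the mechanism
# interpretability libraries build on: register forward hooks that
# record each block's output into a cache during the forward pass.
# (Toy two-block model; real libraries hook ViT attention/MLP layers.)

class TinyBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.mlp(x))

model = nn.Sequential(TinyBlock(8), TinyBlock(8))
cache = {}

def save_hook(name):
    # Hook signature is (module, inputs, output); detach so cached
    # tensors don't keep the autograd graph alive.
    def hook(module, inputs, output):
        cache[name] = output.detach()
    return hook

for i, block in enumerate(model):
    block.register_forward_hook(save_hook(f"block{i}"))

x = torch.randn(2, 8)
out = model(x)  # running the model fills `cache` as a side effect
```

After one forward pass, `cache` holds the per-block activations, which can then be patched, visualized, or fed to a sparse autoencoder.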
Alternatives and similar repositories for ViT-Prisma
Users interested in ViT-Prisma are comparing it to the libraries listed below.
- ☆302 · Updated 2 weeks ago
- Sparsify transformers with SAEs and transcoders ☆547 · Updated this week
- Sparse Autoencoder for Mechanistic Interpretability ☆248 · Updated 10 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆200 · Updated 5 months ago
- ☆121 · Updated last year
- Using sparse coding to find distributed representations used by neural networks. ☆247 · Updated last year
- ☆222 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆180 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆251 · Updated 5 months ago
- ☆96 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆126 · Updated 3 weeks ago
- ☆93 · Updated 3 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆574 · Updated this week
- ☆42 · Updated 6 months ago
- ☆480 · Updated 10 months ago
- ☆170 · Updated last month
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆125 · Updated 2 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆213 · Updated last year
- WIP ☆93 · Updated 9 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆237 · Updated 3 months ago
- Decoder-only transformer, built from scratch with PyTorch ☆30 · Updated last year
- ☆120 · Updated 6 months ago
- Official implementation of MAIA, a Multimodal Automated Interpretability Agent ☆80 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆189 · Updated last year
- Training Sparse Autoencoders on Language Models ☆789 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆493 · Updated 11 months ago
- 🧠 Starter templates for doing interpretability research ☆70 · Updated last year
- ☆116 · Updated 9 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆353 · Updated 7 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆127 · Updated last year
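Several entries above center on sparse autoencoders (SAEs), which learn an overcomplete dictionary of features from model activations. A minimal sketch of the core architecture, under the common ReLU-encoder + L1-penalty formulation (the class and hyperparameters here are illustrative, not any listed library's API):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Maps d_model activations to a wider d_hidden feature space
    # (overcomplete dictionary), then reconstructs the input.
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        feats = torch.relu(self.enc(x))  # non-negative feature activations
        return self.dec(feats), feats

sae = SparseAutoencoder(d_model=16, d_hidden=64)
x = torch.randn(4, 16)  # stand-in for cached model activations
recon, feats = sae(x)

# Training objective: reconstruction error plus an L1 penalty that
# pushes most feature activations to zero (the "sparse" part).
l1_coeff = 1e-3  # illustrative value; tuned per model in practice
loss = ((recon - x) ** 2).mean() + l1_coeff * feats.abs().sum(-1).mean()
```

In practice the input `x` would be activations harvested from a transformer layer, and the learned decoder columns are interpreted as candidate features.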