Prisma-Multimodal / ViT-Prisma
ViT Prisma is a mechanistic interpretability library for Vision and Video Transformers (ViTs).
☆306 · Updated last month
Alternatives and similar repositories for ViT-Prisma
Users interested in ViT-Prisma are comparing it to the libraries listed below.
- ☆341 · Updated 3 weeks ago
- Sparsify transformers with SAEs and transcoders ☆613 · Updated last week
- Sparse Autoencoder for Mechanistic Interpretability ☆262 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆219 · Updated 8 months ago
- ☆240 · Updated 11 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆655 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆285 · Updated 8 months ago
- ☆127 · Updated last year
- ☆517 · Updated last year
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆208 · Updated last week
- Reliable, minimal and scalable library for pretraining foundation and world models ☆58 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆267 · Updated last year
- ☆121 · Updated last month
- ☆603 · Updated 5 months ago
- ☆106 · Updated 7 months ago
- ☆54 · Updated 9 months ago
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- ☆185 · Updated last month
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆225 · Updated last month
- Tools for understanding how transformer predictions are built layer-by-layer ☆524 · Updated last month
- Training Sparse Autoencoders on Language Models ☆958 · Updated this week
- Editing Models with Task Arithmetic ☆498 · Updated last year
- ☆166 · Updated 9 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆157 · Updated 2 months ago
- ☆53 · Updated 9 months ago
- Bootstrapping ARC ☆143 · Updated 9 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆291 · Updated last month
- Attribution-based Parameter Decomposition ☆30 · Updated 3 months ago
- Open-source framework for the research and development of foundation models. ☆419 · Updated this week
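Several of the repositories above center on sparse autoencoders (SAEs) for mechanistic interpretability. As background, here is a minimal NumPy sketch of the core idea: encode model activations into an overcomplete, non-negative feature space, reconstruct, and penalize feature magnitude to encourage sparsity. All names and dimensions are illustrative, not any particular library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_sae = 8, 32                       # activation width, overcomplete SAE width
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features, then linearly reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU encoder -> non-negative codes
    x_hat = f @ W_dec + b_dec                # linear decoder
    return f, x_hat

x = rng.normal(size=(4, d_model))            # a batch of "model activations"
f, x_hat = sae_forward(x)

mse = np.mean((x - x_hat) ** 2)              # reconstruction term
l1 = np.mean(np.abs(f))                      # sparsity penalty on feature codes
loss = mse + 1e-3 * l1
print(f.shape, x_hat.shape)                  # (4, 32) (4, 8)
```

Training would minimize `loss` over a large corpus of activations; the SAE-focused repositories above differ mainly in architecture variants (e.g. transcoders), scaling, and tooling around this objective.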