mitvis / saliency-cards
Saliency Cards are transparency documentation for saliency methods. Learn about new saliency methods or document your own!
☆18 · Updated 2 years ago
Alternatives and similar repositories for saliency-cards
Users interested in saliency-cards are comparing it to the libraries listed below.
- Erasing concepts from neural representations with provable guarantees ☆243 · Updated last year
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Huggingface Transformers ☆42 · Updated 11 months ago
- ☆284 · Updated last year
- Experiments with representation engineering ☆13 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆217 · Updated 2 weeks ago
- ☆132 · Updated 2 years ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆140 · Updated 11 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆567 · Updated 6 months ago
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆21 · Updated last year
- ☆116 · Updated 11 months ago
- Repository for PURE: Turning Polysemantic Neurons Into Pure Features by Identifying Relevant Circuits, accepted at CVPR 2024 XAI4CV Works… ☆19 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆207 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆88 · Updated last month
- ☆261 · Updated 10 months ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆100 · Updated 2 years ago
- PAIR.withgoogle.com and friend's work on interpretability methods ☆220 · Updated this week
- Code for "On Measuring Faithfulness of Natural Language Explanations" ☆21 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆240 · Updated last year
- ☆267 · Updated last year
- ☆112 · Updated 11 months ago
- Mechanistic Interpretability Visualizations using React ☆320 · Updated last year
- we got you bro ☆37 · Updated last year
- ☆28 · Updated 2 years ago
- ☆83 · Updated 11 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆329 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆72 · Updated 2 years ago
- Unified access to Large Language Model modules using NNsight ☆87 · Updated last week