nrimsky / CAA
Steering Llama 2 with Contrastive Activation Addition
☆124 · Updated 9 months ago
Alternatives and similar repositories for CAA:
Users interested in CAA are comparing it to the repositories listed below.
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆89 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆88 · Updated last week
- Algebraic value editing in pretrained language models ☆62 · Updated last year
- A library for efficient patching and automatic circuit discovery ☆54 · Updated 2 weeks ago
- ☆144 · Updated this week
- ☆78 · Updated 6 months ago
- ☆207 · Updated 5 months ago
- ☆110 · Updated 6 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆64 · Updated 3 months ago
- ☆153 · Updated this week
- For the OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research ☆94 · Updated this week
- Function Vectors in Large Language Models (ICLR 2024) ☆140 · Updated 4 months ago
- Using sparse coding to find distributed representations used by neural networks ☆217 · Updated last year
- ☆89 · Updated last year
- Full code for the sparse probing paper ☆53 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" ☆183 · Updated 5 months ago
- A resource repository for representation engineering in large language models ☆107 · Updated 3 months ago
- ☆120 · Updated last year
- ☆190 · Updated last year
- General-purpose activation steering library ☆45 · Updated last month
- Code for the NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery" ☆29 · Updated 9 months ago
- ☆58 · Updated this week
- ☆57 · Updated 3 months ago
- ☆88 · Updated 2 weeks ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆71 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- Code for the ICLR 2024 paper "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions" ☆65 · Updated 8 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆187 · Updated 5 months ago
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆68 · Updated 11 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆105 · Updated 8 months ago