andyrdt / refusal_direction
Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction".
☆123 · Updated last month
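For context, the paper's central claim is that refusal behavior corresponds to a single direction in activation space, obtained as the difference of mean activations on harmful versus harmless prompts and removed by projection. Below is a minimal PyTorch sketch of that idea; the function names and tensor shapes are illustrative assumptions, not the repository's actual API.

```python
import torch

# Sketch of the difference-in-means "refusal direction" idea.
# Assumes residual-stream activations have already been collected at some
# layer/position: harmful_acts, harmless_acts have shape (n_prompts, d_model).
# All names here are illustrative placeholders, not the repo's real API.

def refusal_direction(harmful_acts: torch.Tensor,
                      harmless_acts: torch.Tensor) -> torch.Tensor:
    """Unit vector from the harmless-mean to the harmful-mean activation."""
    r = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return r / r.norm()

def ablate_direction(acts: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Project the direction out of activations: x <- x - (x . r) r."""
    return acts - (acts @ r).unsqueeze(-1) * r
```

Per the paper, ablating this direction during the forward pass bypasses refusals on harmful prompts, while adding it induces refusals on harmless ones.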
Related projects
Alternatives and complementary repositories for refusal_direction
- Improving Alignment and Robustness with Circuit Breakers ☆154 · Updated last month
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆65 · Updated last month
- Steering Llama 2 with Contrastive Activation Addition ☆97 · Updated 5 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆99 · Updated last week
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆78 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆195 · Updated 2 weeks ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆161 · Updated last month
- A simple unified framework for evaluating LLMs ☆145 · Updated last week
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆63 · Updated 10 months ago
- Evaluating LLMs with fewer examples ☆134 · Updated 7 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆61 · Updated last week
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆144 · Updated last month
- Function Vectors in Large Language Models (ICLR 2024) ☆119 · Updated last month
- Code accompanying "How I learned to start worrying about prompt formatting". ☆95 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆62 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆84 · Updated 7 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆62 · Updated 5 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆91 · Updated 3 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆89 · Updated 5 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆157 · Updated last month
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆44 · Updated last month
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆84 · Updated 8 months ago
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆158 · Updated 4 months ago