Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction".
☆355 · updated Jun 13, 2025
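The linked paper's core claim is that refusal behavior is mediated by a single direction in activation space, and that refusal can be suppressed by projecting this direction out of the model's activations. A minimal NumPy sketch of that directional-ablation idea follows; it is illustrative only, and `ablate_direction` is a hypothetical helper name, not this repository's API:

```python
import numpy as np

def ablate_direction(activation: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of `activation` along `direction`.

    Sketch of directional ablation: a' = a - (a . r_hat) r_hat,
    where r_hat is the unit-norm refusal direction.
    (Hypothetical helper for illustration, not code from this repo.)
    """
    r_hat = direction / np.linalg.norm(direction)
    return activation - (activation @ r_hat) * r_hat

# The ablated activation is orthogonal to the removed direction.
a = np.array([3.0, 4.0])
r = np.array([1.0, 0.0])
print(ablate_direction(a, r))  # → [0. 4.]
```

In the paper this projection is applied to residual-stream activations at every layer, which is what makes a single direction sufficient to bypass refusal.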
Alternatives and similar repositories for refusal_direction
Users interested in refusal_direction are comparing it to the repositories listed below.
- A fast + lightweight implementation of the GCG algorithm in PyTorch (☆319, updated May 13, 2025)
- Steering Llama 2 with Contrastive Activation Addition (☆213, updated May 23, 2024)
- Improving Alignment and Robustness with Circuit Breakers (☆258, updated Sep 24, 2024)
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" (☆174, updated Apr 23, 2025)
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" (☆151, updated Jul 19, 2024)
- Fluent student-teacher redteaming (☆23, updated Jul 25, 2024)
- [ICLR 2025] Official repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" (☆67, updated Jun 9, 2025)
- Sparse Autoencoder Training Library (☆55, updated May 1, 2025)
- Training Sparse Autoencoders on Language Models (☆1,233, updated Feb 27, 2026)
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) (☆23, updated Jul 26, 2024)
- A library for mechanistic interpretability of GPT-style language models (☆3,133, updated this week)
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … (☆244, updated this week)
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications (☆89, updated Mar 30, 2025)
- The nnsight package enables interpreting and manipulating the internals of deep learned models. (☆836, updated this week)
- A resource repository for representation engineering in large language models (☆148, updated Nov 14, 2024)
- A tiny, easily hackable implementation of a feature dashboard. (☆15, updated Oct 21, 2025)
- Code repo for the model organisms and convergent directions of EM papers. (☆53, updated Sep 22, 2025)
- Mechanistic Interpretability Visualizations using React (☆328, updated Dec 18, 2024)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). (☆247, updated Feb 27, 2026)
- [ICLR 2025] General-purpose activation steering library (☆145, updated Sep 18, 2025)
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models (☆90, updated May 2, 2025)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (☆864, updated Aug 16, 2024)
- Tools for understanding how transformer predictions are built layer-by-layer (☆570, updated Aug 7, 2025)
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks (☆30, updated Nov 2, 2025)
- [ICLR 2025] Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks (☆379, updated Jan 23, 2025)
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods (☆168, updated Feb 22, 2026)
- [NeurIPS 2024] Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (☆65, updated Jan 11, 2025)
- Sparsify transformers with SAEs and transcoders (☆699, updated this week)
- A library for efficient patching and automatic circuit discovery. (☆90, updated Dec 31, 2025)
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" (☆107, updated May 20, 2025)