☆23 · Updated Jun 13, 2024
Alternatives and similar repositories for jailbreak_dynamics
Users interested in jailbreak_dynamics are comparing it to the repositories listed below.
- ☆18 · Updated Mar 30, 2025
- ☆58 · Updated Jun 13, 2024
- Q&A dataset for many-shot jailbreaking · ☆14 · Updated Jul 19, 2024
- ☆13 · Updated Feb 24, 2025
- A repo for LLM jailbreak · ☆14 · Updated Sep 5, 2023
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity · ☆85 · Updated Mar 7, 2025
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" · ☆355 · Updated Jun 13, 2025
- ECSO (make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) · ☆35 · Updated Nov 2, 2024
- ☆33 · Updated Jun 24, 2024
- ☆19 · Updated Mar 5, 2024
- [ICML 24] A novel automated neuron explanation framework that can accurately describe poly-semantic concepts in deep neural networks · ☆14 · Updated May 2, 2025
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models · ☆19 · Updated Aug 17, 2025
- ☆32 · Updated Feb 15, 2026
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" · ☆30 · Updated Jun 23, 2025
- A resource repository for representation engineering in large language models · ☆148 · Updated Nov 14, 2024
- ☆25 · Updated Apr 23, 2024
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794… · ☆23 · Updated Jul 26, 2024
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks · ☆30 · Updated Nov 2, 2025
- Official Repository for The Paper: Safety Alignment Should Be Made More Than Just a Few Tokens Deep · ☆174 · Updated Apr 23, 2025
- ☆24 · Updated Jan 28, 2025
- Code for ICLR 2025 "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" · ☆37 · Updated Jun 1, 2025
- A lightweight library for large language model (LLM) jailbreaking defense · ☆61 · Updated Sep 11, 2025
- This is the official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… · ☆25 · Updated Feb 16, 2026
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization · ☆42 · Updated Jul 28, 2024
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) · ☆76 · Updated Mar 1, 2025
- A collection of homebrew formulae for the different thinking hacker · ☆36 · Updated Dec 25, 2023
- ☆273 · Updated Oct 1, 2024
- Steering Llama 2 with Contrastive Activation Addition · ☆213 · Updated May 23, 2024
- Sparse probing paper full code · ☆67 · Updated Dec 17, 2023
- A fast + lightweight implementation of the GCG algorithm in PyTorch · ☆319 · Updated May 13, 2025
- A library of visualization tools for the interpretability and hallucination analysis of large vision-language models (LVLMs) · ☆41 · Updated May 22, 2025
- Irolyn is a jailbreak repo extractor for iOS 18 to iOS 18.5 and iPadOS 18 to iPadOS 18.5 · ☆12 · Updated May 15, 2025
- FeatureAlignment = Alignment + Mechanistic Interpretability · ☆34 · Updated Mar 8, 2025
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? · ☆42 · Updated Nov 1, 2024
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbre… · ☆52 · Updated Jul 5, 2025
- [EMNLP 2025] The code repo of paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Com… · ☆39 · Updated Nov 24, 2025
- Code for my NeurIPS 2024 ATTRIB paper titled "Attribution Patching Outperforms Automated Circuit Discovery" · ☆47 · Updated May 31, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆535 · Updated Apr 4, 2025
- The official repository for paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" · ☆44 · Updated Apr 21, 2024