Aegis1863 / xJailbreak
Code of paper: "xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking"
☆13 · Updated 5 months ago
Alternatives and similar repositories for xJailbreak
Users interested in xJailbreak are comparing it to the repositories listed below
- The code repo of paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Compromising Usa… ☆35 · Updated 5 months ago
- The most comprehensive and accurate LLM jailbreak attack benchmark by far ☆21 · Updated 5 months ago
- ☆23 · Updated last year
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Updated 10 months ago
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆21 · Updated 4 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆167 · Updated 2 months ago
- ☆60 · Updated 5 months ago
- Code for ICLR 2025 "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆31 · Updated 3 months ago
- Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models ☆18 · Updated 3 months ago
- A new algorithm that formulates jailbreaking as a reasoning problem. ☆23 · Updated 2 months ago
- ☆101 · Updated 7 months ago
- ☆35 · Updated 11 months ago
- [ACL24] Official Repo of Paper `ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs` ☆80 · Updated 2 weeks ago
- ☆60 · Updated 3 months ago
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆58 · Updated last year
- ☆20 · Updated last year
- Awesome Jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆53 · Updated last week
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆161 · Updated last year
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆184 · Updated 6 months ago
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… ☆128 · Updated this week
- ☆30 · Updated 3 months ago
- ☆147 · Updated last year
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆216 · Updated 3 weeks ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆55 · Updated last year
- Accepted by IJCAI-24 Survey Track ☆212 · Updated last year
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆73 · Updated 3 months ago
- ☆19 · Updated last week
- Accepted by ECCV 2024 ☆149 · Updated 10 months ago
- ☆31 · Updated 4 months ago