Aegis1863 / xJailbreak
Code of paper "xJailbreak: Representation Space Guided Reinforcement Learning for Interpretable LLM Jailbreaking"
☆16 · Updated 10 months ago
Alternatives and similar repositories for xJailbreak
Users interested in xJailbreak are comparing it to the repositories listed below.
- The code repo of the paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Compromising Usability" ☆37 · Updated last month
- ☆118 · Updated 11 months ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆187 · Updated 6 months ago
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆66 · Updated last year
- ☆26 · Updated last year
- ☆68 · Updated 9 months ago
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆30 · Updated 2 months ago
- The most comprehensive and accurate LLM jailbreak attack benchmark by far ☆21 · Updated 9 months ago
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆166 · Updated last year
- [NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆109 · Updated last year
- [ACL24] Official Repo of Paper `ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs` ☆90 · Updated 4 months ago
- ☆159 · Updated last year
- Awesome Jailbreak, red teaming arXiv papers (automatically updated every 12 hours) ☆81 · Updated last week
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily" ☆150 · Updated 4 months ago
- ☆55 · Updated last year
- A new algorithm that formulates jailbreaking as a reasoning problem. ☆26 · Updated 6 months ago
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI. ☆55 · Updated 2 years ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆60 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆103 · Updated 7 months ago
- Accepted by ECCV 2024 ☆179 · Updated last year
- [ICLR 2024] The official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆419 · Updated 11 months ago
- ☆190 · Updated 2 years ago
- ☆61 · Updated 7 months ago
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆337 · Updated last year
- Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models ☆27 · Updated 3 months ago
- A survey on harmful fine-tuning attacks for large language models ☆229 · Updated last week
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆506 · Updated 9 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆79 · Updated last week
- Accepted by IJCAI-24 Survey Track ☆226 · Updated last year