Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM
☆39, updated Jan 17, 2025
Alternatives and similar repositories for CoA
Users interested in CoA are comparing it to the repositories listed below.
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" (☆66, updated Aug 25, 2024)
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" (☆22, updated May 6, 2025)
- Red Queen dataset and data generation template (☆27, updated Dec 26, 2025)
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" (☆36, updated Oct 23, 2024)
- Accepted by CVPR 2025 as a highlight (☆24, updated Jun 8, 2025)
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM (☆85, updated Nov 3, 2024)
- [COLING 2025] Official repo of the paper: "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jail… (☆12, updated Jul 26, 2024)
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" (☆113, updated Oct 11, 2024)
- The most comprehensive and accurate LLM jailbreak attack benchmark by far (☆22, updated Mar 22, 2025)
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" (☆14, updated Dec 16, 2024)
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" (☆15, updated Aug 7, 2025)
- Official implementation of the implicit reference attack (☆11, updated Oct 16, 2024)
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns (☆13, updated Mar 1, 2025)
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" (☆14, updated Nov 17, 2023)
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang…" (☆155, updated Sep 2, 2025)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆98, updated May 23, 2024)
- TAP: An automated jailbreaking method for black-box LLMs (☆224, updated Dec 10, 2024)
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" (☆70, updated Oct 23, 2024)
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking (☆12, updated Aug 22, 2025)
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) (☆65, updated Jan 11, 2025)
- Q&A dataset for many-shot jailbreaking (☆14, updated Jul 19, 2024)
- Code for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems" (☆13, updated Dec 13, 2024)
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting (☆20, updated Mar 25, 2024)
- This is the official GitHub repo for our paper "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang…" (☆22, updated Jul 3, 2024)
- This is the official code repository for the paper "Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantizati…" (☆14, updated Sep 21, 2025)
- The official implementation of the paper "Large Scale Knowledge Washing" (☆10, updated Jun 12, 2024)
- On the Robustness of GUI Grounding Models Against Image Attacks (☆12, updated Apr 8, 2025)
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" (☆17, updated Sep 2, 2024)