ActorAttack: ☆124 · Updated Feb 3, 2025
Alternatives and similar repositories for ActorAttack
Users interested in ActorAttack are comparing it to the repositories listed below.
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆59 · Updated Oct 1, 2025
- ☆19 · Updated May 14, 2025
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLMs ☆39 · Updated Jan 17, 2025
- ☆122 · Updated Dec 3, 2025
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Updated Aug 25, 2024
- Red Queen Dataset and data generation template ☆27 · Updated Dec 26, 2025
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · Updated May 6, 2025
- Welcome to the official repository for Siren, a project aimed at understanding and mitigating harmful behaviors in large language models … ☆15 · Updated Sep 12, 2025
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Updated Jul 9, 2024
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆57 · Updated Jul 21, 2025
- Diagnostic Framework for LLMs and MLLMs ☆34 · Updated Mar 2, 2026
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆546 · Updated Apr 4, 2025
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆11 · Updated Oct 29, 2024
- The official repository for the guided jailbreak benchmark ☆29 · Updated Jul 28, 2025
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆113 · Updated Oct 11, 2024
- [EMNLP 2025] The code repo of the paper "X-Boundary: Establishing Exact Safety Boundary to Shield LLMs from Multi-Turn Jailbreaks without Com… ☆40 · Updated Nov 24, 2025
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆380 · Updated Jan 23, 2025
- ☆59 · Updated May 21, 2025
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 · Updated May 6, 2024
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆168 · Updated May 2, 2025
- ☆704 · Updated Jul 2, 2025
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs). ☆66 · Updated Jan 19, 2026
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆162 · Updated Nov 30, 2024
- ☆25 · Updated Jun 17, 2025
- ☆21 · Updated Jul 26, 2025
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆879 · Updated Aug 16, 2024
- ☆11 · Updated Oct 25, 2024
- ☆23 · Updated Jan 17, 2025
- ☆18 · Updated Apr 7, 2025
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆107 · Updated May 20, 2025
- An easy-to-use Python framework to generate adversarial jailbreak prompts ☆826 · Updated Mar 27, 2025
- ☆26 · Updated Mar 17, 2025
- Implementation of our ICLR 2021 paper "Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples" ☆11 · Updated Mar 9, 2021
- ☆163 · Updated Sep 2, 2024
- Improving Alignment and Robustness with Circuit Breakers ☆259 · Updated Sep 24, 2024
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆321 · Updated May 13, 2025
- [NeurIPS 2025] Official repository of RiOSWorld: Benchmarking the Risk of Multimodal Computer-Use Agents ☆117 · Updated Dec 2, 2025
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated Mar 25, 2024
- ShieldLM: Empowering LLMs as Aligned, Customizable and Explainable Safety Detectors [EMNLP 2024 Findings] ☆226 · Updated Sep 29, 2024