The official repository for the guided jailbreak benchmark
☆29 · Updated Jul 28, 2025
Alternatives and similar repositories for AI-Safety_Benchmark
Users interested in AI-Safety_Benchmark are comparing it to the repositories listed below.
- Code implementation of the Adversarial Prompt Evaluation paper ☆14 · Updated Sep 18, 2025
- A unified benchmark and toolbox for multimodal jailbreak attack–defense evaluation ☆63 · Updated Mar 2, 2026
- ☆22 · Updated Oct 25, 2024
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Updated Aug 25, 2024
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆322 · Updated May 13, 2025
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆168 · Updated May 2, 2025
- ☆21 · Updated Jul 26, 2025
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns ☆13 · Updated Mar 1, 2025
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Updated Oct 23, 2024
- ☆124 · Updated Feb 3, 2025
- Improved techniques for optimization-based jailbreaking of large language models (ICLR 2025) ☆141 · Updated Apr 7, 2025
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" ☆113 · Updated Oct 11, 2024
- Test LLMs against jailbreaks and unprecedented harms ☆40 · Updated Oct 19, 2024
- Accepted by CVPR 2025 (highlight) ☆24 · Updated Jun 8, 2025
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆47 · Updated Oct 13, 2025
- ☆25 · Updated Jan 17, 2025
- ☆18 · Updated Mar 30, 2025
- ☆19 · Updated May 14, 2025
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs) ☆66 · Updated Jan 19, 2026
- Official implementation of [USENIX Sec'25] "StruQ: Defending Against Prompt Injection with Structured Queries" ☆65 · Updated Nov 10, 2025
- [AAAI 2024] Official implementation of "Evaluate Geometry of Radiance Fields with Low-frequency Color Prior" ☆17 · Updated Jun 25, 2024
- Revisiting Character-level Adversarial Attacks for Language Models (ICML 2024) ☆19 · Updated Feb 12, 2025
- ☆33 · Updated Jun 24, 2024
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆31 · Updated Nov 2, 2025
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Updated Aug 7, 2025
- Do you want to learn AI security but don't know where to start? Take a look at this map ☆31 · Updated Apr 23, 2024
- DSN jailbreak attack & evaluation ensemble ☆17 · Updated Feb 7, 2026
- Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization ☆22 · Updated Dec 13, 2024
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024) ☆18 · Updated Oct 22, 2024
- ☆124 · Updated Dec 3, 2025
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts ☆188 · Updated Apr 1, 2025
- ☆60 · Updated Jun 5, 2024
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling ☆35 · Updated Nov 8, 2024
- [NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuoh… ☆13 · Updated Apr 3, 2023
- A tutorial on applying LLM agents to cybersecurity (网络安全 LLM 智能体应用教程) ☆29 · Updated Mar 2, 2025
- Prompt generator model for Stable Diffusion models ☆12 · Updated Jun 20, 2023
- ☆131 · Updated Jul 7, 2025
- Code for the NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" ☆12 · Updated Nov 7, 2024
- All-in-one benchmarking platform for evaluating LLMs ☆15 · Updated Nov 12, 2025