The official repository for guided jailbreak benchmark
☆28Jul 28, 2025Updated 7 months ago
Alternatives and similar repositories for AI-Safety_Benchmark
Users that are interested in AI-Safety_Benchmark are comparing it to the libraries listed below
Sorting:
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns☆13Mar 1, 2025Updated last year
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation☆59Updated this week
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …☆35Oct 23, 2024Updated last year
- A fast + lightweight implementation of the GCG algorithm in PyTorch☆318May 13, 2025Updated 9 months ago
- Test LLMs against jailbreaks and unprecedented harms☆40Oct 19, 2024Updated last year
- Accept by CVPR 2025 (highlight)☆22Jun 8, 2025Updated 8 months ago
- ☆21Oct 25, 2024Updated last year
- ☆56May 21, 2025Updated 9 months ago
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage"☆27Dec 6, 2024Updated last year
- ☆18Mar 30, 2025Updated 11 months ago
- ☆23Jan 17, 2025Updated last year
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector"☆47Oct 13, 2025Updated 4 months ago
- Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization☆21Dec 13, 2024Updated last year
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024☆19Feb 12, 2025Updated last year
- Improved techniques for optimization-based jailbreaking on large language models (ICLR2025)☆142Apr 7, 2025Updated 10 months ago
- official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries☆63Nov 10, 2025Updated 3 months ago
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794…☆23Jul 26, 2024Updated last year
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks☆30Nov 2, 2025Updated 4 months ago
- 网络安全 LLM 智能体应用教程☆29Mar 2, 2025Updated last year
- ☆59Jun 5, 2024Updated last year
- [ICML 2025] An official source code for paper "FlipAttack: Jailbreak LLMs via Flipping".☆165May 2, 2025Updated 10 months ago
- 全球AI攻防挑战赛—赛道一:大模型生图安全疫苗注入第二名解题方案☆26Nov 7, 2024Updated last year
- NeurIPS'24 - LLM Safety Landscape☆39Oct 21, 2025Updated 4 months ago
- ☆121Feb 3, 2025Updated last year
- Do you want to learn AI Security but don't know where to start ? Take a look at this map.☆29Apr 23, 2024Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆69Oct 23, 2024Updated last year
- ☆105Aug 11, 2025Updated 6 months ago
- ☆31Jan 26, 2025Updated last year
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling☆34Nov 8, 2024Updated last year
- ☆39May 17, 2025Updated 9 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur…☆88May 9, 2025Updated 9 months ago
- This repository is a curated list of papers and open source code about competition for CV Adversarial Attack.☆27Jan 9, 2023Updated 3 years ago
- ☆33Jun 24, 2024Updated last year
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang…☆153Sep 2, 2025Updated 6 months ago
- ☆10Sep 28, 2020Updated 5 years ago
- Build an AI bot in Discord to serve user's personalized reports on what's up in tech☆28Sep 14, 2025Updated 5 months ago
- ☆164Sep 2, 2024Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]☆377Jan 23, 2025Updated last year
- ☆12Jul 8, 2024Updated last year