The official repository for the guided jailbreak benchmark
☆29 · Jul 28, 2025 · Updated 9 months ago
Alternatives and similar repositories for AI-Safety_Benchmark
Users interested in AI-Safety_Benchmark are comparing it to the libraries listed below.
- Code implementation of the Adversarial Prompt Evaluation paper ☆14 · Sep 18, 2025 · Updated 7 months ago
- ☆22 · Oct 25, 2024 · Updated last year
- ☆129 · Feb 3, 2025 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Aug 25, 2024 · Updated last year
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆330 · May 13, 2025 · Updated 11 months ago
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆172 · May 2, 2025 · Updated last year
- ☆21 · Jul 26, 2025 · Updated 9 months ago
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns ☆14 · Mar 1, 2025 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Oct 23, 2024 · Updated last year
- Test LLMs against jailbreaks and unprecedented harms ☆39 · Oct 19, 2024 · Updated last year
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…" ☆112 · Oct 11, 2024 · Updated last year
- Accepted by CVPR 2025 (highlight) ☆25 · Jun 8, 2025 · Updated 10 months ago
- ☆25 · Jan 17, 2025 · Updated last year
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage" ☆33 · Dec 6, 2024 · Updated last year
- ☆18 · Mar 30, 2025 · Updated last year
- [ICLR 2026] Official code for "Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models" ☆26 · Feb 7, 2026 · Updated 2 months ago
- ☆20 · May 14, 2025 · Updated 11 months ago
- ☆65 · May 21, 2025 · Updated 11 months ago
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs). ☆67 · Mar 23, 2026 · Updated last month
- Code repo of the paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794) ☆24 · Jul 26, 2024 · Updated last year
- Official implementation of [USENIX Sec'25] "StruQ: Defending Against Prompt Injection with Structured Queries" ☆71 · Nov 10, 2025 · Updated 5 months ago
- [AAAI 2024] Official implementation of "Evaluate Geometry of Radiance Fields with Low-frequency Color Prior" ☆17 · Jun 25, 2024 · Updated last year
- Revisiting Character-level Adversarial Attacks for Language Models, ICML 2024 ☆19 · Feb 12, 2025 · Updated last year
- ☆33 · Jun 24, 2024 · Updated last year
- Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization ☆22 · Dec 13, 2024 · Updated last year
- DSN Jailbreak Attack & Evaluation Ensemble ☆17 · Feb 7, 2026 · Updated 2 months ago
- The artifact repository for FUEL ☆15 · Apr 24, 2026 · Updated last week
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆17 · Aug 7, 2025 · Updated 8 months ago
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024) ☆18 · Oct 22, 2024 · Updated last year
- ☆131 · Dec 3, 2025 · Updated 5 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts ☆191 · Apr 1, 2025 · Updated last year
- ☆60 · Jun 5, 2024 · Updated last year
- ☆69 · Jun 1, 2025 · Updated 11 months ago
- [NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuoh… ☆13 · Apr 3, 2023 · Updated 3 years ago
- A tutorial on applying LLM agents to cybersecurity ☆29 · Mar 2, 2025 · Updated last year
- Prompt generator model for Stable Diffusion models ☆12 · Jun 20, 2023 · Updated 2 years ago
- ☆136 · Jul 7, 2025 · Updated 9 months ago
- Code for the NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" ☆12 · Nov 7, 2024 · Updated last year
- Blogs that I'm actively following ☆15 · Sep 17, 2023 · Updated 2 years ago