A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation
☆68 · Mar 2, 2026 · Updated 2 months ago
Alternatives and similar repositories for OmniSafeBench-MM
Users interested in OmniSafeBench-MM are comparing it to the libraries listed below.
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆62 · Apr 29, 2026 · Updated last week
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆64 · Nov 5, 2025 · Updated 6 months ago
- Code for "Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks" (TIFS 2024) ☆13 · Mar 29, 2024 · Updated 2 years ago
- Emoji Attack [ICML 2025] ☆42 · Jul 15, 2025 · Updated 9 months ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆24 · Jul 26, 2024 · Updated last year
- ☆11 · Dec 18, 2024 · Updated last year
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆43 · Apr 21, 2024 · Updated 2 years ago
- Prompt generator model for Stable Diffusion models ☆12 · Jun 20, 2023 · Updated 2 years ago
- ☆22 · Oct 25, 2024 · Updated last year
- ☆14 · Feb 26, 2025 · Updated last year
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ☆15 · Apr 8, 2025 · Updated last year
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆17 · Aug 7, 2025 · Updated 9 months ago
- PyTorch implementation of NPAttack ☆12 · Jul 7, 2020 · Updated 5 years ago
- Code for "Boosting Fast Adversarial Training with Learnable Adversarial Initialization" (TIP 2022) ☆29 · Aug 22, 2023 · Updated 2 years ago
- An implementation of the penalty-based bilevel gradient descent (PBGD) algorithm and the iterative differentiation (ITD/RHG) methods ☆19 · Feb 13, 2023 · Updated 3 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Jan 3, 2023 · Updated 3 years ago
- Code for "When LLM Meets DRL: Advancing Jailbreaking Efficiency via DRL-guided Search" (NeurIPS 2024) ☆18 · Oct 22, 2024 · Updated last year
- Official PyTorch implementation of "Query-Efficient Black-Box Red Teaming via Bayesian Optimization" (ACL 2023) ☆15 · Jul 9, 2023 · Updated 2 years ago
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Jul 10, 2022 · Updated 3 years ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆89 · Feb 26, 2025 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆28 · Dec 21, 2025 · Updated 4 months ago
- Accepted by CVPR 2025 (highlight) ☆25 · Jun 8, 2025 · Updated 11 months ago
- ☆29 · Mar 20, 2024 · Updated 2 years ago
- Code for "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds" (CVPR 2024) ☆35 · Mar 23, 2024 · Updated 2 years ago
- [CVPR 2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" ☆28 · Jun 11, 2025 · Updated 10 months ago
- Papers from our SoK on red-teaming (accepted at TMLR) ☆43 · May 2, 2026 · Updated last week
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs; empirical tricks for LLM jailbreaking (NeurIPS 2024) ☆162 · Nov 30, 2024 · Updated last year
- Code for "Transferable Unlearnable Examples" ☆22 · Mar 11, 2023 · Updated 3 years ago
- Hyperbolic Safety-Aware Vision-Language Models (CVPR 2025) ☆30 · Apr 8, 2025 · Updated last year
- [NeurIPS 2021] "Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training" ☆32 · Jan 9, 2022 · Updated 4 years ago
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆36 · Jun 1, 2025 · Updated 11 months ago
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for Multimodal Large Language Models" ☆25 · Nov 10, 2024 · Updated last year
- Code for the CVPR 2020 paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" ☆12 · Jul 13, 2020 · Updated 5 years ago
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆32 · Dec 30, 2024 · Updated last year
- ☆109 · Feb 16, 2024 · Updated 2 years ago
- Accepted by ECCV 2024 ☆203 · Oct 15, 2024 · Updated last year
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆73 · Feb 9, 2026 · Updated 3 months ago
- The official implementation for SETA (TIP 2024) ☆11 · Feb 17, 2025 · Updated last year
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆29 · Oct 20, 2025 · Updated 6 months ago