[ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks
☆30 · Nov 2, 2025 · Updated 4 months ago
Alternatives and similar repositories for BlueSuffix
Users interested in BlueSuffix are comparing it to the repositories listed below.
- Accepted by CVPR 2025 (highlight) ☆22 · Jun 8, 2025 · Updated 8 months ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Oct 23, 2024 · Updated last year
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage" ☆27 · Dec 6, 2024 · Updated last year
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆80 · Jun 6, 2024 · Updated last year
- ☆18 · Mar 30, 2025 · Updated 11 months ago
- ☆73 · Mar 30, 2025 · Updated 11 months ago
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness (MD attacks) ☆11 · Aug 29, 2020 · Updated 5 years ago
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆31 · Dec 30, 2024 · Updated last year
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆17 · Jul 11, 2025 · Updated 7 months ago
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Feb 26, 2025 · Updated last year
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Oct 22, 2024 · Updated last year
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Oct 29, 2025 · Updated 4 months ago
- ☆21 · Oct 25, 2024 · Updated last year
- Code for the ACM MM 2024 paper "Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning" ☆15 · Jul 18, 2024 · Updated last year
- Adversarial Examples Detection Benchmark ☆17 · Dec 6, 2024 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆107 · May 20, 2025 · Updated 9 months ago
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Jun 11, 2025 · Updated 8 months ago
- [ACM MM 2024] ReToMe-VA: Recursive Token Merging for Video Diffusion-based Unrestricted Adversarial Attack ☆14 · Dec 20, 2024 · Updated last year
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Aug 7, 2025 · Updated 6 months ago
- ☆24 · Jul 25, 2024 · Updated last year
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆276 · Feb 2, 2026 · Updated last month
- This repo investigates LLMs' tendency to exhibit acquiescence bias in sequential QA interactions. Includes evaluation methods, datasets, … ☆49 · Sep 23, 2025 · Updated 5 months ago
- The official repository for the guided jailbreak benchmark ☆29 · Jul 28, 2025 · Updated 7 months ago
- ☆25 · Feb 19, 2025 · Updated last year
- ☆23 · Jun 13, 2024 · Updated last year
- [ICLR 2025] PyTorch implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" ☆29 · Jul 20, 2025 · Updated 7 months ago
- ☆59 · Jun 5, 2024 · Updated last year
- [AAAI'25 Oral] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts ☆192 · Jun 26, 2025 · Updated 8 months ago
- [NeurIPS 2024] Lumen: A Large Multimodal Model with Versatile Vision-centric Capabilities ☆25 · Sep 27, 2024 · Updated last year
- ☆26 · Jun 5, 2024 · Updated last year
- Official implementation of the pre-print "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆69 · Oct 23, 2024 · Updated last year
- Official implementation of "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Aug 25, 2024 · Updated last year
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆31 · Nov 19, 2024 · Updated last year
- [ACL 2025 Main] HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States ☆159 · Jun 8, 2025 · Updated 8 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆71 · Feb 9, 2026 · Updated 3 weeks ago
- Irolyn is a jailbreak repo extractor for iOS 18 to iOS 18.5 and iPadOS 18 to iPadOS 18.5 ☆12 · May 15, 2025 · Updated 9 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" ☆355 · Jun 13, 2025 · Updated 8 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆228 · Feb 3, 2026 · Updated last month