Vinsonzyh / BlueSuffix
[ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks
☆30 · Nov 2, 2025 · Updated 3 months ago
Alternatives and similar repositories for BlueSuffix
Users interested in BlueSuffix are comparing it to the repositories listed below.
- Accepted by CVPR 2025 (highlight) · ☆22 · Jun 8, 2025 · Updated 8 months ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆34 · Oct 23, 2024 · Updated last year
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage" · ☆26 · Dec 6, 2024 · Updated last year
- [ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… · ☆79 · Jun 6, 2024 · Updated last year
- ☆18 · Mar 30, 2025 · Updated 10 months ago
- ☆71 · Mar 30, 2025 · Updated 10 months ago
- Code for Transferable Unlearnable Examples · ☆22 · Mar 11, 2023 · Updated 2 years ago
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness. (MD attacks) · ☆11 · Aug 29, 2020 · Updated 5 years ago
- Code for the ICCV2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" · ☆17 · Jul 11, 2025 · Updated 7 months ago
- [ICLR2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining · ☆19 · Feb 26, 2025 · Updated 11 months ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) · ☆19 · Oct 22, 2024 · Updated last year
- [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image · ☆36 · Oct 29, 2025 · Updated 3 months ago
- The code for ACM MM2024 (Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning) · ☆15 · Jul 18, 2024 · Updated last year
- ☆21 · Oct 25, 2024 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" · ☆106 · May 20, 2025 · Updated 8 months ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆38 · Oct 17, 2024 · Updated last year
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment · ☆27 · Jun 11, 2025 · Updated 8 months ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" · ☆15 · Aug 7, 2025 · Updated 6 months ago
- ☆24 · Feb 19, 2025 · Updated 11 months ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models · ☆274 · Feb 2, 2026 · Updated last week
- ☆23 · Jun 13, 2024 · Updated last year
- [ICLR 2025] PyTorch Implementation of "ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time" · ☆30 · Jul 20, 2025 · Updated 6 months ago
- ☆57 · Jun 5, 2024 · Updated last year
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆191 · Jun 26, 2025 · Updated 7 months ago
- [NeurIPS 2024] Lumen: a Large multimodal model with versatile vision-centric capabilities · ☆25 · Sep 27, 2024 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" · ☆68 · Oct 23, 2024 · Updated last year
- ☆26 · Jun 5, 2024 · Updated last year
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers · ☆65 · Aug 25, 2024 · Updated last year
- ECCV2024: Adversarial Prompt Tuning for Vision-Language Models · ☆30 · Nov 19, 2024 · Updated last year
- ACL 2025 (Main) HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States · ☆158 · Jun 8, 2025 · Updated 8 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… · ☆70 · Updated this week
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" · ☆340 · Jun 13, 2025 · Updated 8 months ago
- Irolyn is a jailbreak repo extractor for iOS 18 to iOS 18.5 and iPadOS 18 to iPadOS 18.5. · ☆12 · May 15, 2025 · Updated 8 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety · ☆225 · Feb 3, 2026 · Updated last week
- EventHallusion: Diagnosing Event Hallucinations in Video LLMs · ☆34 · Aug 5, 2025 · Updated 6 months ago
- Strongest attack against Feature Scatter and Adversarial Interpolation · ☆25 · Dec 26, 2019 · Updated 6 years ago
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" · ☆36 · Dec 18, 2024 · Updated last year
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? · ☆42 · Nov 1, 2024 · Updated last year
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… · ☆152 · Sep 2, 2025 · Updated 5 months ago