erfanshayegani / Jailbreak-In-Pieces
[ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
☆38 · Updated 8 months ago
Alternatives and similar repositories for Jailbreak-In-Pieces:
Users interested in Jailbreak-In-Pieces are comparing it to the repositories listed below
- ☆27 · Updated 2 months ago
- Accepted by ECCV 2024 · ☆99 · Updated 4 months ago
- ☆40 · Updated 6 months ago
- ☆34 · Updated 2 months ago
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… · ☆45 · Updated 3 months ago
- ☆30 · Updated 7 months ago
- ☆31 · Updated 8 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆16 · Updated 3 months ago
- A package that achieves a 95%+ transfer attack success rate against GPT-4 · ☆17 · Updated 3 months ago
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… · ☆21 · Updated 2 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ☆41 · Updated last month
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆22 · Updated 4 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆26 · Updated last year
- Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Generator, Gener… · ☆13 · Updated 4 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models · ☆55 · Updated last month
- ☆19 · Updated 5 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" · ☆75 · Updated last year
- ☆38 · Updated last month
- ECSO (Make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572) · ☆21 · Updated 3 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ☆49 · Updated 10 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆111 · Updated 2 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… · ☆51 · Updated 7 months ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] · ☆53 · Updated last year
- ☆65 · Updated 6 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models · ☆44 · Updated 2 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆47 · Updated 7 months ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models · ☆124 · Updated 2 weeks ago
- ☆92 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu · ☆26 · Updated 5 months ago
- [NeurIPS-2023] Annual Conference on Neural Information Processing Systems · ☆179 · Updated last month