AoiDragon / HADES
[ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models"
☆21 · Updated 3 months ago
Alternatives and similar repositories for HADES:
Users interested in HADES are comparing it to the libraries listed below.
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆38 · Updated 8 months ago
- ECSO (Make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572) ☆20 · Updated 3 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆43 · Updated last month
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆76 · Updated last year
- ☆40 · Updated last year
- ☆27 · Updated 2 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆52 · Updated 3 weeks ago
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆33 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆48 · Updated 10 months ago
- ☆29 · Updated 7 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi…" ☆51 · Updated 7 months ago
- [ECCV 2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks ☆27 · Updated 6 months ago
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆37 · Updated 3 months ago
- ☆34 · Updated 2 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆40 · Updated last month
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆25 · Updated last year
- A collection of awesome papers I have read (carefully or roughly) in the field of security in diffusion models. Any suggestions … ☆24 · Updated 3 months ago
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆22 · Updated 2 months ago
- The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆69 · Updated 3 months ago
- ☆10 · Updated this week
- Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Generator, Gener… ☆13 · Updated 3 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆24 · Updated 3 months ago
- ☆58 · Updated 4 months ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ☆51 · Updated last year
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆119 · Updated 3 months ago
- A list of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions ☆57 · Updated 6 months ago
- ☆20 · Updated 5 months ago
- The official repo of the paper "Latent Guard: a Safety Framework for Text-to-image Generation" ☆47 · Updated 3 months ago
- An official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆44 · Updated 3 months ago
- ☆40 · Updated 6 months ago