[ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models"
☆39 · Oct 17, 2024 · Updated last year
Alternatives and similar repositories for HADES
Users that are interested in HADES are comparing it to the libraries listed below.
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal… ☆81 · Jun 6, 2024 · Updated last year
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆270 · May 13, 2024 · Updated last year
- [CVPRW 2025] Official repository of the paper "Towards Evaluating the Robustness of Visual State Space Models" ☆26 · Jun 8, 2025 · Updated 10 months ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Oct 22, 2024 · Updated last year
- ☆27 · Jun 5, 2024 · Updated last year
- We introduce a new approach, Token Reduction using CLIP Metric (TRIM), aimed at improving the efficiency of MLLMs without sacrificing their… ☆22 · Jan 11, 2026 · Updated 3 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Oct 23, 2024 · Updated last year
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆86 · Jan 19, 2025 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆27 · Aug 27, 2024 · Updated last year
- [ECCVW 2024 Oral] Official repository of the paper "Makeup-Guided Facial Privacy Protection via Untrained Neural Network Priors" ☆12 · Oct 11, 2024 · Updated last year
- [BMVC 2024] On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models ☆15 · Nov 1, 2024 · Updated last year
- ☆55 · Dec 7, 2024 · Updated last year
- [CVPR 2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" ☆27 · Jun 11, 2025 · Updated 10 months ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆17 · Jul 11, 2025 · Updated 9 months ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆16 · Aug 7, 2025 · Updated 8 months ago
- ☆22 · Oct 25, 2024 · Updated last year
- Accepted by ECCV 2024 ☆200 · Oct 15, 2024 · Updated last year
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" ☆32 · Dec 30, 2024 · Updated last year
- ☆99 · Oct 15, 2023 · Updated 2 years ago
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) ☆13 · Mar 8, 2024 · Updated 2 years ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆72 · Sep 6, 2023 · Updated 2 years ago
- ☆11 · Oct 29, 2024 · Updated last year
- ECSO (Make MLLMs safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆36 · Nov 2, 2024 · Updated last year
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆37 · Jul 10, 2025 · Updated 9 months ago
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆31 · Nov 2, 2025 · Updated 5 months ago
- An Open-source Factuality Evaluation Demo for LLMs ☆32 · Feb 23, 2026 · Updated last month
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆87 · Nov 28, 2023 · Updated 2 years ago
- ☆75 · Mar 30, 2025 · Updated last year
- [CVPR 2023] Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection