An up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources.
★542 · May 8, 2026 · Updated this week
Alternatives and similar repositories for Awesome-LVLM-Attack
Users interested in Awesome-LVLM-Attack are comparing it to the repositories listed below.
- Accepted by IJCAI-24 Survey Track (★231 · Aug 25, 2024 · Updated last year)
- ★56 · Dec 7, 2024 · Updated last year
- ★76 · Mar 30, 2025 · Updated last year
- The official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2023) (★68 · Mar 22, 2025 · Updated last year)
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models (★72 · Aug 7, 2025 · Updated 9 months ago)
- ★60 · Jun 5, 2024 · Updated last year
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" (★275 · May 13, 2024 · Updated last year)
- Code for the ACM MM 2024 paper "White-box Multimodal Jailbreaks Against Large Vision-Language Models" (★32 · Dec 30, 2024 · Updated last year)
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" (★36 · Jun 1, 2025 · Updated 11 months ago)
- A reading list for large-model safety, security, and privacy (including Awesome LLM Security, Safety, etc.) (★1,950 · May 2, 2026 · Updated last week)
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models (★317 · Jan 11, 2026 · Updated 4 months ago)
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems (★226 · Dec 22, 2024 · Updated last year)
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models (★81 · Jun 6, 2024 · Updated last year)
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models (★158 · Feb 19, 2026 · Updated 2 months ago)
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) (★65 · Nov 5, 2025 · Updated 6 months ago)
- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety (★263 · Apr 12, 2026 · Updated 3 weeks ago)
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… (★1,841 · Updated this week)
- A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Surv… (★201 · Feb 6, 2026 · Updated 3 months ago)
- ★168 · Sep 2, 2024 · Updated last year
- Accepted by ECCV 2024 (★204 · Oct 15, 2024 · Updated last year)
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" (★29 · Jan 25, 2025 · Updated last year)
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images (★42 · Jan 25, 2024 · Updated 2 years ago)
- [NeurIPS 2025 & ICML 2025 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… (★95 · Feb 3, 2026 · Updated 3 months ago)
- TransferAttack is a PyTorch framework to boost adversarial transferability for image classification (★467 · Feb 27, 2026 · Updated 2 months ago)
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models (★61 · Apr 8, 2024 · Updated 2 years ago)
- [ICLR 2025] Dissecting Adversarial Robustness of Multimodal Language Model Agents (★138 · Feb 19, 2025 · Updated last year)
- ★21 · Jan 15, 2024 · Updated 2 years ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models" (★40 · Oct 17, 2024 · Updated last year)
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory (★31 · Nov 15, 2025 · Updated 5 months ago)
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" (★68 · Mar 20, 2023 · Updated 3 years ago)
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models" (★35 · Oct 23, 2024 · Updated last year)
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting" (★73 · Feb 9, 2026 · Updated 3 months ago)
- [CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks" (★60 · Jul 5, 2025 · Updated 10 months ago)
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts (★202 · Jun 26, 2025 · Updated 10 months ago)
- Repository for the paper "Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection" (★19 · Apr 17, 2026 · Updated 3 weeks ago)
- ★28 · Mar 16, 2025 · Updated last year
- ★27 · Jun 5, 2024 · Updated last year
- ★63 · Aug 11, 2024 · Updated last year
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models (★87 · Jan 19, 2025 · Updated last year)