yunqing-me / AttackVLM
[NeurIPS 2023] On Evaluating Adversarial Robustness of Large Vision-Language Models
☆227 · Dec 22, 2024 · Updated last year
Alternatives and similar repositories for AttackVLM
Users interested in AttackVLM are comparing it to the libraries listed below.
- ☆109 · Feb 16, 2024 · Updated last year
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆71 · Sep 6, 2023 · Updated 2 years ago
- Repository for the paper "Visual Adversarial Examples Jailbreak Large Language Models" (AAAI 2024, Oral) ☆266 · May 13, 2024 · Updated last year
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆66 · Mar 22, 2025 · Updated 10 months ago
- ☆55 · Dec 7, 2024 · Updated last year
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆65 · Mar 20, 2023 · Updated 2 years ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆46 · Nov 5, 2025 · Updated 3 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆86 · Nov 28, 2023 · Updated 2 years ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆66 · Aug 7, 2025 · Updated 6 months ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆156 · Jun 5, 2025 · Updated 8 months ago
- The official repository of "Unnatural Languages Are Not Bugs but Features for LLMs" ☆24 · May 20, 2025 · Updated 8 months ago
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29 · Jun 12, 2025 · Updated 8 months ago
- ☆57 · Jun 5, 2024 · Updated last year
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Mar 26, 2024 · Updated last year
- 😎 An up-to-date, curated list of awesome papers, methods, and resources on attacks against Large Vision-Language Models ☆490 · Jan 27, 2026 · Updated 2 weeks ago
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆146 · Jul 31, 2023 · Updated 2 years ago
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Jan 9, 2022 · Updated 4 years ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆34 · Oct 23, 2024 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Apr 8, 2024 · Updated last year
- ☆45 · Jun 11, 2023 · Updated 2 years ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts ☆191 · Jun 26, 2025 · Updated 7 months ago
- Accepted by ECCV 2024 ☆186 · Oct 15, 2024 · Updated last year
- ☆58 · Aug 11, 2024 · Updated last year
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Feb 10, 2023 · Updated 3 years ago
- ☆80 · Jul 23, 2024 · Updated last year
- Official codebase for "Image Hijacks: Adversarial Images Can Control Generative Models at Runtime" ☆54 · Sep 19, 2023 · Updated 2 years ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆58 · Dec 20, 2024 · Updated last year
- [TMLR 2025] On Memorization in Diffusion Models ☆30 · Oct 5, 2023 · Updated 2 years ago
- [ICLR 2025] Dissecting Adversarial Robustness of Multimodal Language Model Agents ☆123 · Feb 19, 2025 · Updated 11 months ago
- [NeurIPS'25 & ICML'25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆87 · Feb 3, 2026 · Updated last week
- Code for the paper "A Recipe for Watermarking Diffusion Models" ☆155 · Nov 13, 2024 · Updated last year
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- Official implementation of the preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆68 · Oct 23, 2024 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆227 · Feb 3, 2026 · Updated last week
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Oct 1, 2022 · Updated 3 years ago
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆37 · Feb 3, 2026 · Updated last week
- List of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions ☆67 · Aug 12, 2024 · Updated last year
- TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification ☆440 · Jan 16, 2026 · Updated last month
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Sep 27, 2024 · Updated last year