[NeurIPS 2023] Annual Conference on Neural Information Processing Systems
☆228 · Updated Dec 22, 2024
Alternatives and similar repositories for AttackVLM
Users that are interested in AttackVLM are comparing it to the libraries listed below
- ☆109 · Updated Feb 16, 2024
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆72 · Updated Sep 6, 2023
- Repository for the paper "Visual Adversarial Examples Jailbreak Large Language Models" (AAAI 2024 Oral) ☆266 · Updated May 13, 2024
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) ☆66 · Updated Mar 22, 2025
- ☆55 · Updated Dec 7, 2024
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆65 · Updated Mar 20, 2023
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆50 · Updated Nov 5, 2025
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆86 · Updated Nov 28, 2023
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆66 · Updated Aug 7, 2025
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆157 · Updated Feb 19, 2026
- Official repository of "Unnatural Language Are Not Bugs but Features for LLMs" ☆24 · Updated May 20, 2025
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆30 · Updated Jun 12, 2025
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Updated Mar 26, 2024
- ☆59 · Updated Jun 5, 2024
- 😎 An up-to-date, curated list of attacks on Large Vision-Language Models: papers, methods & resources ☆505 · Updated Feb 17, 2026
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆146 · Updated Jul 31, 2023
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …