[NeurIPS-2023] Annual Conference on Neural Information Processing Systems
☆227 · Dec 22, 2024 · Updated last year
Alternatives and similar repositories for AttackVLM
Users interested in AttackVLM are comparing it to the repositories listed below.
- ☆109 · Feb 16, 2024 · Updated 2 years ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models [ICCV 2023 Oral] ☆72 · Sep 6, 2023 · Updated 2 years ago
- ☆55 · Dec 7, 2024 · Updated last year
- This is an official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆67 · Mar 22, 2025 · Updated last year
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆270 · May 13, 2024 · Updated last year
- Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models" ☆67 · Mar 20, 2023 · Updated 3 years ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆158 · Feb 19, 2026 · Updated last month
- [CVPR 2025] Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆71 · Aug 7, 2025 · Updated 8 months ago
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆31 · Jun 12, 2025 · Updated 10 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆87 · Nov 28, 2023 · Updated 2 years ago
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆60 · Nov 5, 2025 · Updated 5 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆117 · Mar 26, 2024 · Updated 2 years ago
- ☆46 · Jun 11, 2023 · Updated 2 years ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆61 · Apr 8, 2024 · Updated 2 years ago
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources ☆531 · Apr 6, 2026 · Updated last week
- ☆59 · Jun 5, 2024 · Updated last year
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" ☆24 · May 20, 2025 · Updated 10 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Oct 23, 2024 · Updated last year
- Code of the paper "A Recipe for Watermarking Diffusion Models" ☆152 · Nov 13, 2024 · Updated last year
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆91 · Feb 3, 2026 · Updated 2 months ago
- ☆60 · Aug 11, 2024 · Updated last year
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆145 · Jul 31, 2023 · Updated 2 years ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆200 · Jun 26, 2025 · Updated 9 months ago
- Accepted by ECCV 2024 ☆200 · Oct 15, 2024 · Updated last year
- [CVPR-2023] The IEEE/CVF Conference on Computer Vision and Pattern Recognition ☆24 · Sep 18, 2023 · Updated 2 years ago
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆31 · Nov 15, 2025 · Updated 5 months ago
- Official codebase for Image Hijacks: Adversarial Images can Control Generative Models at Runtime ☆54 · Sep 19, 2023 · Updated 2 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆41 · Feb 10, 2023 · Updated 3 years ago
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆59 · Dec 20, 2024 · Updated last year
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Jan 9, 2022 · Updated 4 years ago
- ☆81 · Jul 23, 2024 · Updated last year
- [TMLR 2025] On Memorization in Diffusion Models ☆31 · Oct 5, 2023 · Updated 2 years ago
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆25 · May 29, 2024 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models" ☆69 · Oct 23, 2024 · Updated last year
- ☆197 · Apr 7, 2025 · Updated last year
- Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks (CVPR 2023) ☆19 · Jun 19, 2023 · Updated 2 years ago
- TransferAttack is a PyTorch framework to boost adversarial transferability for image classification ☆469 · Feb 27, 2026 · Updated last month
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆83 · Oct 23, 2024 · Updated last year