adversarial-for-goodness / Co-Attack
Official PyTorch implementation of "Towards Adversarial Attack on Vision-Language Pre-training Models"
☆65 · Updated Mar 20, 2023
Alternatives and similar repositories for Co-Attack
Users interested in Co-Attack are comparing it to the repositories listed below.
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ☆71 · Updated Sep 6, 2023
- ☆20 · Updated Jan 15, 2024
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2… ☆66 · Updated Mar 22, 2025
- [ECCV 2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajector… ☆30 · Updated Nov 15, 2025
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆35 · Updated Jul 10, 2025
- [ICCV 2023] Structure Invariant Transformation for better Adversarial Transferability ☆25 · Updated Feb 23, 2024
- [NeurIPS 2023] Annual Conference on Neural Information Processing Systems ☆227 · Updated Dec 22, 2024
- ☆74 · Updated Jan 21, 2026
- ☆20 · Updated Feb 3, 2025
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. ☆22 · Updated Feb 23, 2024
- ☆109 · Updated Feb 16, 2024
- ☆55 · Updated Dec 7, 2024
- Official codebase for "Image Hijacks: Adversarial Images Can Control Generative Models at Runtime" ☆54 · Updated Sep 19, 2023
- TransferAttack is a PyTorch framework for boosting the adversarial transferability of image classification attacks. ☆440 · Updated Jan 16, 2026
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆46 · Updated Nov 5, 2025
- ☆26 · Updated Dec 23, 2021
- ECCV 2024: Adversarial Prompt Tuning for Vision-Language Models ☆30 · Updated Nov 19, 2024
- CVPR 2025: AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆66 · Updated Aug 7, 2025
- ☆45 · Updated Jun 11, 2023
- Official PyTorch implementation of "Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization" (CVPR 20… ☆28 · Updated Jul 18, 2023
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima ☆34 · Updated Feb 23, 2024
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆25 · Updated May 29, 2024
- ☆16 · Updated Jul 25, 2022
- Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models" ☆266 · Updated May 13, 2024
- ☆80 · Updated Jul 23, 2024
- Universal Adversarial Perturbations for Vision-Language Pre-trained Models ☆24 · Updated Aug 8, 2025
- Code for the ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- ☆12 · Updated May 6, 2022
- Divide-and-Conquer Attack: Harnessing the Power of LLM to Bypass the Censorship of Text-to-Image Generation Model ☆18 · Updated Feb 16, 2025
- Code for the paper "DifAttack: Query-Efficient Black-Box Attack via Disentangled Feature Space" ☆23 · Updated Feb 10, 2025
- Cross-Modal Transferable Adversarial Attacks from Images to Videos (CVPR 2022) ☆20 · Updated Jul 3, 2024
- [CVPR 2023] Official implementation of the Clean Feature Mixup (CFM) method ☆23 · Updated May 25, 2023
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆37 · Updated Feb 3, 2026
- Generalized Data-free Universal Adversarial Perturbations in PyTorch ☆20 · Updated Oct 9, 2020
- ☆36 · Updated Feb 23, 2024
- ☆23 · Updated Apr 10, 2023
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆79 · Updated Jun 6, 2024
- Repository for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Updated Dec 16, 2024
- Code repository for "Blackbox Attacks via Surrogate Ensemble Search" (BASES), NeurIPS 2022 ☆13 · Updated Aug 6, 2024