jiawangbai / BadCLIP
Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf
☆20 · Updated last year
Alternatives and similar repositories for BadCLIP:
Users interested in BadCLIP are comparing it to the repositories listed below.
- [CVPR 2023] Official implementation of "Detecting Backdoors During the Inference Stage Based on Corruption Robust…" ☆23 · Updated last year
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆26 · Updated 2 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆33 · Updated last year
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆15 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆17 · Updated 6 months ago
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification ☆31 · Updated last year
- [CVPR 2024] Code for "Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?" ☆17 · Updated last year
- Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Generator, Gener… ☆17 · Updated 6 months ago
- [ICCV 2023] Official implementation of "Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio…" ☆23 · Updated last year
- [ICCV 2023] AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models ☆18 · Updated last year
- [NeurIPS 2023] Code repo for "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆23 · Updated 7 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu ☆26 · Updated 8 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆28 · Updated 2 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 2 years ago
- Code for Transferable Unlearnable Examples ☆19 · Updated 2 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 7 months ago
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" ☆11 · Updated 7 months ago
- [ICCV 2023 Oral] Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models ☆60 · Updated last year
- [CVPR 2023] Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks ☆17 · Updated last year
- A curated list of papers on the transferability of adversarial examples ☆65 · Updated 9 months ago
- [NeurIPS 2022] Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation ☆33 · Updated 2 years ago
- [TMLR 08/2024] APBench: A Unified Availability Poisoning Attack and Defenses Benchmark ☆30 · Updated 3 weeks ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models ☆66 · Updated last month