ebagdasa / adversarial_illusions
Code for "Adversarial Illusions in Multi-Modal Embeddings"
☆18 · Updated 7 months ago
Alternatives and similar repositories for adversarial_illusions:
Users interested in adversarial_illusions are comparing it to the repositories listed below.
- ☆19 · Updated 11 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆44 · Updated 2 months ago
- ☆29 · Updated 2 years ago
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" ☆23 · Updated 6 months ago
- Repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspective ☆22 · Updated last year
- ☆33 · Updated 9 months ago
- ☆19 · Updated 6 months ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆87 · Updated 6 months ago
- ☆19 · Updated 3 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 4 months ago
- ☆30 · Updated 3 years ago
- ☆27 · Updated 2 months ago
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection" ☆55 · Updated last year
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ☆17 · Updated 5 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 ☆17 · Updated 7 months ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2023) ☆50 · Updated this week
- ☆31 · Updated 8 months ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 · Updated 6 months ago
- Official implementation for "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… ☆13 · Updated 3 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- ☆23 · Updated last year
- ☆13 · Updated 3 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms" ☆17 · Updated last year
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆35 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆51 · Updated 11 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆28 · Updated 3 weeks ago
- ☆12 · Updated 10 months ago
- Code for the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI" ☆26 · Updated 10 months ago
- ☆93 · Updated last year