ebagdasa / adversarial_illusions
Code for "Adversarial Illusions in Multi-Modal Embeddings"
☆18 · Updated 6 months ago
Alternatives and similar repositories for adversarial_illusions:
Users interested in adversarial_illusions are comparing it to the repositories listed below.
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) · ☆25 · Updated 3 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) · ☆33 · Updated 2 years ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" · ☆41 · Updated last month
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… · ☆22 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning · ☆32 · Updated 2 years ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) · ☆18 · Updated 10 months ago
- The official implementation of USENIX Security'23 paper "Meta-Sift": ten minutes or less to find a 1000-size or larger clean subset on … · ☆18 · Updated last year
- ☆30 · Updated 7 months ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 · ☆86 · Updated 5 months ago
- ☆27 · Updated last month
- ☆31 · Updated 8 months ago
- Camouflage poisoning via machine unlearning · ☆16 · Updated 2 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? · ☆32 · Updated 5 months ago
- [CVPR 2023] The official implementation of "Detecting Backdoors During the Inference Stage Based on Corruption Robust… · ☆21 · Updated last year
- Code for the IEEE S&P 2024 paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification" · ☆30 · Updated 6 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) · ☆14 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu · ☆26 · Updated 5 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers · ☆16 · Updated 3 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" · ☆26 · Updated this week
- ☆31 · Updated 2 years ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆26 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ☆49 · Updated 10 months ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆18 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆18 · Updated this week
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" · ☆22 · Updated 5 months ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" · ☆54 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… · ☆38 · Updated 3 months ago