ebagdasa/adversarial_illusions
Code for "Adversarial Illusions in Multi-Modal Embeddings"
☆19 · Updated 5 months ago
Alternatives and similar repositories for adversarial_illusions:
Users interested in adversarial_illusions are comparing it to the repositories listed below.
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 2 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆34 · Updated last year
- ☆28 · Updated 7 months ago
- ☆16 · Updated 8 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆18 · Updated 9 months ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" ☆14 · Updated 2 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆29 · Updated 2 years ago
- Official implementation of "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" ☆53 · Updated 9 months ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆46 · Updated 9 months ago
- Official repo to reproduce the CVPR 2023 paper "How to Backdoor Diffusion Models?" ☆85 · Updated 4 months ago
- ☆30 · Updated 2 years ago
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) ☆41 · Updated 2 months ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆14 · Updated 2 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆32 · Updated 3 months ago
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV 2023…) ☆20 · Updated last year
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu ☆25 · Updated 4 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆36 · Updated this week
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆57 · Updated last year
- Official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated 2 years ago
- Official implementation of the CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consist…" ☆21 · Updated last year
- ☆29 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆17 · Updated 11 months ago
- ☆24 · Updated this week
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" ☆25 · Updated this week
- Repository introducing research topics on protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆18 · Updated last year
- ☆40 · Updated 5 months ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 6 months ago