Manu21JC / DataElixir
[AAAI 2024] DataElixir: Purifying Poisoned Dataset to Mitigate Backdoor Attacks via Diffusion Models
☆11 · Updated 9 months ago
Alternatives and similar repositories for DataElixir
Users interested in DataElixir are comparing it with the repositories listed below.
- ☆21 · Updated last year
- ☆27 · Updated 2 years ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 · ☆92 · Updated last week
- ☆41 · Updated 11 months ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" · ☆31 · Updated last month
- The official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process" · ☆21 · Updated 7 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) · ☆34 · Updated 2 months ago
- ☆25 · Updated 2 years ago
- A collection of resources on attacks and defenses targeting text-to-image diffusion models · ☆74 · Updated 6 months ago
- Defending against Model Stealing via Verifying Embedded External Features · ☆38 · Updated 3 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers · ☆99 · Updated 3 years ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models · ☆45 · Updated 8 months ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ · ☆24 · Updated 2 years ago
- [NeurIPS'23] Official code repo: Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability · ☆110 · Updated last year
- A list of recent papers about adversarial learning · ☆214 · Updated this week
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification" (IEEE S&P 2024) · ☆33 · Updated last year
- Code of the paper "IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Gene…" · ☆35 · Updated last year
- ☆43 · Updated last year
- Code release for DeepJudge (S&P'22) · ☆51 · Updated 2 years ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack · ☆15 · Updated last year
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network · ☆30 · Updated 11 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) · ☆21 · Updated last year
- ☆59 · Updated 2 years ago
- The official implementation of the IEEE S&P'22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking" · ☆116 · Updated 2 years ago
- ☆116 · Updated this week
- Revisiting Transferable Adversarial Images (TPAMI 2025) · ☆130 · Updated 2 weeks ago
- ☆223 · Updated last month
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang, and Sijia Liu · ☆26 · Updated last year
- A comprehensive toolbox for model inversion attacks and defenses that is easy to get started with · ☆182 · Updated 6 months ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings" · ☆27 · Updated last year