ZJZAC / awesome-deep-model-IP-protection
☆40 · Updated 3 years ago
Alternatives and similar repositories for awesome-deep-model-IP-protection
Users interested in awesome-deep-model-IP-protection are comparing it to the repositories listed below.
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 · ☆32 · Updated 11 months ago
- Official implementation of "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE… · ☆20 · Updated 4 months ago
- ☆24 · Updated 2 years ago
- The official implementation of the IEEE S&P'22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking" · ☆116 · Updated 2 years ago
- ☆20 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers · ☆97 · Updated 3 years ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ · ☆24 · Updated last year
- ☆44 · Updated last year
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… · ☆18 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆20 · Updated 3 years ago
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… · ☆39 · Updated 9 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) · ☆127 · Updated 8 months ago
- [AAAI 2024] Data-Free Hard-Label Robustness Stealing Attack · ☆15 · Updated last year
- ☆24 · Updated last year
- ☆18 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" · ☆29 · Updated 5 months ago
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" · ☆25 · Updated 3 months ago
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… · ☆57 · Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" · ☆53 · Updated 2 years ago
- ☆82 · Updated 4 years ago
- Defending against Model Stealing via Verifying Embedded External Features · ☆36 · Updated 3 years ago
- All code and data necessary to replicate experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…" · ☆11 · Updated 10 months ago
- Official repo to reproduce the CVPR 2023 paper "How to Backdoor Diffusion Models?" · ☆92 · Updated 3 months ago
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024) · ☆16 · Updated 9 months ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks · ☆17 · Updated 6 years ago
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification · ☆29 · Updated 7 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) · ☆34 · Updated last month