ZJZAC / awesome-deep-model-IP-protection
☆35 · Updated 3 years ago
Alternatives and similar repositories for awesome-deep-model-IP-protection
Users interested in awesome-deep-model-IP-protection are comparing it to the repositories listed below.
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆32 · Updated 9 months ago
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models". ☆23 · Updated last week
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset. ☆49 · Updated last year
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…". ☆55 · Updated last year
- All code and data necessary to replicate the experiments in the paper "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model…". ☆11 · Updated 8 months ago
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. ☆19 · Updated 2 years ago
- ☆20 · Updated last year
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024). ☆16 · Updated 6 months ago
- ☆31 · Updated 3 years ago
- [IEEE S&P 2024] Exploring the Orthogonality and Linearity of Backdoor Attacks. ☆25 · Updated last month
- Official implementation of "Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models" (IE…). ☆18 · Updated last month
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…". ☆36 · Updated 6 months ago
- Official implementation of the paper "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks". ☆18 · Updated last year
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning". ☆28 · Updated 2 months ago
- Official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process". ☆21 · Updated 2 months ago
- [CVPR 2023] Official implementation of the paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust…". ☆23 · Updated last year
- ☆18 · Updated 2 years ago
- Code for the ACM MM 2024 paper "Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning". ☆13 · Updated 9 months ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?", published at CVPR 2023. ☆89 · Updated last week
- ☆19 · Updated 2 years ago
- ☆17 · Updated 3 years ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models. ☆29 · Updated 2 months ago
- ☆44 · Updated last year
- ☆17 · Updated last month
- Invisible Backdoor Attack with Sample-Specific Triggers. ☆93 · Updated 2 years ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings". ☆21 · Updated 9 months ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ ☆23 · Updated last year
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks". ☆17 · Updated 5 months ago
- Up-to-date collection of papers on LLM watermarking. ☆13 · Updated last year
- ☆20 · Updated last year