ghua-ac / DNN_Watermark
Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity and deep fidelity.
☆12 · Updated last year
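The papers implemented here use backdoor watermarking: the owner trains the model to memorise a secret "trigger set" of inputs with pre-assigned labels, and later proves ownership by showing a suspect model reproduces those labels far above chance. Below is a minimal hedged sketch of the verification step only, not the papers' exact protocol; `verify_watermark`, the lookup-table "models", and the 0.9 threshold are illustrative assumptions.

```python
def verify_watermark(model, trigger_set, threshold=0.9):
    """Return True if the model reproduces the owner's secret trigger labels.

    model:       callable mapping an input to a predicted label
    trigger_set: list of (input, secret_label) pairs known only to the owner
    threshold:   minimum match rate to claim ownership (assumed value, not
                 taken from the papers)
    """
    matches = sum(1 for x, y in trigger_set if model(x) == y)
    return matches / len(trigger_set) >= threshold

# Toy demonstration: the watermarked "model" memorised the trigger labels,
# the unrelated one systematically predicts a different label.
triggers = [((i, i + 1), i % 10) for i in range(20)]
memorised = dict(triggers)

watermarked_model = lambda x: memorised[x]            # 20/20 matches
unrelated_model = lambda x: (memorised[x] + 1) % 10   # 0/20 matches

print(verify_watermark(watermarked_model, triggers))  # True
print(verify_watermark(unrelated_model, triggers))    # False
```

The unambiguity property the papers target goes beyond this check: it requires that an adversary cannot forge a second trigger set that also passes verification on the same model.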
Related projects
Alternatives and complementary repositories for DNN_Watermark
- ☆16 · Updated last year
- ☆15 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ☆28 · Updated 3 months ago
- Implementation of the paper "How to Prove Your Model Belongs to You: A Blind-Watermark based Framework to Protect Intellectual Property of DNN… ☆23 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- ☆41 · Updated last year
- The official implementation of "Intellectual Property Protection of Diffusion Models via the Watermark Diffusion Process" ☆18 · Updated 11 months ago
- ICCV 2021. We find most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… ☆40 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago
- This is the implementation of our paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆19 · Updated 3 years ago
- ☆24 · Updated last year
- ☆19 · Updated 4 years ago
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow. ☆23 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆27 · Updated 3 years ago
- The official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking". ☆103 · Updated last year
- ☆16 · Updated 6 months ago
- ☆19 · Updated 2 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021) ☆47 · Updated last year
- Invisible Backdoor Attack with Sample-Specific Triggers ☆88 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Updated last year
- ☆30 · Updated 2 years ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" ☆12 · Updated last year
- Code for "Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples" (ACM MM 2020) ☆40 · Updated 4 years ago
- ☆78 · Updated 3 years ago
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- ☆11 · Updated 10 months ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆28 · Updated 2 years ago