SAP-archive / ml-model-watermarking
Protect your machine learning models easily and securely with watermarking
★86 · Updated 4 months ago
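Many of the projects listed below build on the trigger-set (backdoor-based) watermarking idea: the owner trains the model to give pre-chosen labels on a secret set of inputs, and later claims ownership if a suspect model reproduces those labels. A minimal sketch of the verification step, assuming nothing about this library's actual API (all names here are hypothetical):

```python
def verify_watermark(predict, trigger_set, threshold=0.9):
    """Claim ownership if the suspect model reproduces the owner's
    secret trigger labels above a threshold."""
    hits = sum(1 for x, y in trigger_set if predict(x) == y)
    return hits / len(trigger_set) >= threshold

# Toy stand-ins for real networks: one "trained" on the trigger set,
# one that never saw it.
trigger_set = [((i, i + 1), i % 3) for i in range(10)]
memorised = dict(trigger_set)

def watermarked_model(x):
    # Reproduces the owner's trigger labels exactly.
    return memorised.get(x, 0)

def unmarked_model(x):
    # Constant prediction: agrees with the triggers only by chance.
    return 0

print(verify_watermark(watermarked_model, trigger_set))  # True
print(verify_watermark(unmarked_model, trigger_set))     # False
```

In practice the trigger set is made hard to guess (e.g. out-of-distribution or perturbed inputs), and the threshold is chosen so that an independently trained model is very unlikely to pass by accident.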
Related projects:
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow. ★23 · Updated 3 years ago
- The official implementation of the IEEE S&P'22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking". ★99 · Updated last year
- Code for our S&P'21 paper: "Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding". ★45 · Updated last year
- The implementation of the paper "How to Prove Your Model Belongs to You: A Blind-Watermark based Framework to Protect Intellectual Property of DNN…". ★23 · Updated 3 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems. ★23 · Updated 3 years ago
- Implementations of data poisoning attacks against neural networks and related defenses. ★63 · Updated 2 months ago
- The official implementation of WEvade. ★33 · Updated 6 months ago
- The official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…". ★47 · Updated 6 months ago
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ ★20 · Updated 11 months ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024. ★25 · Updated last month
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021). ★111 · Updated last month
- Watermarking against model extraction attacks in MLaaS. ACM MM 2021. ★32 · Updated 3 years ago
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow. ★12 · Updated 3 years ago
- Official repository for the CVPR 2021 paper "Data-Free Model Extraction": https://arxiv.org/abs/2011.14779 ★66 · Updated 5 months ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. ★91 · Updated last month
- Invisible Backdoor Attack with Sample-Specific Triggers. ★85 · Updated 2 years ago
- Source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks". ★47 · Updated 4 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient". ★49 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. ★17 · Updated 5 years ago