Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019.
☆35 · Updated Mar 20, 2019
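For context on what the repository implements: PRADA (Juuti et al., IEEE Euro S&P 2019) detects model extraction by tracking the minimum distance of each incoming query to all previous queries and testing whether those distances remain roughly normally distributed, as they tend to be for benign clients. The sketch below illustrates that stateful idea only; the paper uses a Shapiro-Wilk test, for which a simple skewness check stands in here, and the class name, threshold, and window size are all illustrative, not from the reference implementation.

```python
import math

class PradaDetector:
    """Minimal sketch of PRADA-style stateful detection: flag a client
    whose inter-query minimum distances stop looking normally distributed.
    A skewness bound stands in for the paper's Shapiro-Wilk test."""

    def __init__(self, skew_threshold=1.0, min_samples=20):
        self.skew_threshold = skew_threshold  # illustrative cutoff
        self.min_samples = min_samples        # samples needed before testing
        self.queries = []
        self.min_dists = []

    @staticmethod
    def _l2(a, b):
        # Euclidean distance between two flat query vectors
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def observe(self, x):
        """Record one query vector; return True if the stream looks anomalous."""
        x = tuple(x)
        if self.queries:
            self.min_dists.append(min(self._l2(x, q) for q in self.queries))
        self.queries.append(x)
        if len(self.min_dists) < self.min_samples:
            return False  # not enough evidence yet
        n = len(self.min_dists)
        mean = sum(self.min_dists) / n
        var = sum((d - mean) ** 2 for d in self.min_dists) / n
        if var == 0:
            return True  # perfectly regular spacing: clearly synthetic queries
        skew = sum((d - mean) ** 3 for d in self.min_dists) / (n * var ** 1.5)
        return abs(skew) > self.skew_threshold
```

Usage: call `observe()` per client query; once it returns True, the defense in the paper degrades or blocks that client's predictions. A stream of evenly spaced synthetic probes, for example, is flagged as soon as enough distances accumulate.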
Alternatives and similar repositories for prada-protecting-against-dnn-model-stealing-attacks:
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) · ☆33 · Updated Nov 4, 2020
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021) · ☆34 · Updated Jul 15, 2021
- ☆50 · Updated Feb 27, 2021
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" · ☆31 · Updated Dec 12, 2021
- CVPR 2021 official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 · ☆76 · Updated Apr 1, 2024
- ☆18 · Updated Nov 13, 2021
- Concealed Data Poisoning Attacks on NLP Models · ☆21 · Updated Sep 4, 2023
- A novel data-free model stealing method based on a GAN · ☆133 · Updated Oct 11, 2022
- ☆10 · Updated Jan 3, 2023
- ☆14 · Updated Sep 17, 2024
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning · ☆32 · Updated Oct 10, 2022
- Copycat CNN · ☆28 · Updated Apr 17, 2024
- ☆34 · Updated Mar 28, 2022
- Code for the NeurIPS 2019 paper https://arxiv.org/abs/1910.04749 · ☆34 · Updated Mar 28, 2020
- ☆17 · Updated Nov 30, 2022
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks…" · ☆128 · Updated Jan 18, 2022
- Protect your machine learning models easily and securely with watermarking 🔑 · ☆97 · Updated Apr 24, 2024
- Code for "Label-Consistent Backdoor Attacks" · ☆57 · Updated Nov 22, 2020
- ☆20 · Updated Aug 7, 2023
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · Updated Oct 3, 2023
- Official implementation of the IEEE S&P '22 paper "SoK: How Robust is Deep Neural Network Image Classification Watermarking" · ☆117 · Updated May 24, 2023
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆39 · Updated Jan 28, 2019
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… · ☆19 · Updated Jul 27, 2023
- [CCS '22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆18 · Updated Jul 12, 2022
- Code for identifying natural backdoors in existing image datasets · ☆15 · Updated Aug 24, 2022
- ☆19 · Updated Mar 5, 2018
- ☆27 · Updated Oct 17, 2022
- Implementation of the paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted at the NeurIPS Workshop on … · ☆23 · Updated Oct 13, 2021
- Website & documentation: https://sbaresearch.github.io/model-watermarking/ · ☆25 · Updated Sep 22, 2023
- [NeurIPS 2019] Code repo of our novel passport-based DNN ownership verification schemes, i.e. we embed a passport layer into va… · ☆85 · Updated Aug 29, 2023
- ☆94 · Updated Mar 23, 2021
- Adversarial attack on a recurrent neural network (LSTM) built for sentiment analysis, using TensorFlow · ☆21 · Updated Oct 6, 2018
- IDA (sort of) headless · ☆27 · Updated Feb 17, 2024
- ☆23 · Updated Aug 24, 2020
- Official implementation of greedy residuals for the paper "Watermarking Deep Neural Networks with Greedy Residuals" (ICML 2021) · ☆24 · Updated May 21, 2022
- PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… · ☆23 · Updated Dec 19, 2020
- Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow · ☆24 · Updated Aug 30, 2021
- ☆24 · Updated Apr 14, 2019
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆26 · Updated Jan 7, 2022