tribhuvanesh / prediction-poisoning
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
☆29 · Updated 4 years ago
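As a rough orientation to what the repo does, here is a minimal, hypothetical sketch of the prediction-poisoning idea: the defender answers each query with a perturbed posterior, kept within a utility budget of the true one, so that an attacker training a surrogate model on the returned probabilities receives misleading supervision. The names below (`poisoned_prediction`, `epsilon`) are illustrative, not from the repo; the paper's actual MAD defense optimizes the perturbation to maximize the angular deviation of the attacker's gradient, rather than sampling it at random as done here.

```python
import torch
import torch.nn.functional as F

def poisoned_prediction(logits: torch.Tensor, epsilon: float = 0.5) -> torch.Tensor:
    """Serve a perturbed posterior instead of the true one.

    Illustrative only: applies a random perturbation toward another
    point on the probability simplex, scaled so the L1 distance to
    the true posterior is at most `epsilon`. The paper's MAD defense
    instead optimizes the perturbation direction.
    """
    y = F.softmax(logits, dim=-1)                        # true posterior
    target = torch.rand_like(y)
    target = target / target.sum(dim=-1, keepdim=True)   # random simplex point
    delta = target - y                                   # feasible direction
    norm = delta.abs().sum(dim=-1, keepdim=True).clamp_min(1e-12)
    # Convex step toward the random point, capped by the L1 budget.
    y_poisoned = y + delta * torch.clamp(epsilon / norm, max=1.0)
    return y_poisoned / y_poisoned.sum(dim=-1, keepdim=True)
```

With `epsilon = 0` this degenerates to honest predictions; larger budgets trade the defender's prediction utility for more degradation of the attacker's surrogate.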
Related projects
Alternatives and complementary repositories for prediction-poisoning
- Code for the paper: Label-Only Membership Inference Attacks ☆64 · Updated 3 years ago
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆27 · Updated 3 years ago
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆46 · Updated 6 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆31 · Updated 4 years ago
- Code for "Label-Consistent Backdoor Attacks"☆49Updated 4 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation".☆28Updated 2 years ago
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆50 · Updated 2 years ago
- Official repository for the Data-Free Model Extraction paper (CVPR 2021): https://arxiv.org/abs/2011.14779 ☆69 · Updated 7 months ago
- Anti-Backdoor Learning (NeurIPS 2021) ☆78 · Updated last year
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- ICCV 2021: we find that most existing backdoor-attack triggers in deep learning contain severe artifacts in the frequency domain ☆41 · Updated 2 years ago
- Attacking a dog-vs-fish classifier built with transfer learning (InceptionV3) ☆68 · Updated 6 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆46 · Updated 2 years ago
- Code for ML Doctor ☆86 · Updated 3 months ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆24 · Updated 3 years ago
- Defending against Model Stealing via Verifying Embedded External Features ☆32 · Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆31 · Updated last month
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 4 years ago
- ATTA (Efficient Adversarial Training with Transferable Adversarial Examples) ☆32 · Updated 4 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago