MKariya1998 / GMI-Attack
☆11, updated 4 years ago
Alternatives and similar repositories for GMI-Attack
Users interested in GMI-Attack are comparing it with the repositories listed below.
- Official repository for the Data-Free Model Extraction paper (CVPR 2021): https://arxiv.org/abs/2011.14779 (☆74, updated last year)
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… (☆124, updated 3 years ago)
- ☆84, updated 4 years ago
- ☆45, updated 2 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" (☆130, updated 2 years ago)
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… (☆58, updated last year)
- Defending against Model Stealing via Verifying Embedded External Features (☆38, updated 3 years ago)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆30, updated 3 years ago)
- Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) (☆35, updated last year)
- Invisible Backdoor Attack with Sample-Specific Triggers (☆102, updated 3 years ago)
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" (☆59, updated 11 months ago)
- ☆32, updated 3 years ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 (☆22, updated 3 years ago)
- ☆22, updated 5 years ago
- ☆25, updated last year
- Anti-Backdoor Learning (NeurIPS 2021) (☆84, updated 2 years ago)
- Attacking a dog-vs-fish classifier that uses transfer learning with InceptionV3 (☆71, updated 7 years ago)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" (☆53, updated 2 years ago)
- Unlearnable Examples: Making Personal Data Unexploitable (ICLR 2021) (☆171, updated last year)
- ☆67, updated last year
- A simple implementation of BadNets on MNIST (☆33, updated 6 years ago)
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation (☆51, updated 3 years ago)
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems (☆28, updated 4 years ago)
- A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks" (☆157, updated 6 years ago)
- Universal Adversarial Perturbations (UAPs) for PyTorch (☆49, updated 4 years ago)
- Code for "LAS-AT: Adversarial Training with Learnable Attack Strategy" (CVPR 2022) (☆118, updated 3 years ago)
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) (☆48, updated 5 years ago)
- Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… (☆19, updated last year)
- A PyTorch implementation of "Towards Evaluating the Robustness of Neural Networks" (☆59, updated 6 years ago)
- Code for the NeurIPS 2020 paper "Practical No-box Adversarial Attacks against DNNs" (☆34, updated 4 years ago)