Guo-Yunzhe / Awesome_BackdoorAttack_against_NeuralNetwork
A paper summary of Backdoor Attack against Neural Network
☆13 · Updated 5 years ago
Alternatives and similar repositories for Awesome_BackdoorAttack_against_NeuralNetwork:
Users interested in Awesome_BackdoorAttack_against_NeuralNetwork are comparing it to the repositories listed below.
- This is the documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… ☆19 · Updated 3 years ago
- Code release for DeepJudge (S&P '22) ☆50 · Updated last year
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) ☆30 · Updated 7 years ago
- KNN Defense Against Clean Label Poisoning Attacks ☆12 · Updated 3 years ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- This is the implementation of our paper "Open-sourced Dataset Protection via Backdoor Watermarking", accepted by the NeurIPS Workshop on … ☆19 · Updated 3 years ago
- Source code release for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆53 · Updated 2 months ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- Source code for HufuNet; the paper is accepted by IEEE TDSC. ☆22 · Updated last year
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆49 · Updated 2 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated 4 years ago
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 2 years ago
- Code for the NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆32 · Updated 4 years ago
- Craft poisoned data using MetaPoison ☆49 · Updated 3 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆17 · Updated 5 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆64 · Updated 3 years ago
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification ☆28 · Updated 2 weeks ago
- Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning ☆10 · Updated 5 years ago
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems ☆27 · Updated 3 years ago