Huiying-Li / Latent-Backdoor
This is the documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. For details, please see the paper "Latent Backdoor Attacks on Deep Neural Networks" (CCS '19).
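The paper's attack targets transfer learning: a trigger is tied to the target class inside a teacher model's intermediate ("latent") representations, so the backdoor survives when a student fine-tunes on top. As a rough orientation only, here is a minimal sketch of the generic data-poisoning step that backdoor attacks of this kind build on. It is not this repository's code; `stamp_trigger`, `poison_dataset`, the white-square trigger, and the 10% poison rate are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

def stamp_trigger(images, patch_size=4, value=1.0):
    """Stamp a square trigger patch into the bottom-right corner.

    `images` is a float array of shape (N, H, W, C) scaled to [0, 1].
    The fixed white square is only an illustration; the latent backdoor
    paper instead optimizes the trigger against intermediate-layer
    representations of the teacher model.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:, :] = value
    return poisoned

def poison_dataset(x, y, target_class, poison_fraction=0.1, seed=0):
    """Return (x, y) with a fraction of samples stamped and relabeled."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(x), size=int(len(x) * poison_fraction), replace=False)
    x, y = x.copy(), y.copy()
    x[idx] = stamp_trigger(x[idx])
    y[idx] = target_class  # attacker-chosen label
    return x, y

# Example on MNIST: poison 10% of the training set toward class 7.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_poisoned, y_poisoned = poison_dataset(x_train, y_train, target_class=7)
```

A model trained on such a set learns to map the trigger to the target class while behaving normally on clean inputs; the latent variant differs in that the trigger-to-target association is planted at an intermediate layer of the teacher rather than at the output labels of the final task.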
Related projects:
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Official implementation of the CVPR 2022 Oral paper "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks"
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning"
- Backdoor attack on a LeNet-5 network via data poisoning, using the MNIST handwritten digit dataset
- Source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks"
- Invisible Backdoor Attack with Sample-Specific Triggers
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
- Code for "Label-Consistent Backdoor Attacks"
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018)
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks
- competition
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20)
- Implementation of the paper "Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning"
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks"
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation"