Official implementation of "Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks" (CVPR 2022 Oral).
☆26, updated Jul 3, 2025
Alternatives and similar repositories for Subnet-Replacement-Attack
Users interested in Subnet-Replacement-Attack are comparing it with the repositories listed below.
- ☆20, updated Aug 7, 2023
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" (☆63, updated May 8, 2023)
- [PyTorch implementation] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆17, updated Feb 27, 2021)
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021) (☆135, updated Nov 11, 2024)
- A PyTorch implementation of several backdoor attack algorithms, including BadNets, SIG, FIBA, FTrojan, and others (☆22, updated Dec 7, 2024)
- Disguising Attacks with Explanation-Aware Backdoors (IEEE S&P 2023) (☆11, updated Jan 3, 2026)
- Applying backdoor attacks to BadNet on MNIST and ResNet on CIFAR-10 (☆13, updated Aug 25, 2021)
- ☆10, updated Oct 31, 2022
- ☆26, updated Jan 11, 2023
- Backdoor Cleansing with Unlabeled Data (CVPR 2023) (☆12, updated Apr 6, 2023)
- Implementation of the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur…" (☆11, updated Aug 24, 2022)
- Implementation of the CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" (☆24, updated Apr 5, 2022)
- ☆27, updated Nov 9, 2022
- ☆584, updated Jul 4, 2025
- Anti-Backdoor Learning (NeurIPS 2021) (☆84, updated Jul 20, 2023)
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" (☆35, updated Jan 9, 2023)
- Code for the paper "FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis" (☆38, updated Sep 12, 2022)
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" (☆15, updated Jan 13, 2023)
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) (☆47, updated Nov 3, 2018)
- Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) (☆38, updated Jul 22, 2024)
- ☆19, updated Mar 26, 2022
- ICCV 2021: We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… (☆48, updated Apr 27, 2022)
- ☆20, updated May 6, 2022
- ☆22, updated Sep 16, 2022
- Documentation of the TensorFlow/Keras implementation of Latent Backdoor Attacks. Please see the paper for details: Latent Back… (☆22, updated Sep 8, 2021)
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks…" (☆128, updated Jan 18, 2022)
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (☆18, updated May 13, 2019)
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" (☆53, updated Nov 16, 2022)
- [ICLR 2023, Best Paper Award at the ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning (☆60, updated Dec 11, 2024)
- Multi-metrics adaptively identifies backdoors in federated learning (☆37, updated Aug 7, 2025)
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… (☆314, updated Feb 28, 2020)
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System (☆32, updated Nov 5, 2024)
- Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems (☆28, updated Apr 1, 2021)
- [AAAI'21] Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification (☆29, updated Dec 31, 2024)
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image (☆36, updated Oct 29, 2025)
- Code for the AAAI-21 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" (☆35, updated Oct 3, 2022)
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" (https://arxiv.org/abs/2206.10341) (☆83, updated Apr 1, 2023)
- ☆34, updated Jun 27, 2022
- A curated list of papers & resources on backdoor attacks and defenses in deep learning (☆236, updated Mar 15, 2024)