The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
☆34 · Updated Mar 28, 2020
Alternatives and similar repositories for Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
Users interested in Defending-Neural-Backdoors-via-Generative-Distribution-Modeling are comparing it to the repositories listed below.
- ☆27 · Updated Oct 17, 2022
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated Nov 11, 2020
- ConvexPolytopePosioning ☆37 · Updated Jan 10, 2020
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- ☆26 · Updated Jan 25, 2019
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆62 · Updated Nov 12, 2024
- PyTorch implementation of backdoor unlearning ☆21 · Updated Jun 8, 2022
- Camouflage poisoning via machine unlearning ☆19 · Updated Jul 3, 2025
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks" ☆128 · Updated Jan 18, 2022
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated Aug 24, 2022
- ☆22 · Updated Sep 17, 2024
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated Nov 3, 2018
- ☆102 · Updated Oct 19, 2020
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021)