superrrpotato / Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
☆34 · Updated Mar 28, 2020
Alternatives and similar repositories for Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
Users interested in Defending-Neural-Backdoors-via-Generative-Distribution-Modeling are comparing it to the repositories listed below.
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020)☆17Nov 11, 2020Updated 5 years ago
- ConvexPolytopePosioning ☆37 · Updated Jan 10, 2020
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- ☆26 · Updated Jan 25, 2019
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Updated Nov 25, 2019
- This is for releasing the source code of the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆61 · Updated Nov 12, 2024
- Camouflage poisoning via machine unlearning ☆19 · Updated Jul 3, 2025
- This is an implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆128 · Updated Jan 18, 2022
- ☆22 · Updated Sep 17, 2024
- Code for identifying natural backdoors in existing image datasets. ☆15 · Updated Aug 24, 2022
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated Nov 3, 2018
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆20 · Updated Oct 8, 2024
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆314 · Updated Feb 28, 2020
- Official Repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆44 · Updated Oct 24, 2023
- Code for "Can We Characterize Tasks Without Labels or Features?" (CVPR 2021) ☆11 · Updated Aug 31, 2021
- Official code for FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- Repository that contains the code for the paper titled, 'Unifying Distillation with Personalization in Federated Learning'. ☆13 · Updated May 31, 2021
- Trojan Attack on Neural Network ☆191 · Updated Mar 25, 2022
- This is the official implementation of the ICML 2023 paper - Can Forward Gradient Match Backpropagation? ☆13 · Updated May 31, 2023
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated Apr 29, 2020
- Anti-Backdoor learning (NeurIPS 2021) ☆83 · Updated Jul 20, 2023
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated Oct 10, 2022
- Official Implementation of ICLR 2022 paper, "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated Jun 1, 2022
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆84 · Updated Jun 12, 2023
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆29 · Updated Feb 8, 2021
- Craft poisoned data using MetaPoison ☆54 · Updated Apr 5, 2021
- Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning' ☆55 · Updated Mar 24, 2021
- ☆17 · Updated Feb 21, 2020
- ☆68 · Updated Sep 29, 2020
- Reference implementation of the PRADA model stealing defense. IEEE Euro S&P 2019. ☆35 · Updated Mar 20, 2019
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆37 · Updated Jul 22, 2024
- Code for "Imitation Attacks and Defenses for Black-box Machine Translations Systems"☆35May 1, 2020Updated 5 years ago
- This is the official repository for our NeurIPS'22 paper "Watermarking for Out-of-distribution Detection." ☆18 · Updated Feb 24, 2023
- Caffe code for the paper "Adversarial Manipulation of Deep Representations" ☆17 · Updated Nov 6, 2017
- Watermarking Deep Neural Networks (USENIX 2018) ☆101 · Updated Sep 2, 2020
- A unified benchmark problem for data poisoning attacks ☆161 · Updated Oct 4, 2023
- Official implementation of the CVPR 2022 paper "Backdoor Attacks on Self-Supervised Learning". ☆76 · Updated Oct 24, 2023