The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749
☆34 · Updated Mar 28, 2020
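For context, most repositories listed below attack or defend patch-style backdoors, where a small trigger stamped onto an input flips a poisoned model's prediction; the paper above argues such triggers form a distribution rather than a single point. A minimal sketch of this threat model (the function name and shapes are illustrative, not taken from any repository here):

```python
import numpy as np

def stamp_trigger(image, trigger, x=0, y=0):
    """Stamp a trigger patch onto an image of shape (H, W, C).

    A BadNets-style fixed patch trigger; defenses such as Neural
    Cleanse or STRIP (listed below) try to detect or invert it.
    """
    patched = image.copy()  # leave the clean input untouched
    h, w = trigger.shape[:2]
    patched[y:y + h, x:x + w] = trigger
    return patched

# Example: a 3x3 white square in the top-left corner of a 32x32 image
clean = np.zeros((32, 32, 3))
trigger = np.ones((3, 3, 3))
poisoned = stamp_trigger(clean, trigger)
```

At training time an attacker pairs such poisoned inputs with a chosen target label; at test time the same patch activates the backdoor.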
Alternatives and similar repositories for Defending-Neural-Backdoors-via-Generative-Distribution-Modeling
Users interested in Defending-Neural-Backdoors-via-Generative-Distribution-Modeling are comparing it to the repositories listed below.
- Code for the ICLR 2022 paper "Trigger Hunting with a Topological Prior for Trojan Detection" ☆11 · Updated Sep 19, 2023
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) ☆17 · Updated Nov 11, 2020
- ConvexPolytopePoisoning ☆37 · Updated Jan 10, 2020
- Source code for the ACSAC paper "STRIP: A Defence Against Trojan Attacks on Deep Neural Networks" ☆62 · Updated Nov 12, 2024
- Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) ☆47 · Updated Nov 3, 2018
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… ☆27 · Updated Apr 29, 2020
- ☆26 · Updated Jan 25, 2019
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated Aug 24, 2022
- ☆22 · Updated Sep 17, 2024
- Implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆313 · Updated Feb 28, 2020
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks…" ☆127 · Updated Jan 18, 2022
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated Jul 20, 2023
- ☆102 · Updated Oct 19, 2020
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆21 · Updated Oct 8, 2024
- Official repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆45 · Updated Oct 24, 2023
- Camouflage poisoning via machine unlearning ☆19 · Updated Jul 3, 2025
- PyTorch implementation of backdoor unlearning ☆21 · Updated Jun 8, 2022
- Code for the paper "Unifying Distillation with Personalization in Federated Learning" ☆13 · Updated May 31, 2021
- Trojan Attack on Neural Network ☆190 · Updated Mar 25, 2022
- A minimal PyTorch implementation of Label-Consistent Backdoor Attacks ☆28 · Updated Feb 8, 2021
- A repository for quickly generating synthetic data and associated trojaned deep learning models ☆84 · Updated Jun 12, 2023
- ☆68 · Updated Sep 29, 2020
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" ☆53 · Updated Nov 16, 2022
- ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation ☆51 · Updated Jun 1, 2022
- Sparse and Imperceivable Adversarial Attacks (ICCV 2019) ☆43 · Updated Nov 8, 2020
- Official code for the FAccT '21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning", https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP Findings 2020) ☆15 · Updated Oct 8, 2020
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" ☆26 · Updated Jan 7, 2022
- [ICLR '21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated Oct 10, 2022
- Implementation of the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated Apr 5, 2022
- Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) ☆38 · Updated Jul 22, 2024
- ☆13 · Updated Jul 26, 2021
- Code for the IEEE S&P 2018 paper "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning" ☆55 · Updated Mar 24, 2021
- Watermarking Deep Neural Networks (USENIX 2018) ☆100 · Updated Sep 2, 2020
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆37 · Updated Jan 28, 2019
- Official PyTorch implementation for Continual Learning and Private Unlearning ☆18 · Updated Jul 19, 2022
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Federated Learning ☆23 · Updated Nov 17, 2022
- A list of backdoor learning resources ☆1,162 · Updated Jul 31, 2024