Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks"
☆13 · Updated Aug 22, 2022
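For context, DPA's core mechanism is: deterministically partition the training set, train one base classifier per partition, and predict by majority vote, so each poisoned training sample can change at most one vote. Below is a minimal sketch of the partitioning and aggregation steps, not the authors' implementation; the certificate shown omits the paper's label-index tie-breaking term for brevity.

```python
import hashlib
from collections import Counter

def partition_index(sample_key: str, k: int) -> int:
    # Deterministic hash partition: each training sample lands in exactly
    # one of k partitions, so one poisoned sample corrupts at most one
    # base classifier.
    digest = hashlib.sha256(sample_key.encode()).hexdigest()
    return int(digest, 16) % k

def dpa_aggregate(votes):
    # votes: one predicted label per base classifier.
    # Returns (majority label, certified poisoning size).
    ranked = Counter(votes).most_common()
    top_label, top_count = ranked[0]
    runner_count = ranked[1][1] if len(ranked) > 1 else 0
    # Each poisoned sample flips at most one vote, and closing a gap of g
    # requires more than g/2 flips, so floor(g/2) poisons are certified safe.
    certificate = (top_count - runner_count) // 2
    return top_label, certificate
```

A certificate of `c` means the ensemble's prediction is guaranteed unchanged under any attack that inserts, removes, or modifies up to `c` training samples.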
Alternatives and similar repositories for DPA
Users interested in DPA are comparing it to the repositories listed below.
- ☆12 · Updated Dec 9, 2020
- KNN Defense Against Clean Label Poisoning Attacks ☆13 · Updated Sep 24, 2021
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Updated Sep 23, 2021
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated Jul 16, 2021
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- ☆22 · Updated Sep 17, 2024
- ☆19 · Updated Jun 21, 2021
- Defending Against Backdoor Attacks Using Robust Covariance Estimation ☆22 · Updated Jul 12, 2021
- This work corroborates a run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; it is a multi-domain Trojan … ☆10 · Updated Mar 7, 2021
- Official code for FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Updated Mar 9, 2021
- Codes for the ICLR 2022 paper: Trigger Hunting with a Topological Prior for Trojan Detection ☆11 · Updated Sep 19, 2023
- ☆11 · Updated Jan 25, 2022
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong, … ☆15 · Updated Nov 27, 2023
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Updated Jul 11, 2023
- Code for Boosting Fast Adversarial Training with Learnable Adversarial Initialization (TIP 2022) ☆29 · Updated Aug 22, 2023
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- Craft poisoned data using MetaPoison ☆54 · Updated Apr 5, 2021
- This repository is the implementation of Deep Dirichlet Process Mixture Models (UAI 2022) ☆15 · Updated May 19, 2022
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · Updated May 25, 2021
- Code for "Label-Consistent Backdoor Attacks" ☆57 · Updated Nov 22, 2020
- ☆16 · Updated Dec 3, 2021
- competition ☆17 · Updated Aug 1, 2020
- ☆19 · Updated Mar 26, 2022
- ☆18 · Updated Jun 15, 2021
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification ☆13 · Updated Nov 7, 2023
- Morphence: An implementation of a moving target defense against adversarial example attacks, demonstrated for image classification models … ☆23 · Updated Aug 9, 2024
- Code for LAS-AT: Adversarial Training with Learnable Attack Strategy (CVPR 2022) ☆118 · Updated Mar 30, 2022
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆112 · Updated Aug 19, 2024
- The HyFed framework provides an easy-to-use API to develop federated, privacy-preserving machine learning algorithms. ☆18 · Updated Sep 10, 2025
- ☆18 · Updated Nov 13, 2021
- Anupam Datta, Matt Fredrikson, Klas Leino, Kaiji Lu, Shayak Sen, Zifan Wang ☆18 · Updated Feb 23, 2021
- Code implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆20 · Updated Aug 15, 2022
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP Findings 2020) ☆15 · Updated Oct 8, 2020
- Data-Efficient Backdoor Attacks ☆20 · Updated Jun 15, 2022
- This technique modifies image data so that any model trained on it will bear an identifiable mark. ☆44 · Updated Aug 13, 2021
- ☆20 · Updated May 6, 2022
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated Jul 20, 2023
- ☆20 · Updated Mar 14, 2022
- ☆18 · Updated Jul 10, 2022