How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
☆14 · updated Jul 16, 2021
Alternatives and similar repositories for PoisoningCertifiedDefenses
Users interested in PoisoningCertifiedDefenses are comparing it to the libraries listed below.
- ☆12 · updated Dec 9, 2020
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" · ☆13 · updated Aug 22, 2022
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · updated Jun 20, 2025
- Code for the paper "Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation" by Alexander Levine and Soheil Feizi · ☆10 · updated Aug 22, 2022
- Official code for the FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… · ☆13 · updated Mar 9, 2021
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · updated Feb 27, 2020
- A paper summary of backdoor attacks against neural networks · ☆13 · updated Aug 9, 2019
- ICCV 2021: we find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… · ☆48 · updated Apr 27, 2022
- A simple backdoor model for federated learning. We use MNIST as the original data set for the data attack and we use the CIFAR-10 data set… · ☆14 · updated Jun 19, 2020
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning · ☆23 · updated Dec 12, 2024
- ☆20 · updated May 6, 2022
- ☆83 · updated Aug 3, 2021
- The official implementation of the USENIX Security '23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … · ☆20 · updated Apr 27, 2023
- ☆23 · updated Sep 21, 2022
- Code for the paper "Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers" · ☆60 · updated Apr 29, 2022
- ☆24 · updated Dec 8, 2024
- MACER: MAximizing CErtified Radius (ICLR 2020) · ☆31 · updated Jan 5, 2020
- Craft poisoned data using MetaPoison · ☆54 · updated Apr 5, 2021
- ImageNet dataset for PyTorch · ☆23 · updated Jan 15, 2019
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆26 · updated Jan 7, 2022
- Profit Allocation for Federated Learning · ☆24 · updated Apr 27, 2020
- Distributional Shapley: A Distributional Framework for Data Valuation · ☆30 · updated May 1, 2024
- Code repository for the USENIX Security 2023 paper "Towards a Proactive ML Approach for Detecting Backdoor Poison Samples" · ☆30 · updated Jul 11, 2023
- ☆33 · updated May 14, 2022
- Automatic defect recognition in X-ray testing using computer vision · ☆12 · updated Dec 8, 2018
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021) · ☆74 · updated Aug 5, 2021
- Anti-Backdoor Learning (NeurIPS 2021) · ☆84 · updated Jul 20, 2023
- Research simulation toolkit for federated learning · ☆13 · updated Nov 7, 2020
- ☆13 · updated Oct 11, 2024
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) · ☆10 · updated Jul 15, 2024
- A LaTeX template for a science textbook · ☆10 · updated Apr 7, 2020
- Code for Addressing Class Imbalance in Federated Learning (AAAI 2021) · ☆86 · updated Dec 9, 2021
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · updated Oct 3, 2023
- ☆48 · updated Sep 29, 2024
- ☆44 · updated Oct 1, 2024
- Code for the paper "Interpret Federated Learning with Shapley Values" · ☆40 · updated May 18, 2019
- A method for training neural networks that are provably robust to adversarial attacks (IJCAI 2019) · ☆10 · updated Sep 3, 2019
- Code used to produce the experimental results for the paper "Deep Structured Prediction with Nonlinear Output Activations" · ☆11 · updated May 6, 2019
- Attacks using out-of-distribution adversarial examples · ☆11 · updated Nov 19, 2019