How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021)
☆14 · Jul 16, 2021 · Updated 4 years ago
Alternatives and similar repositories for PoisoningCertifiedDefenses
Users that are interested in PoisoningCertifiedDefenses are comparing it to the libraries listed below.
- ☆12 · Dec 9, 2020 · Updated 5 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" · ☆13 · Aug 22, 2022 · Updated 3 years ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · Jun 20, 2025 · Updated 9 months ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · Feb 27, 2020 · Updated 6 years ago
- Official code for FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… · ☆13 · Mar 9, 2021 · Updated 5 years ago
- Code for the paper "Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation" by Alexander Levine and Soheil Feizi. · ☆10 · Aug 22, 2022 · Updated 3 years ago
- Code for the paper Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers · ☆60 · Apr 29, 2022 · Updated 3 years ago
- ☆83 · Aug 3, 2021 · Updated 4 years ago
- ICCV 2021. We find that most existing triggers of backdoor attacks in deep learning contain severe artifacts in the frequency domain. This Rep… · ☆48 · Apr 27, 2022 · Updated 3 years ago
- ☆23 · Sep 21, 2022 · Updated 3 years ago
- A paper summary of Backdoor Attack against Neural Network · ☆13 · Aug 9, 2019 · Updated 6 years ago
- [ICLR 2022] Boosting Randomized Smoothing with Variance Reduced Classifiers · ☆11 · Mar 29, 2022 · Updated 3 years ago
- ☆20 · May 6, 2022 · Updated 3 years ago
- [ICLR 2025] Code & data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" · ☆14 · Jun 21, 2024 · Updated last year
- ☆24 · Dec 8, 2024 · Updated last year
- Code to replicate the Representation Noising paper and tools for evaluating defences against harmful fine-tuning · ☆24 · Dec 12, 2024 · Updated last year
- MACER: MAximizing CErtified Radius (ICLR 2020) · ☆31 · Jan 5, 2020 · Updated 6 years ago
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … · ☆20 · Apr 27, 2023 · Updated 2 years ago
- Certified Object Detection with Randomized Median Smoothing · ☆12 · Oct 21, 2020 · Updated 5 years ago
- This is a simple backdoor model for federated learning. We use MNIST as the original data set for data attacks and we use the CIFAR-10 data set… · ☆14 · Jun 19, 2020 · Updated 5 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation · ☆30 · May 1, 2024 · Updated last year
- Craft poisoned data using MetaPoison · ☆54 · Apr 5, 2021 · Updated 4 years ago
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · Oct 3, 2023 · Updated 2 years ago
- Profit Allocation for Federated Learning · ☆25 · Apr 27, 2020 · Updated 5 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆26 · Jan 7, 2022 · Updated 4 years ago
- Code for the paper "Toward Optimal LLM Alignments Using Two-Player Games" · ☆17 · Jun 20, 2024 · Updated last year
- ☆16 · Jul 17, 2022 · Updated 3 years ago
- Graduation project: layered self-embedding digital watermarking · ☆12 · Sep 2, 2019 · Updated 6 years ago
- AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks · ☆17 · Jan 27, 2021 · Updated 5 years ago
- ☆14 · Feb 26, 2025 · Updated last year
- ☆12 · Jul 8, 2023 · Updated 2 years ago
- [ACL 2024] ValueBench: Towards Comprehensively Evaluating Value Orientations and Understanding of Large Language Models · ☆25 · Jan 11, 2025 · Updated last year
- [NeurIPS 2025@FoRLM] R1-Compress: Long Chain-of-Thought Compression via Chunk Compression and Search · ☆17 · Jan 24, 2026 · Updated 2 months ago
- Imagenet dataset for pytorch · ☆23 · Jan 15, 2019 · Updated 7 years ago
- ☆48 · Sep 29, 2024 · Updated last year
- ☆44 · Oct 1, 2024 · Updated last year
- Defending Against Backdoor Attacks Using Robust Covariance Estimation · ☆22 · Jul 12, 2021 · Updated 4 years ago
- Identification of the Adversary from a Single Adversarial Example (ICML 2023) · ☆10 · Jul 15, 2024 · Updated last year
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples · ☆30 · Jul 11, 2023 · Updated 2 years ago