Sanghyun-Hong / Gradient-Shaping
[Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
☆10 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for Gradient-Shaping
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆9 · Updated 3 years ago
- ☆19 · Updated last year
- KNN Defense Against Clean Label Poisoning Attacks ☆11 · Updated 3 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆11 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Pytorch implementation of backdoor unlearning. ☆16 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆19 · Updated 2 months ago
- ☆13 · Updated 2 years ago
- verifying machine unlearning by backdooring ☆18 · Updated last year
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆12 · Updated 3 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated 2 years ago
- ☆18 · Updated 6 months ago
- ☆25 · Updated 5 years ago
- Codes for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆26 · Updated 4 years ago
- ☆23 · Updated last year
- Camouflage poisoning via machine unlearning ☆15 · Updated last year
- ☆16 · Updated 3 years ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆28 · Updated 10 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆24 · Updated this week
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆31 · Updated 4 years ago
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated last year
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- ☆23 · Updated 2 years ago
- ☆32 · Updated 2 months ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks". ☆14 · Updated 2 years ago
- This is the implementation for CVPR 2022 Oral paper "Better Trigger Inversion Optimization in Backdoor Scanning." ☆24 · Updated 2 years ago
- Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022) ☆17 · Updated 2 years ago
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 4 years ago