kohpangwei / data-poisoning-journal-release
Related projects:
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020)
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC…
- Defending Against Backdoor Attacks Using Robust Covariance Estimation
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022)
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training"
- "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models"
- Code for "Neuron Shapley: Discovering the Responsible Neurons"
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning"
- TextHide: Tackling Data Privacy in Language Understanding Tasks
- Certified Removal from Machine Learning Models
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers"
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning
- Craft poisoned data using MetaPoison
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient"
- Code for "Label-Consistent Backdoor Attacks"
- On the effectiveness of adversarial training against common corruptions [UAI 2022]
- RAB: Provable Robustness Against Backdoor Attacks
- Adversarial Defense for Ensemble Models (ICML 2019)
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021)