kohpangwei / data-poisoning-release
☆32, updated 6 years ago
Related projects
Alternatives and complementary repositories for data-poisoning-release
- Interpretation of Neural Network is Fragile ☆36, updated 6 months ago
- Interval attacks (adversarial ML) ☆21, updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31, updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50, updated 4 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆67, updated 2 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆51, updated 4 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23, updated 5 years ago
- Craft poisoned data using MetaPoison ☆47, updated 3 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ☆62, updated 5 years ago
- Implementation of Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15, updated 4 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) ☆108, updated 6 years ago
- Code for the paper "Blind Justice: Fairness with Encrypted Sensitive Attributes", ICML 2018 ☆14, updated 5 years ago
- "Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers" (NeurIPS 2019, previously called "A Stratified Approach …) ☆17, updated 5 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆84, updated 7 years ago
- Code for reproducing the white-box adversarial attacks in “EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples,” … ☆21, updated 6 years ago
- Code for Stability Training with Noise (STN) ☆21, updated 3 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ☆54, updated last year
- Code and data for the experiments in "On Fairness and Calibration" ☆50, updated 2 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS'19) ☆34, updated 3 years ago
- An (imperfect) implementation of wide ResNets and Parseval regularization ☆8, updated 4 years ago
- Adversarial Examples: Attacks and Defenses for Deep Learning ☆31, updated 6 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30, updated 3 years ago
- AAAI 2019 oral presentation ☆50, updated 3 months ago
- Code for "Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors" ☆14, updated 6 years ago