mireshghallah / cloak-www-21
Code for the WWW21 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy"
☆11 · Updated 4 years ago
Alternatives and similar repositories for cloak-www-21
Users interested in cloak-www-21 are comparing it to the libraries listed below.
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Updated 6 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 3 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness), a robustness metric for deep neural networks ☆63 · Updated 4 years ago
- ☆26 · Updated 6 years ago
- ConvexPolytopePosioning ☆37 · Updated 5 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy", NeurIPS 2019 ☆33 · Updated 4 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning ☆50 · Updated 5 years ago
- ☆57 · Updated 2 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Updated 6 years ago
- Code for the paper "Adversarial Neural Pruning with Latent Vulnerability Suppression" ☆15 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- Semisupervised learning for adversarial robustness (https://arxiv.org/pdf/1905.13736.pdf) ☆141 · Updated 5 years ago
- Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021] ☆28 · Updated 3 years ago
- ☆26 · Updated 6 years ago
- ☆10 · Updated 3 years ago
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆74 · Updated last year
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆39 · Updated 4 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching ☆111 · Updated last year
- ☆80 · Updated 3 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated 5 years ago
- Craft poisoned data using MetaPoison ☆54 · Updated 4 years ago
- Official TensorFlow implementation of Adversarial Training for Free!, which trains robust models at no extra cost compared to natural trai… ☆177 · Updated last year
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 5 years ago
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- Code we used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆29 · Updated 7 years ago