mireshghallah / cloak-www-21
Code for the WWW21 paper "Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy"
☆11 · Updated 4 years ago
Alternatives and similar repositories for cloak-www-21
Users interested in cloak-www-21 are comparing it to the repositories listed below.
- Further improving the robustness of mixup-trained models at inference time (ICLR 2020) ☆60 · Updated 5 years ago
- Code for the paper "Adversarial Neural Pruning with Latent Vulnerability Suppression"☆15Updated 2 years ago
- InstaHide: Instance-hiding Schemes for Private Distributed Learning☆50Updated 4 years ago
- Adversarial Defense for Ensemble Models (ICML 2019)☆61Updated 4 years ago
- Differentially Private Optimization for PyTorch 👁🙅♀️☆186Updated 5 years ago
- Semisupervised learning for adversarial robustness https://arxiv.org/pdf/1905.13736.pdf☆142Updated 5 years ago
- Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training ☆176 · Updated last year
- ☆30 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47 · Updated 2 years ago
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆73 · Updated last year
- Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX … ☆20 · Updated 6 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆37 · Updated 6 years ago
- ConvexPolytopePosioning ☆35 · Updated 5 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness), a robustness metric for deep neural networks ☆61 · Updated 3 years ago
- Max Mahalanobis Training (ICML 2018 + ICLR 2020) ☆90 · Updated 4 years ago
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆92 · Updated 2 years ago
- ☆26 · Updated 6 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆26 · Updated last year
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019) ☆34 · Updated 4 years ago
- Certified Removal from Machine Learning Models ☆67 · Updated 3 years ago
- Salvaging Federated Learning by Local Adaptation ☆56 · Updated 11 months ago
- Official implementation for the paper "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38 · Updated 5 years ago
- ☆13 · Updated 5 years ago
- Code used in "Decision Boundary Analysis of Adversarial Examples" https://openreview.net/forum?id=BkpiPMbA- ☆27 · Updated 6 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44 · Updated 5 years ago
- ☆19 · Updated 2 years ago
- Code for the paper "Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization" (https://arxiv.org/abs/2… ☆23 · Updated 4 years ago
- Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019) ☆58 · Updated 5 years ago