Hazelsuko07 / InstaHide
InstaHide: Instance-hiding Schemes for Private Distributed Learning
☆50 · Updated 5 years ago
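At a high level, InstaHide (Huang et al., ICML 2020) encodes each private training example by mixing it with other images, mixup-style, and then applying a random pixel-wise sign flip, drawing a fresh encoding every epoch. The NumPy sketch below illustrates that encoding step only. It is not the repository's code: the function name `instahide_encode`, the Dirichlet draw for the mixing coefficients, and the default `k=4` are assumptions made for illustration.

```python
import numpy as np

def instahide_encode(x_private, y_private, public_pool, k=4, rng=None):
    """Illustrative sketch of InstaHide-style encoding (mixup + random sign flip).

    x_private   : (d,) flattened private image, pixel values in [0, 1]
    y_private   : (num_classes,) one-hot label of the private image
    public_pool : (n, d) array of public images to mix with
    k           : total number of images mixed into one encoding

    NOTE: this is not the repository's implementation; names and defaults
    here are assumptions for illustration only.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Random nonnegative mixing coefficients that sum to 1.
    lam = rng.dirichlet(np.ones(k))

    # Mix the private image with k-1 randomly chosen public images.
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    mixed = lam[0] * x_private + (lam[1:, None] * public_pool[idx]).sum(axis=0)

    # Apply a fresh random pixel-wise sign flip (the "instance hiding" step).
    sigma = rng.choice([-1.0, 1.0], size=mixed.shape)
    encoded = sigma * mixed

    # Only private images contribute label mass; public images have no label.
    mixed_label = lam[0] * y_private

    return encoded, mixed_label
```

The encoded images and mixed labels are then used in place of the raw data for ordinary SGD training.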
Alternatives and similar repositories for InstaHide
Users interested in InstaHide are comparing it to the libraries listed below.
- ☆24 · Updated 3 years ago
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆91 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Updated 2 years ago
- [CVPR 2021] Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification? ☆33 · Updated 5 years ago
- ☆27 · Updated 3 years ago
- [ICLR 2022] "Sparsity Winning Twice: Better Robust Generalization from More Efficient Training" by Tianlong Chen*, Zhenyu Zhang*, Pengjun… ☆40 · Updated 3 years ago
- FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19) ☆26 · Updated 2 years ago
- ☆80 · Updated 3 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆49 · Updated 4 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆39 · Updated 6 years ago
- [NeurIPS'20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles ☆55 · Updated 3 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- Understanding and Improving Fast Adversarial Training [NeurIPS 2020] ☆96 · Updated 4 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS '19) ☆33 · Updated 4 years ago
- [NeurIPS 2019] This is the code repo of our novel passport-based DNN ownership verification schemes, i.e. we embed passport layers into va… ☆84 · Updated 2 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) ☆61 · Updated 5 years ago
- Further improve robustness of mixup-trained models in inference (ICLR 2020) ☆60 · Updated 5 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- ☆32 · Updated last year
- ☆15 · Updated 2 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆56 · Updated 6 years ago
- Smooth Adversarial Training ☆68 · Updated 5 years ago
- Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs ☆100 · Updated 4 years ago
- ☆10 · Updated 3 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- ☆19 · Updated 4 years ago
- PyTorch implementation of Parametric Noise Injection for adversarial defense ☆45 · Updated 6 years ago
- The code for "Improved Deep Leakage from Gradients" (iDLG) ☆164 · Updated 4 years ago
- kyleliang919 / Uncovering-the-Connections-BetweenAdversarial-Transferability-and-Knowledge-Transferability: code for the ICML 2021 paper in which we explore the relationship between adversarial transferability and knowledge transferability ☆17 · Updated 3 years ago