mmalekzadeh / honest-but-curious-nets
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS'21)
☆17 · Updated last year
Related projects:
- ☆42 · Updated last year
- ☆22 · Updated last year
- Universal Robustness Evaluation Toolkit (for Evasion) · ☆29 · Updated 6 months ago
- Implementation of our ICLR 2021 paper: Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples · ☆11 · Updated 3 years ago
- ☆60 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- ☆12 · Updated 2 years ago
- AI-secure / Uncovering-the-Connections-BetweenAdversarial-Transferability-and-Knowledge-Transferability: code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability · ☆17 · Updated last year
- Defending Against Backdoor Attacks Using Robust Covariance Estimation · ☆20 · Updated 3 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆24 · Updated 2 years ago
- Code for the paper "Adversarial Vulnerability of Randomized Ensembles" (ICML 2022) · ☆10 · Updated 2 years ago
- ☆82 · Updated 10 months ago
- Morphence: An implementation of a moving target defense against adversarial example attacks demonstrated for image classification models … · ☆15 · Updated last month
- ☆11 · Updated 2 years ago
- ☆23 · Updated last year
- Code for ML Doctor · ☆84 · Updated last month
- ☆31 · Updated 2 weeks ago
- ☆9 · Updated 3 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning · ☆30 · Updated last year
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples · ☆17 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- ☆44 · Updated 3 years ago
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks · ☆41 · Updated 2 years ago
- [NeurIPS 2021] Source code for the paper "Qu-ANTI-zation: Exploiting Neural Network Quantization for Achieving Adversarial Outcomes" · ☆12 · Updated 2 years ago
- ☆25 · Updated last year
- ☆20 · Updated last year
- Defending against Model Stealing via Verifying Embedded External Features · ☆31 · Updated 2 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" · ☆22 · Updated 4 years ago
- ☆23 · Updated 2 years ago
- ☆25 · Updated 5 years ago