bolunwang / translearn
Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX Security 2018
☆21 · Updated 5 years ago
Related projects:
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆44 · Updated 4 years ago
- Code for the NeurIPS 2019 paper https://arxiv.org/abs/1910.04749 · ☆31 · Updated 4 years ago
- A richly documented PyTorch implementation of the Carlini-Wagner L2 attack · ☆59 · Updated 6 years ago
- Code for Stability Training with Noise (STN) · ☆21 · Updated 3 years ago
- AAAI 2019 oral presentation · ☆49 · Updated last month
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆38 · Updated 5 years ago
- RAB: Provable Robustness Against Backdoor Attacks · ☆39 · Updated 11 months ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) · ☆47 · Updated last year
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) · ☆27 · Updated 5 years ago
- Craft poisoned data using MetaPoison · ☆47 · Updated 3 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness), a robustness metric for deep neural networks · ☆60 · Updated 3 years ago
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) · ☆67 · Updated 6 years ago
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack" · ☆30 · Updated 3 years ago
- Interval attacks (adversarial ML) · ☆21 · Updated 5 years ago
- Repository for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples" · ☆31 · Updated 2 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) · ☆60 · Updated 3 years ago
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) · ☆30 · Updated 6 years ago
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" · ☆52 · Updated 3 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" · ☆22 · Updated 4 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020) · ☆32 · Updated 4 years ago
- Code for the unrestricted adversarial examples paper (NeurIPS 2018) · ☆63 · Updated 5 years ago
- Implementations of 4 adversarial attacks: FGSM, Basic Iterative Method, Projected Gradient Descent (Madry's Attack)… · ☆31 · Updated 5 years ago
- Feature Scattering Adversarial Training (NeurIPS 2019) · ☆71 · Updated 3 months ago
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures" · ☆79 · Updated last year