yziqi / adversarial-model-inversion
Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019)
☆48 · Dec 17, 2019 · Updated 6 years ago
Alternatives and similar repositories for adversarial-model-inversion
Users interested in adversarial-model-inversion are comparing it to the libraries listed below.
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 · ☆22 · Dec 10, 2021 · Updated 4 years ago
- ☆27 · Sep 23, 2022 · Updated 3 years ago
- ☆11 · Nov 10, 2020 · Updated 5 years ago
- Implementation of the Model Inversion Attack introduced with Model Inversion Attacks that Exploit Confidence Information and Basic Counte… · ☆85 · Feb 26, 2023 · Updated 2 years ago
- ☆45 · Sep 24, 2023 · Updated 2 years ago
- Processed datasets that we have used in our research · ☆14 · Apr 30, 2020 · Updated 5 years ago
- A PyTorch implementation of "Data-Free Learning of Student Networks" (ICCV 2019) · ☆18 · Oct 8, 2019 · Updated 6 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019) · ☆33 · May 18, 2021 · Updated 4 years ago
- Code for the paper "Label-Only Membership Inference Attacks" · ☆68 · Sep 11, 2021 · Updated 4 years ago
- ☆22 · Aug 15, 2022 · Updated 3 years ago
- Code for the NDSS 2022 paper "MIRROR: Model Inversion for Deep Learning Network with High Fidelity" · ☆27 · May 9, 2023 · Updated 2 years ago
- ☆25 · Jan 20, 2019 · Updated 7 years ago
- ☆45 · Nov 10, 2019 · Updated 6 years ago
- ☆12 · Mar 25, 2020 · Updated 5 years ago
- [NeurIPS 2019] Deep Leakage from Gradients · ☆474 · Apr 17, 2022 · Updated 3 years ago
- A repo to download and preprocess the Purchase100 dataset extracted from the Kaggle "Acquire Valued Shoppers Challenge" · ☆12 · Jun 21, 2021 · Updated 4 years ago
- Code for "Machine Learning Models that Remember Too Much" (CCS 2017) · ☆31 · Oct 15, 2017 · Updated 8 years ago
- Research into model inversion on SplitNN · ☆18 · Feb 20, 2024 · Updated last year
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆85 · Nov 22, 2021 · Updated 4 years ago
- PyTorch implementation of Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance · ☆34 · Oct 11, 2024 · Updated last year
- Breaking Certifiable Defenses · ☆17 · Nov 22, 2022 · Updated 3 years ago
- Universal Adversarial Networks · ☆32 · Jul 30, 2018 · Updated 7 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR 2020) · ☆33 · Nov 4, 2020 · Updated 5 years ago
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) · ☆17 · Nov 11, 2020 · Updated 5 years ago
- The code for our Updates-Leak paper · ☆17 · Jul 23, 2020 · Updated 5 years ago
- Public implementation of the ICML 2019 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" · ☆18 · May 28, 2020 · Updated 5 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆39 · Jan 28, 2019 · Updated 7 years ago
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations · ☆20 · Dec 27, 2020 · Updated 5 years ago
- A TensorFlow implementation of punctuation restoration · ☆18 · Nov 9, 2020 · Updated 5 years ago
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C… · ☆46 · Jul 18, 2025 · Updated 6 months ago
- ☆19 · Mar 6, 2023 · Updated 2 years ago
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning · ☆24 · Jun 14, 2024 · Updated last year
- Task-agnostic universal black-box attacks on computer-vision neural networks via procedural noise (CCS 2019) · ☆56 · Dec 21, 2020 · Updated 5 years ago
- Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX … · ☆19 · Nov 28, 2018 · Updated 7 years ago
- Input-Aware Dynamic Backdoor Attack (NeurIPS 2020) · ☆37 · Jul 22, 2024 · Updated last year
- This is the official implementation of our paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… · ☆58 · Mar 20, 2024 · Updated last year
- Code used in "Exploring the Space of Black-box Attacks on Deep Neural Networks" (https://arxiv.org/abs/1712.09491) · ☆61 · Feb 25, 2018 · Updated 7 years ago
- Script to download and annotate images from the VGG Faces dataset · ☆26 · Nov 4, 2021 · Updated 4 years ago
- Official repository for ResSFL (accepted at CVPR 2022) · ☆26 · Jun 24, 2022 · Updated 3 years ago