Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019)
☆49 · Updated Dec 17, 2019
Alternatives and similar repositories for adversarial-model-inversion
Users interested in adversarial-model-inversion compare it to the repositories listed below.
- Code for "Variational Model Inversion Attacks" (Wang et al., NeurIPS 2021) · ☆22 · Updated Dec 10, 2021
- ☆27 · Updated Sep 23, 2022
- ☆46 · Updated Sep 24, 2023
- An awesome list of papers on privacy attacks against machine learning · ☆634 · Updated Mar 18, 2024
- Dataset of 475,000 faces from 530 identities (50x50 color) from FaceScrub · ☆19 · Updated Aug 30, 2019
- A PyTorch implementation of "Data-Free Learning of Student Networks" (ICCV 2019) · ☆18 · Updated Oct 8, 2019
- Code for the paper "Label-Only Membership Inference Attacks" · ☆68 · Updated Sep 11, 2021
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack" (IEEE TDSC 2023) · ☆10 · Updated Nov 27, 2023
- ☆22 · Updated Aug 15, 2022
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples · ☆46 · Updated Nov 25, 2019
- CCS 2023 | Explainable malware and vulnerability detection with XAI, from the paper "FINER: Enhancing State-of-the-art Classifiers with Feature …" · ☆11 · Updated Aug 20, 2024
- ☆25 · Updated Jan 20, 2019
- Code for the NDSS 2022 paper "MIRROR: Model Inversion for Deep Learning Network with High Fidelity" · ☆27 · Updated May 9, 2023
- ☆45 · Updated Nov 10, 2019
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … · ☆103 · Updated Nov 2, 2019
- Reconstructing image content based on the paper "Understanding Deep Image Representations by Inverting Them" · ☆11 · Updated Jul 30, 2018
- ☆12 · Updated Mar 25, 2020
- [NeurIPS 2019] Deep Leakage From Gradients · ☆475 · Updated Apr 17, 2022
- A repo to download and preprocess the Purchase100 dataset extracted from the Kaggle "Acquire Valued Shoppers Challenge" · ☆12 · Updated Jun 21, 2021
- Research into model inversion on SplitNN · ☆18 · Updated Feb 20, 2024
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) · ☆56 · Updated May 28, 2019
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" · ☆85 · Updated Nov 22, 2021
- PyTorch implementation of "Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance" · ☆34 · Updated Oct 11, 2024
- Breaking Certifiable Defenses · ☆17 · Updated Nov 22, 2022
- Universal Adversarial Networks · ☆32 · Updated Jul 30, 2018
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR 2020) · ☆33 · Updated Nov 4, 2020
- Public implementation of the ICML 2019 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" · ☆18 · Updated May 28, 2020
- The code for our Updates-Leak paper · ☆17 · Updated Jul 23, 2020
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NeurIPS 2020) · ☆17 · Updated Nov 11, 2020
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆39 · Updated Jan 28, 2019
- [CCS 2022] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆18 · Updated Jul 12, 2022
- A TensorFlow implementation of punctuation restoration · ☆18 · Updated Nov 9, 2020
- [ICML 2022 / ICLR 2024] Source code for the papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C…" · ☆46 · Updated Jul 18, 2025
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) · ☆20 · Updated Oct 8, 2024
- FLTracer: Accurate Poisoning Attack Provenance in Federated Learning · ☆24 · Updated Jun 14, 2024
- ☆19 · Updated Mar 6, 2023
- Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS 2019) · ☆56 · Updated Dec 21, 2020
- Code for "Membership Inference Attacks against Machine Learning Models" (Oakland 2017) · ☆199 · Updated Nov 15, 2017
- Code implementation of the paper "With Great Training Comes Great Vulnerability: Practical Attacks against Transfer Learning", at USENIX … · ☆19 · Updated Nov 28, 2018