stratosphereips / awesome-ml-privacy-attacks
An awesome list of papers on privacy attacks against machine learning
☆585 · Updated 11 months ago
Alternatives and similar repositories for awesome-ml-privacy-attacks:
Users interested in awesome-ml-privacy-attacks are comparing it to the repositories listed below.
- ☆318 · Updated 2 months ago
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms. ☆624 · Updated 2 months ago
- A library for running membership inference attacks against ML models ☆142 · Updated 2 years ago
- autodp: A flexible and easy-to-use package for differential privacy ☆271 · Updated last year
- Backdoors Framework for Deep Learning and Federated Learning. A lightweight tool to conduct your research on backdoors. ☆353 · Updated 2 years ago
- ☆142 · Updated 4 months ago
- ☆180 · Updated last year
- Algorithms to recover input data from their gradient signal through a neural network ☆283 · Updated last year
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆125 · Updated 10 months ago
- Code for Membership Inference Attack against Machine Learning Models (in Oakland 2017; a minimal sketch of this attack class follows the list) ☆194 · Updated 7 years ago
- Breaching privacy in federated learning scenarios for vision and text ☆280 · Updated 10 months ago
- A curated list of machine learning security & privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security…) ☆244 · Updated 3 months ago
- Privacy Testing for Deep Learning ☆198 · Updated last year
- [NeurIPS 2019] Deep Leakage From Gradients ☆428 · Updated 2 years ago
- A codebase that makes differentially private training of transformers easy. ☆170 · Updated 2 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆157 · Updated last month
- Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667) ☆382 · Updated last month
- Differentially Private Optimization for PyTorch 👁🙅♀️ ☆184 · Updated 4 years ago
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆278 · Updated 5 years ago
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆102 · Updated 5 years ago
- Differentially private machine learning ☆190 · Updated 3 years ago
- List of differential-privacy related resources ☆307 · Updated last month
- A curated list of academic events on AI Security & Privacy ☆146 · Updated 6 months ago
- A Python library for Secure and Explainable Machine Learning ☆172 · Updated last month
- Source code for the paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) ☆289 · Updated 7 months ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆58 · Updated 5 years ago
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆208 · Updated 2 months ago
- Resources about federated learning and privacy in machine learning ☆531 · Updated 8 months ago
- Membership Inference, Attribute Inference and Model Inversion attacks implemented using PyTorch ☆58 · Updated 4 months ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆82 · Updated 3 years ago
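
Several of the entries above (the Oakland 2017 code, the systematic-evaluation benchmark, ML-Leaks, and the PyTorch attack implementations) revolve around the same core membership inference idea: an overfit model behaves measurably differently on its training data than on unseen data. The sketch below is a minimal, hypothetical illustration of a confidence-thresholding attack; the synthetic dataset, the RandomForestClassifier target, and the 0.9 threshold are illustrative assumptions and do not come from any repository listed here.

```python
# Hypothetical sketch of a confidence-thresholding membership inference attack.
# The attacker guesses "member" whenever the target model's confidence in its
# predicted class exceeds a threshold, exploiting train/test confidence gaps.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data: half becomes the target's training set ("members"),
# half stays unseen ("non-members").
X, y = make_classification(n_samples=4000, n_features=20, n_informative=5, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# A flexible target model that overfits enough to leak membership signal.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_mem, y_mem)

def infer_membership(model, samples, threshold=0.9):
    """Flag a sample as a training member if the top-class confidence is high."""
    confidence = model.predict_proba(samples).max(axis=1)
    return confidence > threshold

members_flagged = infer_membership(target, X_mem)        # mostly True if overfit
non_members_flagged = infer_membership(target, X_non)    # more often False

# Balanced attack accuracy: 0.5 means the attack is no better than guessing.
attack_accuracy = 0.5 * (members_flagged.mean() + (1.0 - non_members_flagged.mean()))
print(f"attack accuracy (0.5 = random guessing): {attack_accuracy:.2f}")
```

In practice, the libraries listed above generally replace this fixed global threshold with shadow models, per-class or per-example calibration, or likelihood-ratio style tests, which is what makes their evaluations far more informative than this toy example.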