Shadow Attack, LiRA, Quantile Regression and RMIA implementations in PyTorch (Online version)
☆14 · Nov 8, 2024 · Updated last year
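The repositories below all implement variants of membership inference. As a rough, self-contained illustration of the common core (not taken from any listed repo), here is a minimal loss-threshold attack in the style of Yeom et al., run on synthetic per-example losses; the distributions and the mean-loss threshold rule are illustrative assumptions:

```python
import numpy as np

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Predict 'member' when a point's loss falls below the threshold.
    Returns attack accuracy on a balanced member/non-member evaluation set."""
    correct_members = (member_losses < threshold).sum()
    correct_nonmembers = (nonmember_losses >= threshold).sum()
    return (correct_members + correct_nonmembers) / (
        len(member_losses) + len(nonmember_losses)
    )

rng = np.random.default_rng(0)
# Synthetic per-example cross-entropy losses: training-set members tend to
# have lower loss than held-out non-members (the signal every MIA exploits).
member_losses = rng.gamma(shape=2.0, scale=0.2, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.6, size=1000)

# Calibrate the threshold as the mean training loss (Yeom et al.'s rule).
threshold = member_losses.mean()
acc = loss_threshold_attack(member_losses, nonmember_losses, threshold)
print(f"attack accuracy: {acc:.2f}")  # noticeably above the 0.5 random baseline
```

Stronger attacks in the list (LiRA, RMIA, quantile regression) refine this idea by calibrating a per-example threshold with shadow or reference models rather than using one global cutoff.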
Alternatives and similar repositories for mia_attacks
Users interested in mia_attacks are comparing it to the libraries listed below.
- An unofficial PyTorch implementation of "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on ML Models" ☆11 · Dec 23, 2023 · Updated 2 years ago
- ☆15 · Apr 4, 2024 · Updated 2 years ago
- ☆25 · Nov 14, 2022 · Updated 3 years ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" ☆16 · Dec 1, 2021 · Updated 4 years ago
- FederBoost's federated gradient boosting decision tree algorithm, with federated membership inference enabled ☆16 · Dec 13, 2023 · Updated 2 years ago
- 🔒 Implementation of Shokri et al. (2016), "Membership Inference Attacks against Machine Learning Models" ☆34 · Aug 29, 2022 · Updated 3 years ago
- Likelihood Ratio Attack (LiRA) in PyTorch ☆17 · Mar 3, 2025 · Updated last year
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Aug 18, 2022 · Updated 3 years ago
- Security evaluation module with ONNX, PyTorch, and SecML ☆13 · Apr 9, 2022 · Updated 4 years ago
- Membership Inference Attack on Federated Learning ☆12 · Jan 14, 2022 · Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆132 · Apr 9, 2024 · Updated 2 years ago
- ☆32 · Sep 2, 2024 · Updated last year
- ☆16 · Oct 1, 2025 · Updated 6 months ago
- ☆17 · Jan 26, 2025 · Updated last year
- ☆13 · Apr 12, 2022 · Updated 4 years ago
- Processed datasets that we have used in our research ☆15 · Apr 30, 2020 · Updated 5 years ago
- Source code for the paper "Energy-Latency Attacks via Sponge Poisoning" ☆14 · Mar 14, 2022 · Updated 4 years ago
- ☆25 · Jan 20, 2019 · Updated 7 years ago
- The PackNet continual learning method in PyTorch ☆15 · Aug 19, 2021 · Updated 4 years ago
- Public implementation of the ICML'19 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference" ☆18 · May 28, 2020 · Updated 5 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Jul 12, 2022 · Updated 3 years ago
- ☆46 · Nov 10, 2019 · Updated 6 years ago
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Sep 28, 2025 · Updated 6 months ago
- Attack benchmark repository ☆23 · Nov 25, 2025 · Updated 4 months ago
- Compiler for BitML ☆27 · Mar 10, 2022 · Updated 4 years ago
- Campus app of Ruhr-University Bochum ☆24 · Mar 30, 2026 · Updated last week
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) ☆21 · Oct 8, 2024 · Updated last year
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · May 18, 2024 · Updated last year
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Sep 6, 2023 · Updated 2 years ago
- Code for "Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders" at ICML 2024 ☆10 · Sep 18, 2025 · Updated 6 months ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Oct 14, 2024 · Updated last year
- Real-time visualization of sentiment analysis on text input ☆26 · May 20, 2025 · Updated 10 months ago
- Implementation of the paper "Membership Inference Attacks Against Machine Learning Models", Shokri et al. ☆59 · May 12, 2019 · Updated 6 years ago
- Implementation for "Understanding Robust Overfitting of Adversarial Training and Beyond" (ICML'22) ☆13 · Jul 1, 2022 · Updated 3 years ago
- ☆10 · Jun 2, 2021 · Updated 4 years ago
- ☆13 · Mar 14, 2022 · Updated 4 years ago
- User handbook for mist-v2 ☆27 · Dec 16, 2023 · Updated 2 years ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆83 · Nov 22, 2021 · Updated 4 years ago
- GitHub Actions: run code with EasyConnect VPN! ☆18 · Jul 18, 2021 · Updated 4 years ago