ganeshdg95 / Leveraging-Adversarial-Examples-to-Quantify-Membership-Information-Leakage (View on GitHub)
☆19 · Mar 6, 2023 · Updated 3 years ago
Alternatives and similar repositories for Leveraging-Adversarial-Examples-to-Quantify-Membership-Information-Leakage
Users that are interested in Leveraging-Adversarial-Examples-to-Quantify-Membership-Information-Leakage are comparing it to the libraries listed below.
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Sep 6, 2023 · Updated 2 years ago
- Code for Auditing Data Provenance in Text-Generation Models (KDD 2019) · ☆10 · Jun 18, 2019 · Updated 6 years ago
- ☆25 · Jan 20, 2019 · Updated 7 years ago
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" · ☆11 · Apr 5, 2021 · Updated 5 years ago
- Code for the paper "Label-Only Membership Inference Attacks" · ☆67 · Sep 11, 2021 · Updated 4 years ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models · ☆132 · Apr 9, 2024 · Updated 2 years ago
- ☆23 · Aug 15, 2022 · Updated 3 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" · ☆37 · Jan 28, 2019 · Updated 7 years ago
- Public implementation of the paper "On the Importance of Difficulty Calibration in Membership Inference Attacks" · ☆16 · Dec 1, 2021 · Updated 4 years ago
- Code for the Updates-Leak paper · ☆17 · Jul 23, 2020 · Updated 5 years ago
- ☆13 · Sep 26, 2024 · Updated last year
- Python package to create adversarial agents for membership inference attacks against machine learning models · ☆46 · Feb 12, 2019 · Updated 7 years ago
- ☆46 · Nov 10, 2019 · Updated 6 years ago
- ☆10 · Jun 5, 2021 · Updated 4 years ago
- Code for Auditing DPSGD · ☆39 · Feb 15, 2022 · Updated 4 years ago
- Kaggle Heritage Health Prize Challenge · ☆19 · Dec 5, 2023 · Updated 2 years ago
- Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs (ACM CCS '21) · ☆17 · Jan 11, 2023 · Updated 3 years ago
- ☆25 · Nov 14, 2022 · Updated 3 years ago
- ☆373 · Apr 8, 2026 · Updated last week
- Official code for the FAccT '21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… · ☆13 · Mar 9, 2021 · Updated 5 years ago
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms · ☆707 · Apr 26, 2025 · Updated 11 months ago
- A repo to download and preprocess the Purchase100 dataset extracted from the Kaggle Acquire Valued Shoppers Challenge · ☆12 · Jun 21, 2021 · Updated 4 years ago
- ☆12 · Jun 8, 2021 · Updated 4 years ago
- Causal Reasoning for Membership Inference Attacks · ☆11 · Oct 21, 2022 · Updated 3 years ago
- ☆32 · Sep 2, 2024 · Updated last year
- Code for ML Doctor · ☆91 · Aug 14, 2024 · Updated last year
- This project's goal is to evaluate the privacy leakage of differentially private machine learning models · ☆135 · Dec 8, 2022 · Updated 3 years ago
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (Oakland 2019) · ☆56 · May 28, 2019 · Updated 6 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) · ☆48 · Aug 18, 2022 · Updated 3 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" · ☆57 · May 4, 2023 · Updated 2 years ago
- ☆15 · Apr 4, 2024 · Updated 2 years ago
- https://icml.cc/virtual/2023/poster/24354 · ☆10 · Aug 15, 2023 · Updated 2 years ago
- ☆16 · Apr 16, 2019 · Updated 7 years ago
- Modular framework for property inference attacks on deep neural networks · ☆19 · Jun 8, 2023 · Updated 2 years ago
- Craft poisoned data using MetaPoison · ☆54 · Apr 5, 2021 · Updated 5 years ago
- ☆12 · Jan 28, 2023 · Updated 3 years ago
- ☆14 · May 8, 2024 · Updated last year
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations · ☆20 · Dec 27, 2020 · Updated 5 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks · ☆20 · Sep 18, 2025 · Updated 7 months ago