Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019)
☆33 · updated May 18, 2021
Alternatives and similar repositories for differential-privacy-vs-fairness
Users interested in differential-privacy-vs-fairness are comparing it to the repositories listed below.
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆32 · updated Apr 25, 2022
- Code for auditing DP-SGD ☆37 · updated Feb 15, 2022
- ☆10 · updated Jun 1, 2022
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) ☆49 · updated Dec 17, 2019
- SAP Security Research sample code to reproduce the research done in our paper "Comparing local and central differential privacy using mem… ☆18 · updated May 7, 2024
- ☆17 · updated Aug 13, 2020
- Code for the paper "Label-Only Membership Inference Attacks" ☆68 · updated Sep 11, 2021
- Stores paper references, exports to bib/HTML, and does basic sanity checking on bib entries ☆40 · updated Jul 10, 2025
- ☆14 · updated Feb 24, 2020
- Universal Adversarial Networks ☆32 · updated Jul 30, 2018
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · updated Oct 3, 2023
- Code for the paper "Machine Unlearning of Features and Labels" ☆71 · updated Feb 13, 2024
- Learning-rate adaptation for differentially private stochastic gradient descent ☆17 · updated Apr 23, 2021
- A technique that modifies image data so that any model trained on it bears an identifiable mark ☆44 · updated Aug 13, 2021
- ☆23 · updated Dec 15, 2022
- ☆22 · updated Sep 17, 2024
- ☆50 · updated Feb 27, 2021
- ☆19 · updated Mar 6, 2023
- ☆27 · updated Oct 17, 2022
- Model watermarking; website & documentation: https://sbaresearch.github.io/model-watermarking/ ☆25 · updated Sep 22, 2023
- Code for the paper "Firewalls to Secure Dynamic LLM Agentic Networks" ☆27 · updated Jun 6, 2025
- Evaluates the privacy leakage of differentially private machine learning models ☆136 · updated Dec 8, 2022
- Code for computing tight guarantees for differential privacy ☆23 · updated Mar 3, 2023
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆29 · updated Jul 29, 2024
- Implementation of calibration bounds for differential privacy in the shuffle model ☆21 · updated Nov 10, 2020
- Code for "Membership Inference Attacks Against Machine Learning Models" (IEEE S&P/Oakland 2017) ☆199 · updated Nov 15, 2017
- ☆26 · updated Jan 25, 2019
- Simplicial-FL, for managing client-device heterogeneity in federated learning ☆22 · updated Aug 3, 2023
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System ☆32 · updated Nov 5, 2024
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods ☆31 · updated May 10, 2024
- Related material on federated learning ☆26 · updated Apr 9, 2020
- A Sybil-resilient distributed learning protocol ☆112 · updated Sep 9, 2025
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) ☆28 · updated Oct 31, 2022
- A comprehensive and versatile open-source federated learning framework ☆33 · updated Apr 3, 2023
- Training PyTorch models with differential privacy ☆1,908 · updated Feb 26, 2026
- Code repository for "Towards a Proactive ML Approach for Detecting Backdoor Poison Samples" (USENIX Security 2023) ☆30 · updated Jul 11, 2023
- Dataset Inference for Ownership Resolution in Machine Learning (ICLR 2021) ☆32 · updated Oct 10, 2022
- Library for training machine learning models with privacy for training data ☆1,999 · updated Jan 27, 2026
- An implementation of the BGV FHE scheme ☆28 · updated Apr 26, 2018
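Many of the repositories above revolve around DP-SGD, the mechanism studied in the headline paper: clip each per-example gradient to a fixed norm, aggregate, and add Gaussian noise calibrated to that norm. As background, here is a minimal plain-Python sketch of one aggregation step; the function name and parameter values are illustrative, not taken from any listed repo.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD aggregation step (illustrative sketch):
    clip each per-example gradient to clip_norm, sum them,
    add Gaussian noise with std noise_multiplier * clip_norm,
    and return the batch-averaged noisy gradient."""
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # scale down gradients whose L2 norm exceeds clip_norm
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm
    noisy = [t + random.gauss(0.0, sigma) for t in total]
    n = len(per_example_grads)
    return [x / n for x in noisy]

# Hypothetical per-example gradients: one large (norm 5.0), one small.
grads = [[3.0, 4.0], [0.1, -0.2]]
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1)
```

The clipping bounds each example's influence on the sum, which is what makes the added Gaussian noise yield a differential-privacy guarantee; production implementations (e.g. Opacus or TensorFlow Privacy, both listed above) also track the cumulative privacy budget across steps.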