ebagdasa / differential-privacy-vs-fairness
Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19
☆33 · Updated May 18, 2021
Alternatives and similar repositories for differential-privacy-vs-fairness
Users interested in differential-privacy-vs-fairness are comparing it to the libraries listed below.
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" · ☆32 · Updated Apr 25, 2022
- Code for Auditing DPSGD · ☆37 · Updated Feb 15, 2022
- ☆10 · Updated Jun 1, 2022
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) · ☆48 · Updated Dec 17, 2019
- ☆17 · Updated Dec 13, 2019
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" · ☆13 · Updated Aug 22, 2022
- SAP Security Research sample code to reproduce the research done in our paper "Comparing local and central differential privacy using mem… · ☆18 · Updated May 7, 2024
- ☆17 · Updated Aug 13, 2020
- ☆15 · Updated Dec 9, 2020
- Code for the paper: Label-Only Membership Inference Attacks · ☆68 · Updated Sep 11, 2021
- Stores paper references, outputs to bib/html, does basic sanity checking on bib entries · ☆40 · Updated Jul 10, 2025
- autodp: A flexible and easy-to-use package for differential privacy · ☆278 · Updated Dec 5, 2023
- Universal Adversarial Networks · ☆32 · Updated Jul 30, 2018
- ☆14 · Updated Feb 24, 2020
- Code related to the paper "Machine Unlearning of Features and Labels" · ☆71 · Updated Feb 13, 2024
- Learning rate adaptation for differentially private stochastic gradient descent · ☆17 · Updated Apr 23, 2021
- [ICLR 2025] Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization · ☆25 · Updated Jan 27, 2026
- Source code for paper "How to Backdoor Federated Learning" (https://arxiv.org/abs/1807.00459) · ☆313 · Updated Jul 25, 2024
- This technique modifies image data so that any model trained on it will bear an identifiable mark. · ☆44 · Updated Aug 13, 2021
- ☆23 · Updated Dec 15, 2022
- ☆22 · Updated Sep 17, 2024
- Code for the paper "Bayesian Differential Privacy for Machine Learning" · ☆23 · Updated Aug 12, 2020
- ☆19 · Updated Mar 6, 2023
- ☆52 · Updated May 2, 2021
- Website & Documentation: https://sbaresearch.github.io/model-watermarking/ · ☆25 · Updated Sep 22, 2023
- Code for the paper "Firewalls to Secure Dynamic LLM Agentic Networks" · ☆27 · Updated Jun 6, 2025
- Data and code related to the report "Truth, Lies, and Automation: How Language Models Could Change Disinformation" · ☆28 · Updated May 18, 2021
- Code for computing tight guarantees for differential privacy · ☆23 · Updated Mar 3, 2023
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins · ☆29 · Updated Jul 29, 2024
- Implementation of calibration bounds for differential privacy in the shuffle model · ☆21 · Updated Nov 10, 2020
- Code for Membership Inference Attack against Machine Learning Models (in Oakland 2017) · ☆199 · Updated Nov 15, 2017
- A unified benchmark problem for data poisoning attacks · ☆161 · Updated Oct 4, 2023
- A sybil-resilient distributed learning protocol. · ☆110 · Updated Sep 9, 2025
- Simplicial-FL to manage client device heterogeneity in Federated Learning · ☆22 · Updated Aug 3, 2023
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods. · ☆31 · Updated May 10, 2024
- ☆26 · Updated Jan 25, 2019
- AdvDoor: Adversarial Backdoor Attack of Deep Learning System · ☆32 · Updated Nov 5, 2024
- A Comprehensive and Versatile Open-Source Federated Learning Framework · ☆33 · Updated Apr 3, 2023
- Training PyTorch models with differential privacy · ☆1,903 · Updated Nov 12, 2025