microsoft / DPSDA
Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024 Spotlight]
☆109 · Updated 3 weeks ago
Alternatives and similar repositories for DPSDA
Users interested in DPSDA are comparing it to the libraries listed below.
- Differentially-private transformers using HuggingFace and Opacus ☆145 · Updated last year
- Fast, memory-efficient, scalable optimization of deep learning with differential privacy ☆135 · Updated 3 months ago
- [ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text ☆51 · Updated 10 months ago
- A codebase that makes differentially private training of transformers easy. ☆181 · Updated 2 years ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆134 · Updated this week
- ☆15 · Updated 2 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆75 · Updated last year
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- Differentially Private Diffusion Models ☆104 · Updated last year
- ☆49 · Updated last year
- ☆77 · Updated 3 years ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆16 · Updated 2 years ago
- Code to reproduce experiments in "Antipodes of Label Differential Privacy: PATE and ALIBI" ☆32 · Updated 3 years ago
- The repository contains the code for analysing the leakage of personally identifiable (PII) information from the output of next word pred… ☆101 · Updated last year
- [CCS 2025] DPImageBench is an open-source toolkit developed to facilitate the research and application of DP image synthesis. ☆25 · Updated last month
- ☆10 · Updated 3 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of the corresponding paper along with relevant code ☆68 · Updated last year
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 3 years ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆42 · Updated last year
- DP-FTRL from "Practical and Private (Deep) Learning without Sampling or Shuffling" for centralized training. ☆34 · Updated 6 months ago
- ☆29 · Updated 2 years ago
- ☆196 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- Code for Auditing DP-SGD ☆37 · Updated 3 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆23 · Updated 3 years ago
- Official implementation of "When Machine Unlearning Jeopardizes Privacy" (ACM CCS 2021) ☆50 · Updated 3 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- This repo implements several algorithms for learning with differential privacy. ☆111 · Updated 2 years ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆81 · Updated last year
- [NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*… ☆31 · Updated 4 years ago