nv-tlabs / DPDM
Differentially Private Diffusion Models
☆92 · Updated last year
Alternatives and similar repositories for DPDM:
Users interested in DPDM are comparing it to the repositories listed below.
- Official PyTorch repo of CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models. ☆105 · Updated last year
- ☆58 · Updated last year
- Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024 Spotlight] ☆89 · Updated last week
- ☆33 · Updated last year
- Official implementation of "GS-WGAN: A Gradient-Sanitized Approach for Learning Differentially Private Generators" (NeurIPS 2020) ☆69 · Updated 2 years ago
- [NeurIPS'23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆68 · Updated 11 months ago
- Official implementation of the paper 'Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti… ☆53 · Updated 11 months ago
- ☆80 · Updated 2 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning" published at EuroS&P'22 ☆22 · Updated 2 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago
- ☆40 · Updated last year
- ☆23 · Updated last year
- A codebase that makes differentially private training of transformers easy. ☆170 · Updated 2 years ago
- Differentially private transformers using HuggingFace and Opacus ☆132 · Updated 6 months ago
- ☆10 · Updated 2 years ago
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆86 · Updated 5 months ago
- ☆65 · Updated last year
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆38 · Updated last year
- ☆44 · Updated 6 months ago
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? ☆33 · Updated 6 months ago
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C… ☆40 · Updated 6 months ago
- [ICCV 2023] Source code for our paper "Rickrolling the Artist: Injecting Invisible Backdoors into Text-Guided Image Generation Models". ☆56 · Updated last year
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Cho… ☆64 · Updated 3 months ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆92 · Updated 8 months ago
- ☆40 · Updated 3 years ago
- ☆58 · Updated 2 years ago
- This repo implements several algorithms for learning with differential privacy. ☆105 · Updated 2 years ago
- [ICML 2024] Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models ☆22 · Updated 5 months ago
- [ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text ☆32 · Updated last month