microsoft / responsible-ai-toolbox-privacy
A library for statistically estimating the privacy of ML pipelines from membership inference attacks
☆33 · Updated last week
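The library above statistically estimates privacy by running membership inference attacks against an ML pipeline. As a rough illustration of the underlying idea only (this is not the library's API; the threshold and function names here are hypothetical), a minimal loss-threshold membership inference attack can be sketched as:

```python
import math

def nll(p_true: float) -> float:
    """Negative log-likelihood the model assigns to the true label."""
    return -math.log(max(p_true, 1e-12))

def mia_guess(p_true: float, threshold: float = 0.5) -> bool:
    """Guess 'member' when the model's loss on the example is low.

    Overfit models tend to assign lower loss to training members than
    to unseen examples; that gap is the signal this attack exploits.
    The 0.5 threshold is illustrative only.
    """
    return nll(p_true) < threshold

print(mia_guess(0.95))  # confident prediction, low loss -> True (guessed member)
print(mia_guess(0.40))  # less confident, higher loss -> False (guessed non-member)
```

An attack's accuracy advantage over random guessing on held-out member/non-member examples can be converted into a statistical lower bound on the pipeline's effective differential-privacy guarantee, which is roughly the kind of estimate such tooling produces.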
Alternatives and similar repositories for responsible-ai-toolbox-privacy:
Users interested in responsible-ai-toolbox-privacy are comparing it to the libraries listed below.
- Differentially-private transformers using HuggingFace and Opacus ☆132 · Updated 7 months ago
- Python library for implementing Responsible AI mitigations. ☆65 · Updated last year
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆90 · Updated 7 months ago
- Membership Inference Competition ☆31 · Updated last year
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆73 · Updated last year
- ☆22 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of each paper along with relevant code ☆67 · Updated 9 months ago
- ☆24 · Updated last year
- A codebase that makes differentially private training of transformers easy. ☆171 · Updated 2 years ago
- Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024 Spotlight] ☆91 · Updated last month
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆93 · Updated 9 months ago
- Official GitHub page for the paper "Evaluating Deep Unlearning in Large Language Model" ☆14 · Updated last month
- Python package for measuring memorization in LLMs. ☆146 · Updated 4 months ago
- ☆9 · Updated 4 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆39 · Updated 9 months ago
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆18 · Updated 2 years ago
- Code for the paper "Spinning Language Models: Risks of Propaganda-as-a-Service and Countermeasures" ☆21 · Updated 2 years ago
- A curated list of trustworthy Generative AI papers. Updated daily. ☆71 · Updated 6 months ago
- [NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*… ☆30 · Updated 3 years ago
- Proof-of-concept code for poisoning code generation models. ☆45 · Updated last year
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- ☆31 · Updated 6 months ago
- [ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text ☆34 · Updated 2 months ago
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆125 · Updated 11 months ago
- ☆11 · Updated 2 years ago
- ☆9 · Updated 3 years ago
- ☆142 · Updated 5 months ago
- ☆23 · Updated last year