microsoft / responsible-ai-toolbox-privacy
A library for statistically estimating the privacy of ML pipelines via membership inference attacks
☆34 · Updated last week
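The privacy risk this library estimates stems from membership inference. A minimal sketch of the classic loss-threshold attack (Yeom et al., CSF 2018, also listed below), using synthetic losses; the function name and numbers are illustrative, not part of this library's API:

```python
import random

def loss_threshold_attack(member_losses, nonmember_losses, threshold):
    """Loss-threshold membership inference: predict 'member' when the
    model's loss on an example falls below a fixed threshold."""
    tp = sum(l < threshold for l in member_losses)      # members correctly flagged
    tn = sum(l >= threshold for l in nonmember_losses)  # non-members correctly rejected
    total = len(member_losses) + len(nonmember_losses)
    return (tp + tn) / total

random.seed(0)
# Synthetic losses: an overfit model tends to have lower loss on its training members.
members = [random.gauss(0.2, 0.1) for _ in range(1000)]
nonmembers = [random.gauss(0.8, 0.3) for _ in range(1000)]

accuracy = loss_threshold_attack(members, nonmembers, threshold=0.5)
print(f"attack accuracy: {accuracy:.2f}")
```

An attack accuracy well above 0.5 (random guessing) on such losses signals membership leakage, which is the quantity tools like this library turn into statistical privacy estimates.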
Alternatives and similar repositories for responsible-ai-toolbox-privacy:
Users interested in responsible-ai-toolbox-privacy are comparing it to the libraries listed below.
- Membership Inference Competition ☆31 · Updated last year
- Differentially-private transformers using HuggingFace and Opacus ☆132 · Updated 5 months ago
- The repository contains code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆87 · Updated 6 months ago
- Algorithms for Privacy-Preserving Machine Learning in JAX ☆93 · Updated 7 months ago
- A codebase that makes differentially private training of transformers easy ☆168 · Updated 2 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy ☆73 · Updated last year
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS '19) ☆34 · Updated 3 years ago
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆18 · Updated 2 years ago
- Private Evolution: Generating DP Synthetic Data without Training [ICLR 2024, ICML 2024 Spotlight] ☆87 · Updated this week
- ☆71 · Updated 2 years ago
- Python library for implementing Responsible AI mitigations ☆65 · Updated last year
- ☆22 · Updated 2 years ago
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- ☆9 · Updated 4 years ago
- A survey of privacy problems in Large Language Models (LLMs); contains a summary of each corresponding paper along with relevant code ☆65 · Updated 8 months ago
- ☆31 · Updated 5 months ago
- This repo implements several algorithms for learning with differential privacy ☆104 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- ☆141 · Updated 4 months ago
- Research simulation toolkit for federated learning ☆12 · Updated 4 years ago
- ☆23 · Updated last year
- ☆80 · Updated 2 years ago
- ☆23 · Updated last year
- ☆18 · Updated 3 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Updated 6 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- Fast, memory-efficient, scalable optimization of deep learning with differential privacy ☆112 · Updated last month
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆38 · Updated 6 years ago
- ☆23 · Updated last year
- SDNist: Benchmark data and evaluation tools for data synthesizers ☆34 · Updated this week
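Several of the repositories above implement differentially private training, whose core building block is the DP-SGD step (Abadi et al., 2016): clip each per-example gradient, sum, and add Gaussian noise calibrated to the clipping bound. A minimal pure-Python sketch with synthetic gradients and illustrative names, not any listed library's API:

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD aggregation step (sketch): clip each per-example gradient
    to L2 norm <= clip_norm, sum, then add Gaussian noise whose scale is
    noise_multiplier * clip_norm (the L2 sensitivity of the clipped sum)."""
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    # Average over the batch to get the update direction.
    n = len(per_example_grads)
    return [x / n for x in noisy]

rng = random.Random(0)
# Synthetic per-example gradients for a 3-parameter model.
grads = [[rng.gauss(0.0, 2.0) for _ in range(3)] for _ in range(32)]
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
print(update)
```

Production libraries such as Opacus implement the same clip-and-noise idea with efficient per-sample gradient computation and privacy accounting, which the composition and auditing repositories above then analyze.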