microsoft / analysing_pii_leakage
The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next-word prediction language models.
☆103 · Updated Aug 13, 2024
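As a rough illustration of what such a leakage analysis can involve (a minimal sketch, not this repository's actual pipeline; the model name, prompt, and regex patterns below are illustrative assumptions), one can sample generations from a causal language model and scan them for PII-like strings:

```python
# Minimal sketch: sample text from a causal LM and flag PII-like strings.
# Not the repository's actual method; model, prompt, and patterns are
# illustrative assumptions.
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any HuggingFace causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Sample several continuations of a prompt that might elicit PII.
prompt = "Contact details:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    num_return_sequences=5,
    pad_token_id=tokenizer.eos_token_id,
)
texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)

# Toy PII patterns; a real analysis would use an NER-based PII tagger.
patterns = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "phone": r"\+?\d[\d\s().-]{7,}\d",
}
for text in texts:
    for label, pattern in patterns.items():
        for match in re.findall(pattern, text):
            print(f"{label}: {match!r} in generation: {text[:60]!r}")
```

A real study would additionally compare the flagged strings against the model's training data to distinguish genuine memorized leakage from hallucinated PII-shaped text.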
Alternatives and similar repositories for analysing_pii_leakage
Users interested in analysing_pii_leakage are comparing it to the repositories listed below.
- ☆13 · Updated Oct 20, 2022
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · Updated May 8, 2023
- A Synthetic Dataset for Personal Attribute Inference (NeurIPS'24 D&B) ☆50 · Updated Jul 27, 2025
- Code for Findings of ACL 2021 "Differential Privacy for Text Analytics via Natural Text Sanitization" ☆32 · Updated Mar 15, 2022
- ☆39 · Updated May 19, 2023
- Differentially-private transformers using HuggingFace and Opacus ☆146 · Updated Aug 28, 2024
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… ☆50 · Updated Dec 20, 2023
- ☆25 · Updated Apr 15, 2024
- ☆12 · Updated Jan 5, 2023
- A codebase that makes differentially private training of transformers easy. ☆183 · Updated Dec 9, 2022
- ☆78 · Updated May 28, 2022
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆86 · Updated Sep 12, 2024
- ☆70 · Updated Feb 16, 2025
- ☆48 · Updated Feb 8, 2025
- ☆28 · Updated Nov 28, 2023
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆28 · Updated Oct 31, 2022
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆76 · Updated Feb 15, 2024
- Training data extraction on GPT-2 ☆197 · Updated Feb 4, 2023
- Python package for measuring memorization in LLMs. ☆184 · Updated Jul 16, 2025
- ☆58 · Updated May 30, 2024
- ☆60 · Updated Mar 9, 2023
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated Jun 29, 2025
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) ☆22 · Updated Nov 14, 2020
- ☆10 · Updated Oct 2, 2024
- A toolkit to assess data privacy in LLMs (under development) ☆67 · Updated Jan 2, 2025
- ☆300 · Updated Jan 13, 2026
- Official repo to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023 ☆96 · Updated Sep 17, 2025
- Code for Auditing DPSGD ☆37 · Updated Feb 15, 2022
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated Oct 3, 2023
- ☆25 · Updated Aug 18, 2023
- Papers and resources related to the security and privacy of LLMs 🤖 ☆561 · Updated Jun 8, 2025
- ☆43 · Updated May 23, 2023
- The goal of this project was to develop a chatbot-based data collection tool. It asks users questions through a validated alignment surv… ☆13 · Updated Feb 6, 2026
- ☆27 · Updated Nov 20, 2023
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated May 30, 2024
- ☆28 · Updated Aug 31, 2025
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆65 · Updated Nov 14, 2025
- CCS 2023 | Explainable malware and vulnerability detection with XAI in paper "FINER: Enhancing State-of-the-art Classifiers with Feature … ☆11 · Updated Aug 20, 2024
- Code for Findings-ACL 2023 paper: Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Rec… ☆47 · Updated Jun 3, 2024