This repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next-word-prediction language models.
☆104 · Updated Aug 13, 2024
Alternatives and similar repositories for analysing_pii_leakage
Users who are interested in analysing_pii_leakage are comparing it to the repositories listed below.
- ☆13 · Updated Oct 20, 2022
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" (☆23 · Updated May 8, 2023)
- A Synthetic Dataset for Personal Attribute Inference (NeurIPS'24 D&B) (☆53 · Updated Jul 27, 2025)
- Code for Findings of ACL 2021 "Differential Privacy for Text Analytics via Natural Text Sanitization" (☆32 · Updated Mar 15, 2022)
- Code for Findings-EMNLP 2023 paper: "Multi-step Jailbreaking Privacy Attacks on ChatGPT" (☆37 · Updated Oct 15, 2023)
- ☆40 · Updated May 19, 2023
- ☆12 · Updated Jan 5, 2023
- Differentially-private transformers using HuggingFace and Opacus (☆147 · Updated Aug 28, 2024)
- A codebase that makes differentially private training of transformers easy. (☆185 · Updated Dec 9, 2022)
- ☆28 · Updated Nov 28, 2023
- A toolkit to assess data privacy in LLMs (under development) (☆71 · Updated Jan 2, 2025)
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. (☆76 · Updated Feb 15, 2024)
- ☆73 · Updated Feb 16, 2025
- ☆78 · Updated May 28, 2022
- Training data extraction on GPT-2 (☆195 · Updated Feb 4, 2023)
- FedBERT: A federated approach that enables clients with limited computing resources to participate without violating data privacy. (☆14 · Updated Jul 3, 2023)
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) (☆27 · Updated Oct 31, 2022)
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models (☆88 · Updated Sep 12, 2024)
- ☆48 · Updated Feb 8, 2025
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" (☆12 · Updated Nov 7, 2024)
- ☆59 · Updated May 30, 2024
- Python package for measuring memorization in LLMs. (☆185 · Updated Jul 16, 2025)
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer (☆47 · Updated May 30, 2024)
- ☆303 · Updated Mar 26, 2026
- ☆24 · Updated Aug 18, 2023
- Benchmarking MIAs against LLMs. (☆28 · Updated Oct 8, 2024)
- Papers and resources related to the security and privacy of LLMs 🤖 (☆568 · Updated Jun 8, 2025)
- Code for "CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples" (NDSS 2020) (☆22 · Updated Nov 14, 2020)
- ☆15 · Updated Apr 27, 2024
- Federated learning with text DNNs for DATA 591 at University of Washington. (☆17 · Updated Mar 25, 2023)
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks (☆20 · Updated Sep 18, 2025)
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents (☆71 · Updated Nov 14, 2025)
- ☆53 · Updated May 2, 2021
- Code for Auditing DPSGD (☆39 · Updated Feb 15, 2022)
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) (☆34 · Updated Jun 29, 2025)
- ☆42 · Updated May 23, 2023
- Code for paper: "Spinning Language Models: Risks of Propaganda-as-a-Service and Countermeasures" (☆21 · Updated Jun 6, 2022)
- Source code of NAACL 2025 Findings "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" (☆15 · Updated Dec 16, 2025)
- ☆20 · Updated Oct 28, 2025