The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22)
☆28 · Oct 31, 2022 · Updated 3 years ago
Alternatives and similar repositories for LM_PersonalInfoLeak
Users interested in LM_PersonalInfoLeak are comparing it to the repositories listed below.
- ☆13 · Oct 20, 2022 · Updated 3 years ago
- ☆52 · May 2, 2021 · Updated 4 years ago
- Code for the WWW'23 paper "Sanitizing Sentence Embeddings (and Labels) for Local Differential Privacy" · ☆12 · Feb 20, 2023 · Updated 3 years ago
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis · ☆10 · Sep 23, 2021 · Updated 4 years ago
- A collection of implementations of fair ML algorithms · ☆12 · Jan 7, 2018 · Updated 8 years ago
- Training data extraction on GPT-2 · ☆197 · Feb 4, 2023 · Updated 3 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks · ☆20 · Sep 18, 2025 · Updated 5 months ago
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" · ☆25 · Feb 12, 2020 · Updated 6 years ago
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP · ☆13 · Aug 17, 2023 · Updated 2 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping · ☆10 · Feb 27, 2020 · Updated 6 years ago
- ☆17 · Nov 30, 2022 · Updated 3 years ago
- Synthesizes a new dataset from the original so it can be shared with customers and used for downstream machine learning. · ☆17 · Feb 4, 2025 · Updated last year
- ☆15 · Feb 21, 2024 · Updated 2 years ago
- ☆43 · May 23, 2023 · Updated 2 years ago
- PyTorch implementation of backdoor unlearning. · ☆21 · Jun 8, 2022 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. · ☆15 · Aug 24, 2022 · Updated 3 years ago
- ☆56 · Oct 4, 2024 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Feb 18, 2025 · Updated last year
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" · ☆20 · Aug 9, 2023 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning · ☆20 · Jan 24, 2024 · Updated 2 years ago
- ☆27 · Nov 20, 2023 · Updated 2 years ago
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… · ☆104 · Aug 13, 2024 · Updated last year
- ☆27 · Dec 15, 2022 · Updated 3 years ago
- A principled library for tuning, training and evaluating tabular data synthesis on fidelity, privacy and utility. [CCS 2025] · ☆26 · Aug 17, 2025 · Updated 6 months ago
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. · ☆29 · Nov 19, 2023 · Updated 2 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" · ☆26 · Jan 7, 2022 · Updated 4 years ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings" · ☆32 · Aug 4, 2024 · Updated last year
- Machine Learning & Security Seminar @Purdue University · ☆25 · May 9, 2023 · Updated 2 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks · ☆31 · Apr 19, 2021 · Updated 4 years ago
- Code repository for the paper [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples · ☆30 · Jul 11, 2023 · Updated 2 years ago
- ☆11 · Dec 23, 2024 · Updated last year
- Library to facilitate pruning of LLMs based on context · ☆32 · Jan 31, 2024 · Updated 2 years ago
- ☆37 · Oct 17, 2024 · Updated last year
- Official implementation of the CVPR 2022 paper "Backdoor Attacks on Self-Supervised Learning" · ☆76 · Oct 24, 2023 · Updated 2 years ago
- Model Poisoning Attack to Federated Recommendation · ☆32 · Apr 23, 2022 · Updated 3 years ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models · ☆87 · Sep 12, 2024 · Updated last year
- ☆12 · May 6, 2022 · Updated 3 years ago
- A virtual caregiver system that extracts the expression of mental and physical health states through dialogue-based human-computer intera… · ☆14 · Jan 29, 2023 · Updated 3 years ago
- ☆10 · Feb 13, 2025 · Updated last year