jeffhj / LM_PersonalInfoLeak
The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22)
☆28 · Oct 31, 2022 · Updated 3 years ago
Alternatives and similar repositories for LM_PersonalInfoLeak
Users interested in LM_PersonalInfoLeak are comparing it to the repositories listed below.
- ☆13 · Oct 20, 2022 · Updated 3 years ago
- ☆52 · May 2, 2021 · Updated 4 years ago
- ☆22 · Sep 17, 2024 · Updated last year
- ☆39 · May 19, 2023 · Updated 2 years ago
- [Preprint] Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis ☆10 · Sep 23, 2021 · Updated 4 years ago
- Code for the WWW'23 paper "Sanitizing Sentence Embeddings (and Labels) for Local Differential Privacy" ☆12 · Feb 20, 2023 · Updated 2 years ago
- Training data extraction on GPT-2 ☆197 · Feb 4, 2023 · Updated 3 years ago
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" ☆25 · Feb 12, 2020 · Updated 6 years ago
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Aug 17, 2023 · Updated 2 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 5 years ago
- ☆17 · Nov 30, 2022 · Updated 3 years ago
- A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020 ☆38 · Jul 10, 2022 · Updated 3 years ago
- Synthesizes a new dataset based on the original to facilitate customer data sharing for downstream machine learning. ☆17 · Feb 4, 2025 · Updated last year
- ☆44 · Nov 17, 2024 · Updated last year
- ☆15 · Feb 21, 2024 · Updated last year
- PyTorch implementation of backdoor unlearning. ☆21 · Jun 8, 2022 · Updated 3 years ago
- Camouflage poisoning via machine unlearning ☆19 · Jul 3, 2025 · Updated 7 months ago
- ☆55 · Oct 4, 2024 · Updated last year
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆19 · Feb 18, 2025 · Updated 11 months ago
- ☆45 · Nov 10, 2019 · Updated 6 years ago
- TFLlib: Trustworthy Federated Learning Library and Benchmark ☆62 · Nov 15, 2025 · Updated 3 months ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Jan 24, 2024 · Updated 2 years ago
- ☆25 · Aug 18, 2023 · Updated 2 years ago
- ☆21 · Mar 17, 2025 · Updated 10 months ago
- Code for analyzing the leakage of personally identifiable information (PII) from the output of next word pred… ☆103 · Aug 13, 2024 · Updated last year
- ☆27 · Nov 20, 2023 · Updated 2 years ago
- ☆27 · Dec 15, 2022 · Updated 3 years ago
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" ☆26 · Jan 7, 2022 · Updated 4 years ago
- Source code for MEA-Defender, accepted at the IEEE Symposium on Security and Privacy (S&P) 2024. ☆29 · Nov 19, 2023 · Updated 2 years ago
- Code for "Adversarial Illusions in Multi-Modal Embeddings"☆31Aug 4, 2024Updated last year
- TextHide: Tackling Data Privacy in Language Understanding Tasks☆31Apr 19, 2021Updated 4 years ago
- Machine Learning & Security Seminar @Purdue University☆25May 9, 2023Updated 2 years ago
- ☆12Dec 13, 2022Updated 3 years ago
- ☆11Dec 23, 2024Updated last year
- Code repository for the paper --- [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples☆30Jul 11, 2023Updated 2 years ago
- Library to facilitate pruning of LLMs based on context☆32Jan 31, 2024Updated 2 years ago
- ☆37Oct 17, 2024Updated last year
- ☆82Mar 26, 2024Updated last year
- Official implementation of the CVPR 2022 paper "Backdoor Attacks on Self-Supervised Learning".☆76Oct 24, 2023Updated 2 years ago