☆301, updated Jan 13, 2026
Alternatives and similar repositories for lm-extraction-benchmark
Users interested in lm-extraction-benchmark are comparing it to the libraries listed below.
- Training data extraction on GPT-2 (☆197, updated Feb 4, 2023)
- (☆43, updated May 23, 2023)
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" (☆23, updated May 8, 2023)
- (☆39, updated May 19, 2023)
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models (☆87, updated Sep 12, 2024)
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) (☆28, updated Oct 31, 2022)
- Python package for measuring memorization in LLMs (☆183, updated Jul 16, 2025)
- Certified Removal from Machine Learning Models (☆69, updated Aug 23, 2021)
- (☆13, updated Oct 20, 2022)
- An awesome list of papers on privacy attacks against machine learning (☆634, updated Mar 18, 2024)
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models (☆133, updated Apr 9, 2024)
- Data for "Datamodels: Predicting Predictions with Training Data" (☆97, updated May 25, 2023)
- (☆14, updated Feb 24, 2020)
- A codebase that makes differentially private training of transformers easy (☆183, updated Dec 9, 2022)
- (☆29, updated Aug 31, 2025)
- (☆15, updated Feb 21, 2024)
- (☆78, updated May 28, 2022)
- An Empirical Study of Memorization in NLP (ACL 2022) (☆13, updated Jun 22, 2022)
- (☆33, updated Mar 13, 2025)
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" (☆58, updated Mar 20, 2024)
- Anupam Datta, Matt Fredrikson, Klas Leino, Kaiji Lu, Shayak Sen, Zifan Wang (☆18, updated Feb 23, 2021)
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] (☆90, updated Mar 24, 2023)
- Code for "Tracing Knowledge in Language Models Back to the Training Data" (☆39, updated Dec 27, 2022)
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… (☆104, updated Aug 13, 2024)
- (☆44, updated Nov 17, 2024)
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (☆10, updated Feb 27, 2020)
- (☆48, updated Feb 8, 2025)
- (☆60, updated Mar 9, 2023)
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer (☆46, updated May 30, 2024)
- (☆21, updated Sep 21, 2021)
- Privacy Meter: an open-source library to audit data privacy in statistical and machine learning algorithms (☆702, updated Apr 26, 2025)
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? (☆29, updated Mar 15, 2022)
- (☆1,258, updated Jul 30, 2024)
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) (☆61, updated Mar 13, 2023)
- (☆24, updated Aug 18, 2023)
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) (☆25, updated Oct 17, 2022)
- A fast, effective data attribution method for neural networks in PyTorch (☆232, updated Nov 18, 2024)
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" (☆36, updated Jun 8, 2023)
- A survey of privacy problems in Large Language Models (LLMs); contains a summary of the corresponding paper along with relevant code (☆69, updated May 30, 2024)