google-research / lm-extraction-benchmark
☆300 · Jan 13, 2026 · Updated last month
Alternatives and similar repositories for lm-extraction-benchmark
Users interested in lm-extraction-benchmark are comparing it to the libraries listed below:
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · May 8, 2023 · Updated 2 years ago
- ☆39 · May 19, 2023 · Updated 2 years ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆86 · Sep 12, 2024 · Updated last year
- Certified Removal from Machine Learning Models ☆69 · Aug 23, 2021 · Updated 4 years ago
- ☆13 · Oct 20, 2022 · Updated 3 years ago
- An awesome list of papers on privacy attacks against machine learning ☆634 · Mar 18, 2024 · Updated last year
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆133 · Apr 9, 2024 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · May 25, 2023 · Updated 2 years ago
- ☆370 · Jan 4, 2026 · Updated last month
- ☆14 · Feb 24, 2020 · Updated 5 years ago
- A codebase that makes differentially private training of transformers easy. ☆183 · Dec 9, 2022 · Updated 3 years ago
- ☆28 · Aug 31, 2025 · Updated 5 months ago
- ☆15 · Feb 21, 2024 · Updated last year
- ☆78 · May 28, 2022 · Updated 3 years ago
- An Empirical Study of Memorization in NLP (ACL 2022) ☆13 · Jun 22, 2022 · Updated 3 years ago
- ☆33 · Mar 13, 2025 · Updated 11 months ago
- Anupam Datta, Matt Fredrikson, Klas Leino, Kaiji Lu, Shayak Sen, Zifan Wang ☆18 · Feb 23, 2021 · Updated 4 years ago
- A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆90 · Mar 24, 2023 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Dec 27, 2022 · Updated 3 years ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆103 · Aug 13, 2024 · Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 5 years ago
- ☆48 · Feb 8, 2025 · Updated last year
- ☆60 · Mar 9, 2023 · Updated 2 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · May 30, 2024 · Updated last year
- ☆21 · Sep 21, 2021 · Updated 4 years ago
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms. ☆698 · Apr 26, 2025 · Updated 9 months ago
- ☆1,257 · Jul 30, 2024 · Updated last year
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Mar 15, 2022 · Updated 3 years ago
- Official repo for the paper: Recovering Private Text in Federated Learning of Language Models (in NeurIPS 2022) ☆61 · Mar 13, 2023 · Updated 2 years ago
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Oct 17, 2022 · Updated 3 years ago
- ☆25 · Aug 18, 2023 · Updated 2 years ago
- A fast, effective data attribution method for neural networks in PyTorch ☆229 · Nov 18, 2024 · Updated last year
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… ☆36 · Jun 8, 2023 · Updated 2 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of the corresponding paper along with relevant code ☆69 · May 30, 2024 · Updated last year
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆39 · Aug 20, 2025 · Updated 5 months ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" NeurIPS'19 ☆33 · May 18, 2021 · Updated 4 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆40 · Jul 8, 2024 · Updated last year
- ☆37 · Mar 16, 2022 · Updated 3 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆215 · May 30, 2025 · Updated 8 months ago