☆303 · Updated Apr 8, 2026 (last week)
Alternatives and similar repositories for lm-extraction-benchmark
Users interested in lm-extraction-benchmark are comparing it to the libraries listed below.
- Training data extraction on GPT-2 · ☆195 · Updated Feb 4, 2023 (3 years ago)
- ☆42 · Updated May 23, 2023 (2 years ago)
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" · ☆23 · Updated May 8, 2023 (2 years ago)
- ☆40 · Updated May 19, 2023 (2 years ago)
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models · ☆88 · Updated Sep 12, 2024 (last year)
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) · ☆27 · Updated Oct 31, 2022 (3 years ago)
- Python package for measuring memorization in LLMs · ☆187 · Updated Jul 16, 2025 (9 months ago)
- An awesome list of papers on privacy attacks against machine learning · ☆633 · Updated Mar 18, 2024 (2 years ago)
- ☆13 · Updated Oct 20, 2022 (3 years ago)
- ☆32 · Updated Mar 13, 2025 (last year)
- Certified Removal from Machine Learning Models · ☆69 · Updated Aug 23, 2021 (4 years ago)
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models · ☆132 · Updated Apr 9, 2024 (2 years ago)
- ☆29 · Updated Aug 31, 2025 (7 months ago)
- ☆373 · Updated Apr 8, 2026 (last week)
- ☆79 · Updated May 28, 2022 (3 years ago)
- ☆48 · Updated Feb 8, 2025 (last year)
- A survey of privacy problems in Large Language Models (LLMs); contains a summary of the corresponding paper along with relevant code · ☆69 · Updated May 30, 2024 (last year)
- Official implementation of the paper "Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protecti…" · ☆58 · Updated Mar 20, 2024 (2 years ago)
- ☆44 · Updated Nov 17, 2024 (last year)
- ☆13 · Updated Feb 24, 2020 (6 years ago)
- Code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… · ☆104 · Updated Aug 13, 2024 (last year)
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) · ☆61 · Updated Mar 13, 2023 (3 years ago)
- Interesting, promising, and widely adopted tricks for SOTA performance in the machine learning community · ☆15 · Updated Apr 13, 2021 (5 years ago)
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆96 · Updated May 25, 2023 (2 years ago)
- ☆1,268 · Updated Jul 30, 2024 (last year)
- ☆15 · Updated Feb 21, 2024 (2 years ago)
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] · ☆89 · Updated Mar 24, 2023 (3 years ago)
- [ICLR 2022 official code] "Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?" · ☆29 · Updated Mar 15, 2022 (4 years ago)
- An Empirical Study of Memorization in NLP (ACL 2022) · ☆13 · Updated Jun 22, 2022 (3 years ago)
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · Updated Jun 20, 2025 (10 months ago)
- A codebase that makes differentially private training of transformers easy · ☆186 · Updated Dec 9, 2022 (3 years ago)
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" · ☆40 · Updated Jul 8, 2024 (last year)
- Code for the research paper "Part-Based Models Improve Adversarial Robustness" (ICLR 2023) · ☆21 · Updated Sep 16, 2023 (2 years ago)
- [arXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models · ☆23 · Updated Oct 22, 2024 (last year)
- Code for "Tracing Knowledge in Language Models Back to the Training Data" · ☆39 · Updated Dec 27, 2022 (3 years ago)
- Anupam Datta, Matt Fredrikson, Klas Leino, Kaiji Lu, Shayak Sen, Zifan Wang · ☆18 · Updated Feb 23, 2021 (5 years ago)
- ☆59 · Updated Mar 9, 2023 (3 years ago)
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" · ☆220 · Updated Apr 3, 2026 (2 weeks ago)
- Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms · ☆707 · Updated Apr 26, 2025 (11 months ago)