google-research / lm-extraction-benchmark
☆295 · Updated last month
Alternatives and similar repositories for lm-extraction-benchmark
Users who are interested in lm-extraction-benchmark are comparing it to the repositories listed below
- Training data extraction on GPT-2 ☆191 · Updated 2 years ago
- A codebase that makes differentially private training of transformers easy. ☆176 · Updated 2 years ago
- Python package for measuring memorization in LLMs. ☆166 · Updated 2 months ago
- Repository for research in the field of Responsible NLP at Meta. ☆202 · Updated 4 months ago
- Differentially-private transformers using HuggingFace and Opacus ☆142 · Updated last year
- Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆108 · Updated 2 years ago
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆330 · Updated last year
- Source code of NAACL 2025 Findings "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" ☆13 · Updated 7 months ago
- Official Repository for Dataset Inference for LLMs ☆41 · Updated last year
- ☆13 · Updated 2 years ago
- A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020 ☆37 · Updated 3 years ago
- Code for watermarking language models ☆82 · Updated last year
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆89 · Updated last year
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆24 · Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆111 · Updated 6 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆302 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆82 · Updated last year
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense… ☆174 · Updated last year
- A fast, effective data attribution method for neural networks in PyTorch ☆217 · Updated 10 months ago
- ☆140 · Updated 3 years ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆100 · Updated last year
- Aligning AI With Shared Human Values (ICLR 2021) ☆297 · Updated 2 years ago
- Dataset associated with "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" paper ☆80 · Updated 4 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of the corresponding paper along with relevant code ☆67 · Updated last year
- ☆56 · Updated last year
- ☆21 · Updated 4 years ago
- ☆39 · Updated 2 years ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆37 · Updated last year
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆146 · Updated last month
- The one-stop repository for large language model (LLM) unlearning. Supports TOFU, MUSE, WMDP, and many unlearning methods. All features: … ☆368 · Updated 2 months ago