shreyansh26 / Extracting-Training-Data-from-Large-Langauge-Models
A re-implementation of the "Extracting Training Data from Large Language Models" paper by Carlini et al., 2020
☆35 · Updated 2 years ago
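The paper's extraction recipe has two steps: sample a large pool of unconditional generations from GPT-2, then rank them with a membership signal (e.g. perplexity, or perplexity relative to zlib compressed length) so that likely-memorized training data floats to the top. The snippet below is a minimal sketch of that recipe using Hugging Face Transformers, not this repository's actual code; the model choice, sampling parameters, and the `perplexity` / `zlib_entropy` helpers are illustrative assumptions.

```python
# Minimal sketch of the extraction recipe from Carlini et al. (2020), not this
# repository's exact code. Model name, sampling parameters, and helper names
# are illustrative assumptions.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

def perplexity(text: str) -> float:
    # Perplexity of the text under GPT-2; unusually low values hint at memorization.
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def zlib_entropy(text: str) -> int:
    # Compressed length as a cheap proxy for the string's redundancy/entropy.
    return len(zlib.compress(text.encode("utf-8")))

# Step 1: sample unconditionally from the model (the paper uses hundreds of
# thousands of samples; 20 keeps this sketch cheap).
prompt = tokenizer(tokenizer.bos_token, return_tensors="pt").to(device)
samples = model.generate(
    **prompt,
    do_sample=True,
    top_k=40,
    max_length=256,
    num_return_sequences=20,
    pad_token_id=tokenizer.eos_token_id,
)
texts = [t for t in tokenizer.batch_decode(samples, skip_special_tokens=True)
         if len(t.split()) > 5]

# Step 2: rank candidates. A low perplexity-to-compressed-length ratio flags
# text that the model assigns high likelihood despite low redundancy, i.e.
# text that is plausibly memorized training data.
for text in sorted(texts, key=lambda t: perplexity(t) / zlib_entropy(t))[:5]:
    print(repr(text[:80]))
```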
Alternatives and similar repositories for Extracting-Training-Data-from-Large-Langauge-Models:
Users interested in Extracting-Training-Data-from-Large-Langauge-Models are comparing it to the libraries listed below
- Training data extraction on GPT-2 ☆179 · Updated 2 years ago
- The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22) ☆18 · Updated 2 years ago
- ☆31 · Updated last year
- ☆11 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆102 · Updated 2 years ago
- ☆53 · Updated 8 months ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- ☆27 · Updated 4 years ago
- Official Repository for Dataset Inference for LLMs ☆31 · Updated 6 months ago
- ☆9 · Updated 3 years ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆35 · Updated 3 weeks ago
- ☆31 · Updated last year
- ☆52 · Updated last year
- Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020) ☆140 · Updated 3 years ago
- A survey of privacy problems in Large Language Models (LLMs). Contains a summary of each paper along with relevant code ☆65 · Updated 8 months ago
- ☆9 · Updated 4 years ago
- The repository contains the code for analysing the leakage of personally identifiable information (PII) from the output of next word pred… ☆87 · Updated 6 months ago
- ☆279 · Updated this week
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆58 · Updated last year
- ☆6 · Updated 2 years ago
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆22 · Updated last year
- ☆41 · Updated last week
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆84 · Updated 8 months ago
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-… ☆39 · Updated 3 years ago
- A curated list of trustworthy Generative AI papers, updated daily ☆68 · Updated 5 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆92 · Updated this week
- ☆23 · Updated last year
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆31 · Updated 3 years ago
- Official implementation of Privacy Implications of Retrieval-Based Language Models (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago