mireshghallah / ft-memorization
☆12 · Updated 2 years ago
Alternatives and similar repositories for ft-memorization:
Users interested in ft-memorization are comparing it to the repositories listed below.
- ☆32 · Updated last year
- ☆39 · Updated last year
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 7 months ago
- Official repository for Dataset Inference for LLMs ☆27 · Updated 5 months ago
- ☆18 · Updated 3 years ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con…" ☆36 · Updated last year
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- ☆21 · Updated last year
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…" ☆23 · Updated last year
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆32 · Updated 3 months ago
- ☆9 · Updated 4 years ago
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆35 · Updated 7 months ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆57 · Updated last year
- ☆23 · Updated last year
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) ☆19 · Updated 2 years ago
- ☆41 · Updated last year
- ☆5 · Updated 7 months ago
- ☆31 · Updated last year
- ☆51 · Updated last year
- Official implementation of SKU, accepted to ACL 2024 Findings ☆13 · Updated last month
- ☆30 · Updated last month
- [ICLR 2024] Provable Robust Watermarking for AI-Generated Text ☆28 · Updated last year
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆12 · Updated last year
- ☆13 · Updated 2 years ago
- ☆52 · Updated 7 months ago
- Data Valuation on In-Context Examples (ACL 2023) ☆23 · Updated last week
- Min-K%++: Improved baseline for detecting pre-training data of LLMs. https://arxiv.org/abs/2404.02936 ☆30 · Updated 7 months ago
- Code for the paper "On Diversified Preferences of Large Language Model Alignment" ☆15 · Updated 5 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 2 months ago
- ☆21 · Updated last year