parameterlab / mia-scaling
Source code for the NAACL 2025 Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models"
☆13 · Updated 7 months ago
Alternatives and similar repositories for mia-scaling
Users interested in mia-scaling are comparing it to the repositories listed below.
- Official Repository for Dataset Inference for LLMs ☆41 · Updated last year
- ☆13 · Updated 2 years ago
- Code for the WWW'23 paper "Sanitizing Sentence Embeddings (and Labels) for Local Differential Privacy" ☆12 · Updated 2 years ago
- Benchmarking membership inference attacks (MIAs) against LLMs ☆22 · Updated 11 months ago
- Training data extraction on GPT-2 ☆191 · Updated 2 years ago
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) ☆24 · Updated 2 years ago
- Repo for the arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆108 · Updated 2 years ago
- ☆43 · Updated 2 years ago
- Python package for measuring memorization in LLMs ☆166 · Updated 2 months ago
- Official repo for the NeurIPS 2022 paper "Recovering Private Text in Federated Learning of Language Models" ☆59 · Updated 2 years ago
- ☆75 · Updated 3 years ago
- Code for watermarking language models ☆82 · Updated last year
- ☆45 · Updated 7 months ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆89 · Updated last year
- ☆21 · Updated 4 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆30 · Updated 3 years ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆86 · Updated last year
- ☆37 · Updated 9 months ago
- ☆27 · Updated last year
- A Synthetic Dataset for Personal Attribute Inference (NeurIPS '24 D&B) ☆44 · Updated last month
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆82 · Updated last year
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- ☆56 · Updated last year
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer ☆46 · Updated last year
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con…" ☆44 · Updated last year
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆59 · Updated last year
- ☆295 · Updated last month
- ☆22 · Updated 2 years ago
- A codebase that makes differentially private training of transformers easy ☆176 · Updated 2 years ago
- Differentially-private transformers using HuggingFace and Opacus ☆142 · Updated last year