computationalprivacy / mia_llms_benchmark
Benchmarking MIAs against LLMs.
☆19 · Updated 9 months ago
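For context, many of the membership inference attacks (MIAs) this kind of benchmark evaluates reduce to thresholding a per-sample score such as the target model's loss. A minimal illustrative sketch of that idea (the function name, losses, and threshold below are hypothetical, not taken from this repository):

```python
def loss_threshold_mia(sample_losses, threshold):
    """Flag each sample as a likely training member if its loss falls
    below the threshold: models typically fit member samples better,
    so members tend to incur lower loss than non-members."""
    return [loss < threshold for loss in sample_losses]

# Hypothetical per-sample cross-entropy losses from a target LLM.
losses = [0.9, 3.2, 1.1, 4.0]
members = loss_threshold_mia(losses, threshold=2.0)
print(members)  # [True, False, True, False]
```

Benchmarks in this space typically compare such score functions (loss, perplexity, calibrated variants) by the attack's true-positive rate at a fixed false-positive rate rather than raw accuracy.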
Alternatives and similar repositories for mia_llms_benchmark
Users interested in mia_llms_benchmark are comparing it to the libraries listed below.
- Source code of NAACL 2025 Findings "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" ☆11 · Updated 5 months ago
- A toolkit to assess data privacy in LLMs (under development) ☆59 · Updated 6 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆82 · Updated 10 months ago
- A codebase that makes differentially private training of transformers easy. ☆175 · Updated 2 years ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · Updated last year
- ☆55 · Updated 2 years ago
- Official repository for Dataset Inference for LLMs ☆35 · Updated 11 months ago
- ☆38 · Updated last year
- Python package for measuring memorization in LLMs. ☆160 · Updated this week
- ☆74 · Updated 3 years ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆57 · Updated 2 years ago
- Training data extraction on GPT-2 ☆188 · Updated 2 years ago
- ☆35 · Updated 6 months ago
- ☆13 · Updated 2 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 3 years ago
- ☆44 · Updated 5 months ago
- Code for watermarking language models ☆79 · Updated 10 months ago
- Differentially private transformers using HuggingFace and Opacus ☆140 · Updated 10 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆101 · Updated 4 months ago
- ☆18 · Updated 3 years ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages (https://arxiv.org/abs/2310.19156) ☆34 · Updated last year
- Code for the WWW '23 paper "Sanitizing Sentence Embeddings (and Labels) for Local Differential Privacy" ☆12 · Updated 2 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 3 years ago
- ☆22 · Updated 4 months ago
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆71 · Updated 9 months ago
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆25 · Updated last month
- Starter kit and data loading code for the Trojan Detection Challenge NeurIPS 2022 competition ☆33 · Updated last year
- ☆70 · Updated 3 years ago
- Official repository for "PostMark: A Robust Blackbox Watermark for Large Language Models" ☆27 · Updated 10 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆74 · Updated 4 months ago