ShanglunFengatETHZ / PrivacyBackdoor
Privacy backdoors
Related projects:
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023]
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting"
- Official implementation of "Goldfish Loss: Mitigating Memorization in Generative LLMs"
- The official repository of the paper "On the Exploitability of Instruction Tuning".
- Package to optimize adversarial attacks against (large) language models with varied objectives.
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method.
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024.
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction".
- [arXiv 2024] Adversarial attacks on multimodal agents
- Python package for measuring memorization in LLMs.
- Does Refusal Training in LLMs Generalize to the Past Tense? [arXiv, July 2024]
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces".
- Official implementation of "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning".
- The official implementation of the preprint "Automatic and Universal Prompt Injection Attacks against Large Language Models".
- Code for reproducing the paper "Not All Language Model Features Are Linear".