ThuCCSLab / MergeGuard
[CCS-LAMPS'24] LLM IP Protection Against Model Merging
☆14 · Updated 7 months ago
Alternatives and similar repositories for MergeGuard
Users interested in MergeGuard are comparing it to the libraries listed below.
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 · Updated 5 months ago
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆53 · Updated last year
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆18 · Updated 2 years ago
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆17 · Updated 9 months ago
- ☆20 · Updated 5 months ago
- ☆12 · Updated last year
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆9 · Updated 6 months ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 10 months ago
- ☆18 · Updated last month
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- ☆1 · Updated 11 months ago
- Code and data for paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?". ☆14 · Updated 2 months ago
- Code and data for paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted by ICLR 2024. ☆30 · Updated 6 months ago
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆39 · Updated 6 months ago
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆29 · Updated 4 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆36 · Updated last year
- ☆12 · Updated 3 years ago
- "In-Context Unlearning: Language Models as Few Shot Unlearners". Martin Pawelczyk, Seth Neel* and Himabindu Lakkaraju*; ICML 2024. ☆26 · Updated last year
- ☆21 · Updated 2 months ago
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆12 · Updated last week
- ☆18 · Updated 7 months ago
- Official repository for "Safety Challenges in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large… ☆27 · Updated last week
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆15 · Updated 7 months ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 2 months ago
- This is the official code for the paper "Safety Tax: Safety Alignment Makes Your Large Reasoning Models Less Reasonable". ☆16 · Updated 2 months ago
- ☆20 · Updated last year
- ☆40 · Updated 11 months ago
- ☆13 · Updated last month
- ☆41 · Updated last month
- [ICLR24] Official Repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆34 · Updated 9 months ago