ThuCCSLab / misalignment
[NDSS'25] The official implementation of the safety misalignment paper.
☆16 · Updated 8 months ago
Alternatives and similar repositories for misalignment
Users who are interested in misalignment are comparing it to the repositories listed below
- ☆35 · Updated 11 months ago
- ☆50 · Updated last year
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆63 · Updated 9 months ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning ☆17 · Updated last year
- ☆18 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- ☆13 · Updated last year
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆22 · Updated 6 months ago
- ☆35 · Updated 4 months ago
- Code for the NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆18 · Updated 4 months ago
- Reconstructive Neuron Pruning for Backdoor Defense (ICML 2023) ☆39 · Updated last year
- ☆30 · Updated last year
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆16 · Updated 6 months ago
- ☆27 · Updated 2 years ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- The official implementation of the USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆19 · Updated 2 years ago
- ☆20 · Updated last year
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆45 · Updated 8 months ago
- Fingerprint large language models ☆41 · Updated last year
- A toolbox for backdoor attacks. ☆22 · Updated 2 years ago
- [ICLR 2024] Towards Eliminating Hard Label Constraints in Gradient Inversion Attacks ☆13 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- ☆14 · Updated this week
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆15 · Updated 11 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆28 · Updated 10 months ago
- ☆25 · Updated last year
- ☆31 · Updated 5 months ago
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning ☆10 · Updated 10 months ago
- Code for the IEEE ICASSP 2024 paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models". Demo//124.220.228.133:11107 ☆17 · Updated last year