qizhangli / Gradient-based-Jailbreak-Attacks
Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs"
☆11 Updated 7 months ago
Alternatives and similar repositories for Gradient-based-Jailbreak-Attacks
Users who are interested in Gradient-based-Jailbreak-Attacks are comparing it to the libraries listed below.
- Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing". ☆11 Updated last year
- ☆46 Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆22 Updated 8 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆26 Updated 7 months ago
- Official codebase for Image Hijacks: Adversarial Images can Control Generative Models at Runtime ☆47 Updated last year
- ☆22 Updated last year
- ☆99 Updated last year
- Repository for the paper: Refusing Safe Prompts for Multi-modal Large Language Models ☆17 Updated 8 months ago
- Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT ☆33 Updated last year
- ☆53 Updated 2 years ago
- GitHub repo for NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆15 Updated 8 months ago
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆17 Updated 8 months ago
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆32 Updated 6 months ago
- [USENIX Security 2023] Code repository for the paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" ☆26 Updated last year
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆55 Updated last year
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆29 Updated 3 weeks ago
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models ☆35 Updated 3 months ago
- ☆44 Updated 6 months ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆17 Updated last year
- [ICML 2023] Protecting Language Generation Models via Invisible Watermarking ☆14 Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆35 Updated last year
- ☆31 Updated 2 years ago
- ☆20 Updated 6 months ago
- Code for NeurIPS 2024 paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆14 Updated last month
- [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu ☆26 Updated 10 months ago
- Code for ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆28 Updated 5 months ago
- SEAT ☆20 Updated last year
- ☆23 Updated 2 years ago
- ☆21 Updated 5 months ago