Accepted by CVPR 2025 (highlight)
☆22 · Updated Jun 8, 2025
Alternatives and similar repositories for CS-DJ
Users interested in CS-DJ are comparing it to the repositories listed below.
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆35 · Updated Oct 23, 2024
- [CVPR 2025] Official implementation for JOOD "Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy" ☆21 · Updated Jun 11, 2025
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Updated Aug 7, 2025
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage" ☆27 · Updated Dec 6, 2024
- ☆59 · Updated Jun 5, 2024
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆30 · Updated Nov 2, 2025
- The repo for the paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models. ☆13 · Updated Dec 16, 2024
- Code for Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks (TIFS2024) ☆13 · Updated Mar 29, 2024
- Prompt Generator model for Stable Diffusion Models ☆11 · Updated Jun 20, 2023
- Code for ACM MM2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models ☆31 · Updated Dec 30, 2024
- Data and code for the paper: Finding Safety Neurons in Large Language Models ☆21 · Updated Jan 29, 2026
- Code for the paper Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation (CVPR 2023). ☆34 · Updated May 26, 2023
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · Updated May 6, 2025
- ☆29 · Updated Dec 14, 2025
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Updated Jan 17, 2025
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆191 · Updated Jun 26, 2025
- [CVPR2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆27 · Updated Jun 11, 2025
- ☆28 · Updated Mar 20, 2024
- Accepted by ECCV 2024 ☆188 · Updated Oct 15, 2024
- The official repository for the guided jailbreak benchmark ☆28 · Updated Jul 28, 2025
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆25 · Updated Nov 10, 2024
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794… ☆23 · Updated Jul 26, 2024
- Hyperbolic Safety-Aware Vision-Language Models. CVPR 2025 ☆31 · Updated Apr 8, 2025
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆58 · Updated Jan 23, 2026
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆304 · Updated Jan 11, 2026
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆28 · Updated Oct 20, 2025
- Attack AlphaZero Go agents (NeurIPS 2022) ☆22 · Updated Dec 3, 2022
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆48 · Updated Nov 5, 2025
- ☆72 · Updated Mar 30, 2025
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆69 · Updated Oct 23, 2024
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆66 · Updated Aug 25, 2024
- ACL 2025 (Main) HiddenDetect: Detecting Jailbreak Attacks against Multimodal Large Language Models via Monitoring Hidden States ☆159 · Updated Jun 8, 2025
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆88 · Updated Feb 3, 2026
- [ICLR 2024 Spotlight 🔥] [Best Paper Award SoCal NLP 2023 🏆] Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal… ☆79 · Updated Jun 6, 2024
- ☆39 · Updated May 17, 2025
- Coursework for Mathematics for Machine Learning (70015) at Imperial College London ☆10 · Updated Nov 12, 2024
- Code for Boosting fast adversarial training with learnable adversarial initialization (TIP2022) ☆29 · Updated Aug 22, 2023
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Updated Dec 18, 2024
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ☆85 · Updated Jan 19, 2025