ShenzheZhu / JailDAM
[COLM 2025] JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model
☆14 Updated last week
Alternatives and similar repositories for JailDAM
Users interested in JailDAM are comparing it to the libraries listed below.
- [CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt… ☆44 Updated 6 months ago
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 ☆36 Updated last year
- ☆26 Updated 3 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆81 Updated last year
- [ICLR 2025] Official codebase for the paper "Multimodal Situational Safety" ☆20 Updated 3 weeks ago
- [ICLR 2025] Code & Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 Updated last year
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ☆74 Updated 6 months ago
- The official repository for the paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" ☆37 Updated last year
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" ☆14 Updated 3 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆75 Updated 5 months ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆135 Updated last month
- ECSO (Make MLLM safe with neither training nor any external models!) (https://arxiv.org/abs/2403.09572) ☆28 Updated 8 months ago
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection ☆78 Updated last month
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆31 Updated 8 months ago
- ☆57 Updated 8 months ago
- ☆45 Updated last month
- ☆11 Updated this week
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆51 Updated 6 months ago
- ☆27 Updated last year
- AutoHallusion Codebase (EMNLP 2024) ☆19 Updated 7 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆45 Updated last year
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆61 Updated last year
- VHTest ☆13 Updated 8 months ago
- A Task of Fictitious Unlearning for VLMs ☆19 Updated 3 months ago
- ☆17 Updated 7 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆55 Updated last year
- [ICML2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆23 Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆42 Updated 8 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 Updated last year
- ☆18 Updated last year