sinwang20 / SIUO
[NAACL 2025] SIUO: Cross-Modality Safety Alignment
☆23 · Updated last month
Alternatives and similar repositories for SIUO:
Users interested in SIUO are comparing it to the repositories listed below.
- ECSO (Make MLLMs safe without either training or any external models!) (https://arxiv.org/abs/2403.09572) ☆22 · Updated 4 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆57 · Updated last month
- The reinforcement learning code for the SPA-VL dataset ☆31 · Updated 8 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆77 · Updated last year
- [ICLR 2024 Spotlight 🔥] [Best Paper Award, SoCal NLP 2023 🏆] Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models ☆41 · Updated 9 months ago
- Accepted by ECCV 2024 ☆109 · Updated 4 months ago
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting" ☆52 · Updated 8 months ago
- VHTest ☆13 · Updated 4 months ago
- An up-to-date curated list of state-of-the-art large vision-language model hallucination research: papers & resources ☆101 · Updated 2 weeks ago
- [ACL 2024] Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models. Detect and mitigate object hallucinations… ☆20 · Updated last month
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models" ☆24 · Updated 4 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ☆81 · Updated 3 months ago
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ☆26 · Updated 4 months ago
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆44 · Updated 7 months ago
- The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate" ☆96 · Updated 3 months ago
- The official repository for the paper "MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance" ☆34 · Updated 10 months ago
- The official repo for the EMNLP 2024 (main) paper "EFUF: Efficient Fine-grained Unlearning Framework for Mitigating Hallucinations in Multimodal Large Language Models" ☆19 · Updated 5 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆46 · Updated 2 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆81 · Updated 3 months ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated 11 months ago