shikiw / Modality-Integration-Rate
The official code of the paper "Deciphering Cross-Modal Alignment in Large Vision-Language Models with Modality Integration Rate".
★78 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for Modality-Integration-Rate
- HallE-Control: Controlling Object Hallucination in LMMs ★28 · Updated 7 months ago
- Up-to-date & curated list of awesome LMM hallucination papers, methods & resources. ★144 · Updated 7 months ago
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ★286 · Updated 2 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ★45 · Updated 2 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ★55 · Updated 2 months ago
- [ICML 2024] Official implementation for "HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding" ★70 · Updated 5 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ★64 · Updated this week
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ★45 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ★147 · Updated last month
- ★42 · Updated last month
- ★25 · Updated last month
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ★48 · Updated last month
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ★67 · Updated 11 months ago
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ★207 · Updated last month
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ★42 · Updated 4 months ago
- Official repository of the MMDU dataset ★74 · Updated last month
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ★31 · Updated 2 weeks ago
- List of T2I safety papers, updated daily; discussion welcome via Discussions ★42 · Updated 3 months ago
- Accepted by ECCV 2024 ★73 · Updated 3 weeks ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ★98 · Updated this week
- Official implementation of MIA-DPO ★32 · Updated last week
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ★72 · Updated 7 months ago
- Making LLaVA Tiny via MoE-Knowledge Distillation ★55 · Updated 2 weeks ago
- An RLHF Infrastructure for Vision-Language Models ★98 · Updated 5 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ★242 · Updated last week
- The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models? ★18 · Updated last week
- A package that achieves a 95%+ transfer attack success rate against GPT-4 ★12 · Updated 2 weeks ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ★65 · Updated 9 months ago
- ★31 · Updated 9 months ago
- This is the official repo for Debiasing Large Visual Language Models, including a Post-Hoc debias method and Visual Debias Decoding strat… ★71 · Updated 7 months ago