YunxinLi / Multimodal-Context-Reasoning
A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning (a minimal illustrative sketch of prefix tuning follows below).
☆16 · Updated last year
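The description above names prefix tuning as the mechanism for injecting alignment information. For orientation only, the sketch below shows the general prefix-tuning idea: trainable prefix vectors are prepended to a frozen backbone's input embeddings, so only the prefix parameters receive gradients. This is an assumption-laden illustration, not this repository's code; the class name `PrefixTuningWrapper`, `prefix_len`, and the toy Transformer backbone are all hypothetical.

```python
import torch
import torch.nn as nn

class PrefixTuningWrapper(nn.Module):
    """Prepends trainable prefix vectors to a frozen backbone's input embeddings.
    (Hypothetical sketch of generic prefix tuning, not the repo's actual code.)"""

    def __init__(self, base_model: nn.Module, hidden_dim: int, prefix_len: int = 10):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():  # freeze the backbone
            p.requires_grad = False
        # The prefix is the only trainable parameter set.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim)
        prefix = self.prefix.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        return self.base_model(torch.cat([prefix, input_embeds], dim=1))

# Hypothetical usage with a small Transformer encoder as the frozen backbone.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = PrefixTuningWrapper(backbone, hidden_dim=64, prefix_len=8)
out = model(torch.randn(2, 16, 64))  # -> (2, 8 + 16, 64)
```

In the paper's setting, such prefixes would presumably carry the multi-view semantic alignment signal; since only `self.prefix` is updated, the approach stays parameter-efficient.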
Alternatives and similar repositories for Multimodal-Context-Reasoning
Users interested in Multimodal-Context-Reasoning are comparing it to the repositories listed below.
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆80 · Updated 9 months ago
- An Easy-to-use Hallucination Detection Framework for LLMs ☆60 · Updated last year
- ☆78 · Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆33 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆147 · Updated last year
- ☆41 · Updated 8 months ago
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- ☆56 · Updated 9 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 9 months ago
- Official repository for the A-OKVQA dataset ☆96 · Updated last year
- [ACM MM 2023] Released code for the paper "Deconfounded Visual Question Generation with Causal Inference" ☆10 · Updated 11 months ago
- 😎 Curated list of awesome LMM hallucination papers, methods & resources ☆149 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆96 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆217 · Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆135 · Updated 2 years ago
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆117 · Updated last month
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 · Updated 2 years ago
- [ICLR 2023] Code repo for the paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa… ☆51 · Updated last year
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [EMNLP 2023] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆87 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆90 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆42 · Updated last month
- Official repository for Retrieval Augmented Visual Question Answering ☆232 · Updated 7 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆158 · Updated 10 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 6 months ago
- Official PyTorch implementation of "Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations" (ICLR '25) ☆79 · Updated 2 months ago
- Code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆110 · Updated last month