YunxinLi / Multimodal-Context-Reasoning
A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning.
☆16 · Updated 2 years ago
Alternatives and similar repositories for Multimodal-Context-Reasoning
Users interested in Multimodal-Context-Reasoning are comparing it to the repositories listed below.
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability" ☆34 · Updated last year
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆61 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆83 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆56 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation ☆129 · Updated last week
- ☆85 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆233 · Updated 3 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆145 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆169 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated 2 years ago
- Official implementation of "Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning" ☆26 · Updated last year
- [EMNLP 2023] InfoSeek: A New VQA Benchmark focusing on Visual Info-Seeking Questions ☆25 · Updated last year
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆31 · Updated 2 weeks ago
- ☆67 · Updated 2 years ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning ☆134 · Updated 2 years ago
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆54 · Updated 2 years ago
- ☆60 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆49 · Updated last year
- Official repository for the A-OKVQA dataset ☆104 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆132 · Updated 2 years ago
- 😎 A curated list of awesome LMM hallucination papers, methods & resources ☆150 · Updated last year
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models" ☆92 · Updated last year
- ☆42 · Updated last year
- ☆15 · Updated 7 months ago
- A multimodal retrieval dataset ☆24 · Updated 2 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆98 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- Code and model for AAAI 2024: "UMIE: Unified Multimodal Information Extraction with Instruction Tuning" ☆45 · Updated last year