YunxinLi / Multimodal-Context-Reasoning
A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning.
☆16 · Updated 2 years ago
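The description above mentions prefix tuning. As a rough, hypothetical sketch (not this repository's actual code, and all names below are illustrative): prefix tuning prepends a small set of trainable vectors to the frozen model's input sequence, so only those prefix parameters are updated during adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prefix_len, seq_len = 8, 4, 6

# Frozen input embeddings produced by the base model's tokenizer/embedding layer.
token_embeddings = rng.normal(size=(seq_len, d_model))

# Trainable prefix vectors: the only parameters optimized during tuning.
prefix = rng.normal(size=(prefix_len, d_model)) * 0.02

# The prefix is concatenated in front of the token embeddings before the
# (frozen) transformer consumes the extended sequence.
extended_input = np.concatenate([prefix, token_embeddings], axis=0)

print(extended_input.shape)  # (10, 8)
```

In the actual approach, such prefixes would presumably carry the multi-view semantic alignment signal into the language model without updating its weights.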
Alternatives and similar repositories for Multimodal-Context-Reasoning
Users interested in Multimodal-Context-Reasoning are comparing it to the repositories listed below.
- ☆84 · Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆83 · Updated 11 months ago
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning. ☆41 · Updated last year
- ☆58 · Updated last year
- An easy-to-use hallucination detection framework for LLMs. ☆61 · Updated last year
- ☆25 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs). ☆45 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models. ☆149 · Updated last year
- [ICLR 2025] ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation. ☆124 · Updated 4 months ago
- [EMNLP 2023] InfoSeek: A new VQA benchmark focused on visual info-seeking questions. ☆25 · Updated last year
- An LLM-free multi-dimensional benchmark for multimodal hallucination evaluation. ☆140 · Updated last year
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024). ☆166 · Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability". ☆33 · Updated last year
- Recent advancements propelled by large language models (LLMs), encompassing an array of domains including Vision, Audio, Agent, Robotics, … ☆123 · Updated 5 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts. ☆37 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models". ☆131 · Updated 2 years ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024). ☆54 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models". ☆226 · Updated 2 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models". ☆94 · Updated 2 months ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating. ☆98 · Updated last year
- ☆66 · Updated 2 years ago
- Official repository for the A-OKVQA dataset. ☆102 · Updated last year
- Adapt MLLMs to Domains via Post-Training (EMNLP 2025 Findings). ☆11 · Updated 2 months ago
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning. ☆134 · Updated 2 years ago
- ChatBridge: an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely… ☆53 · Updated 2 years ago
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024). ☆31 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models". ☆48 · Updated last year
- ☆41 · Updated 11 months ago
- [ACM MM 2023] Released code for the paper "Deconfounded Visual Question Generation with Causal Inference". ☆10 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379). ☆44 · Updated 4 months ago