YunxinLi / Multimodal-Context-Reasoning
A multimodal context reasoning approach that introduces multi-view semantic alignment information via prefix tuning (a general sketch of prefix tuning is given below).
☆16Updated 2 years ago
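The description above mentions prefix tuning. As a rough, hypothetical illustration of that general technique (not this repository's actual code; all class, parameter, and dimension names here are assumptions), a minimal PyTorch sketch might look like this:

```python
# Hypothetical sketch of prefix tuning: learnable prefix embeddings are
# prepended to a frozen backbone's token embeddings, so only the prefix
# parameters are trained. Names and dimensions are illustrative only and
# are not taken from the Multimodal-Context-Reasoning repository.
import torch
import torch.nn as nn


class PrefixTuningWrapper(nn.Module):
    def __init__(self, hidden_dim: int = 768, prefix_len: int = 10):
        super().__init__()
        # Trainable prefix vectors, one row per prefix position.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim) from a frozen backbone.
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate the learned prefix in front of the ordinary tokens.
        return torch.cat([prefix, token_embeds], dim=1)


if __name__ == "__main__":
    wrapper = PrefixTuningWrapper(hidden_dim=768, prefix_len=10)
    x = torch.randn(2, 32, 768)   # dummy batch of token embeddings
    out = wrapper(x)
    print(out.shape)              # torch.Size([2, 42, 768])
```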
Alternatives and similar repositories for Multimodal-Context-Reasoning
Users interested in Multimodal-Context-Reasoning are comparing it to the repositories listed below.
- EMNLP 2023 - InfoSeek: A New VQA Benchmark focused on Visual Info-Seeking Questions☆25Updated last year
- Official repo for "AlignGPT: Multi-modal Large Language Models with Adaptive Alignment Capability"☆34Updated last year
- This repository contains code to evaluate various multimodal large language models using different instructions across multiple multimoda…☆31Updated 9 months ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models☆156Updated last year
- Code and model for AAAI 2024: UMIE: Unified Multimodal Information Extraction with Instruction Tuning☆45Updated last year
- Official repository for the A-OKVQA dataset☆106Updated last year
- ☆87Updated last year
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models.☆84Updated last year
- MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning☆134Updated 2 years ago
- ☆42Updated last year
- An Easy-to-use Hallucination Detection Framework for LLMs.☆63Updated last year
- Recent Advances in Visual Dialog☆30Updated 3 years ago
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…☆53Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024)☆58Updated last year
- ChatBridge, an approach to learning a unified multimodal model to interpret, correlate, and reason about various modalities without rely…☆54Updated 2 years ago
- Official implementation of Leveraging Visual Tokens for Extended Text Contexts in Multi-Modal Learning☆28Updated last year
- ☆60Updated last year
- [ACM MM 2023] The released code of paper "Deconfounded Visual Question Generation with Causal Inference"☆11Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models"☆132Updated 2 years ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs)☆46Updated 2 years ago
- ☆27Updated last year
- Official Code of IdealGPT☆35Updated 2 years ago
- Code for ACL 2023 paper: Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling.☆20Updated last year
- Code for our EMNLP-2022 paper: "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA"☆40Updated 3 years ago
- Official code for paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024)☆175Updated last year
- Research code for "KAT: A Knowledge Augmented Transformer for Vision-and-Language"☆69Updated 3 years ago
- my commonly-used tools☆63Updated last year
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating☆98Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning☆91Updated last year
- [EMNLP 2023 Findings] MoqaGPT: zero-shot multimodal question answering with LLMs☆13Updated last year