GraphPKU / CoI
Chain of Images for Intuitively Reasoning
☆10 · Updated last year
Alternatives and similar repositories for CoI
Users interested in CoI are comparing it to the repositories listed below.
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- Code for the paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models" ☆48 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆88 · Updated last year
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ☆59 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆80 · Updated last week
- ☆24 · Updated 4 months ago
- ☆100 · Updated last year
- Code for "ReFocus: Visual Editing as a Chain of Thought for Structured Image Understanding" [ICML 2025] ☆40 · Updated 3 months ago
- Counterfactual Reasoning VQA Dataset ☆25 · Updated last year
- The codebase for our EMNLP 2024 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆83 · Updated 9 months ago
- Official code of IdealGPT ☆35 · Updated 2 years ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆37 · Updated 5 months ago
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆47 · Updated last month
- ☆45 · Updated 10 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated last year
- ☆55 · Updated last year
- Official code for the ACL 2023 Outstanding Paper "World-to-Words: Grounded Open Vocabulary Acquisition through Fast Mapping in Vision-Languag… ☆33 · Updated 2 years ago
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task ☆36 · Updated 6 months ago
- ☆43 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 4 months ago
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆77 · Updated last month
- FaithScore: Fine-grained Evaluations of Hallucinations in Large Vision-Language Models ☆30 · Updated 7 months ago
- GitHub repository for "Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas" (ICML 2025) ☆51 · Updated 6 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆83 · Updated 11 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆59 · Updated last year
- VisualToolAgent (VisTA): A Reinforcement Learning Framework for Visual Tool Selection ☆19 · Updated 5 months ago
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆59 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆88 · Updated last year
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- OpenReview Submission Visualization (ICLR 2024/2025) ☆151 · Updated last year