bowen-upenn / Multi-Agent-VQA
[CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering
☆15 · Updated 10 months ago
Alternatives and similar repositories for Multi-Agent-VQA
Users interested in Multi-Agent-VQA are comparing it to the repositories listed below.
- Emerging Pixel Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated 4 months ago
- Counterfactual Reasoning VQA Dataset ☆25 · Updated last year
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆47 · Updated 7 months ago
- ☆23 · Updated last month
- Official Code of IdealGPT ☆35 · Updated last year
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆21 · Updated 7 months ago
- ☆85 · Updated 7 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension ☆70 · Updated last year
- Benchmarking Video-LLMs on Video Spatio-Temporal Reasoning ☆27 · Updated last week
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆87 · Updated last year
- [ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆67 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data (Accepted by CVPR 2024) ☆47 · Updated last year
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 9 months ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆32 · Updated 2 months ago
- ☆99 · Updated last year
- Preference Learning for LLaVA ☆48 · Updated 9 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆57 · Updated 9 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 2 months ago
- ☆12 · Updated 7 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 8 months ago
- A Comprehensive Benchmark for Robust Multi-image Understanding ☆12 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆85 · Updated 10 months ago
- ☆45 · Updated 7 months ago
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆44 · Updated last year
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆91 · Updated last year
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆46 · Updated 9 months ago
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆41 · Updated 3 weeks ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆77 · Updated last year
- [ECCV 2024] Learning Video Context as Interleaved Multimodal Sequences ☆40 · Updated 5 months ago