bowen-upenn / Multi-Agent-VQA
[CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering
☆20 · Updated last year
Alternatives and similar repositories for Multi-Agent-VQA
Users interested in Multi-Agent-VQA are comparing it to the repositories listed below.
- Emergent Visual Grounding in Large Multimodal Models Without Grounding Supervision ☆41 · Updated last month
- An efficient tuning method for VLMs ☆80 · Updated last year
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆49 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆89 · Updated last year
- [ICML 2024] Repo for the paper "Evaluating and Analyzing Relationship Hallucinations in Large Vision-Language Models" ☆22 · Updated 11 months ago
- ☆24 · Updated 5 months ago
- Counterfactual Reasoning VQA Dataset ☆27 · Updated 2 years ago
- [EMNLP 2025 Oral] ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆62 · Updated 2 weeks ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆29 · Updated last year
- Official Code of IdealGPT ☆35 · Updated 2 years ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆41 · Updated 7 months ago
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆46 · Updated 2 years ago
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding"