ovguyo / captions-in-VQA
Using image captions with an LLM for zero-shot VQA
☆18Updated last year
Alternatives and similar repositories for captions-in-VQA
Users interested in captions-in-VQA are comparing it to the libraries listed below:
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models☆44Updated last year
- [Paper][AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations☆146Updated last year
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23)☆40Updated last year
- Official Implementation for CVPR 2023 paper "Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasoning"☆10Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension☆54Updated last year
- Recent Advances in Visual Dialog☆30Updated 2 years ago
- Colorful Prompt Tuning for Pre-trained Vision-Language Models☆49Updated 2 years ago
- The efficient tuning method for VLMs☆80Updated last year
- [TIP2023] The code of “Plug-and-Play Regulators for Image-Text Matching”☆33Updated last year
- Implementation of our paper, 'Unifying Two-Stream Encoders with Transformers for Cross-Modal Retrieval.'☆25Updated last year
- 【ICLR 2024, Spotlight】Sentence-level Prompts Benefit Composed Image Retrieval☆85Updated last year
- [CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering☆15Updated 10 months ago
- [ICCV2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer☆37Updated last year
- Official implementation for the MM'22 paper.☆13Updated 3 years ago
- [COLING'25] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding☆42Updated 8 months ago
- Implementation of our paper, "Your Negative May Not Be True Negative: Boosting Image-Text Matching with False Negative Elimination."☆19Updated last year
- ☆27Updated 2 years ago
- ☆19Updated last year
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning☆22Updated 11 months ago
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval"☆46Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024)☆74Updated 6 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability, from PKU☆49Updated 3 weeks ago
- (CVPR2024) MeaCap: Memory-Augmented Zero-shot Image Captioning☆49Updated 11 months ago
- Learning Situation Hyper-Graphs for Video Question Answering☆22Updated last year
- ☆16Updated 3 years ago
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models☆147Updated last year
- ☆12Updated 7 months ago
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024)☆30Updated last year
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023)☆25Updated 2 years ago
- Code for paper: Unified Text-to-Image Generation and Retrieval☆15Updated last year