ovguyo / captions-in-VQA
Using image captions with an LLM for zero-shot VQA
☆17 · Updated last year
Alternatives and similar repositories for captions-in-VQA
Users interested in captions-in-VQA are comparing it to the repositories listed below.
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆46 · Updated last year
- Implementation of our paper "Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination" ☆20 · Updated last year
- Official implementation for the CVPR 2023 paper "Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasonin…" ☆10 · Updated last year
- [TIP 2023] The code of "Plug-and-Play Regulators for Image-Text Matching" ☆33 · Updated last year
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆51 · Updated 3 months ago
- [Paper][AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆151 · Updated last year
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- Official implementation for the MM'22 paper ☆13 · Updated 3 years ago
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆41 · Updated last year
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆48 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 3 years ago
- Official code of IdealGPT ☆35 · Updated 2 years ago
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆39 · Updated 2 months ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆73 · Updated 9 months ago
- Implementation of our paper "Unifying Two-Stream Encoders with Transformers for Cross-Modal Retrieval" ☆27 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆90 · Updated last year
- ☆30 · Updated 2 years ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated last year
- ☆29 · Updated 2 years ago
- Benchmark data for "Rethinking Benchmarks for Cross-modal Image-text Retrieval" (SIGIR 2023) ☆25 · Updated 2 years ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- Code for the paper "Unified Text-to-Image Generation and Retrieval" ☆15 · Updated last year
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆51 · Updated last year
- (ICML 2024) Improve Context Understanding in Multimodal Large Language Models via Multimodal Composition Learning ☆27 · Updated last year
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆149 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆58 · Updated last year
- Noise of Web (NoW) is a challenging noisy correspondence learning (NCL) benchmark containing 100K image-text pairs for robust image-text … ☆15 · Updated 2 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆54 · Updated last year
- Visual question answering prompting recipes for large vision-language models ☆28 · Updated last year
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 11 months ago