ovguyo / captions-in-VQA
Using image captions with an LLM for zero-shot VQA (a minimal sketch of the idea follows below)
☆18 · Updated last year
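The approach is a two-step pipeline: caption the image with an off-the-shelf captioning model, then pass the caption and the question to a text-only LLM. Below is a minimal sketch of that pattern, assuming a BLIP captioner from Hugging Face transformers; the model choice, prompt template, and file name are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch of caption-based zero-shot VQA (illustrative; the repo's
# actual models and prompts may differ).
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Step 1: caption the image with an off-the-shelf captioner (BLIP assumed).
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def caption_image(path: str) -> str:
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True)

# Step 2: build a text-only prompt from the caption and the question,
# then feed it to any LLM; the image itself is never shown to the LLM.
def build_vqa_prompt(caption: str, question: str) -> str:
    return (
        f"Image description: {caption}\n"
        f"Question: {question}\n"
        "Answer with a single word or short phrase:"
    )

# Usage ("example.jpg" is a placeholder path):
prompt = build_vqa_prompt(caption_image("example.jpg"), "What color is the car?")
```

Since the LLM only ever sees the caption text, answer quality is bounded by how much question-relevant detail the caption captures, which is what several of the prompting-recipe and caption-enrichment repositories below try to improve.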
Alternatives and similar repositories for captions-in-VQA
Users interested in captions-in-VQA are comparing it to the repositories listed below.
- [NeurIPS 2023] DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models ☆44 · Updated last year
- VQACL: A Novel Visual Question Answering Continual Learning Setting (CVPR'23) ☆40 · Updated last year
- [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆148 · Updated last year
- MMICL, a state-of-the-art VLM with in-context learning (ICL) ability, from PKU ☆49 · Updated last month
- Recent Advances in Visual Dialog ☆30 · Updated 3 years ago
- Implementation of our paper "Your Negative May Not Be True Negative: Boosting Image-Text Matching with False Negative Elimination" ☆19 · Updated last year
- Official implementation for the CVPR 2023 paper "Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasoning" ☆10 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 2 years ago
- Official code of IdealGPT ☆35 · Updated last year
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆55 · Updated last year
- [TIP 2023] Code for "Plug-and-Play Regulators for Image-Text Matching" ☆33 · Updated last year
- [CVPR 2025] Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering ☆44 · Updated last month
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆149 · Updated last year
- Learning Situation Hyper-Graphs for Video Question Answering ☆22 · Updated last year
- Implementation of our paper "Unifying Two-Stream Encoders with Transformers for Cross-Modal Retrieval" ☆25 · Updated last year
- [ACM MM 2024] Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives ☆36 · Updated 2 months ago
- Visual question answering prompting recipes for large vision-language models ☆27 · Updated 11 months ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- Official repo for the FoodieQA paper (EMNLP 2024) ☆16 · Updated 2 months ago
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆85 · Updated last year
- [ICCV 2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer ☆37 · Updated last year
- [CVPR 2024] Official code for the paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆135 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 2 months ago
- An easy-to-use hallucination detection framework for LLMs ☆62 · Updated last year
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) ☆32 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 7 months ago
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆55 · Updated 10 months ago
- Official implementation for the MM'22 paper ☆13 · Updated 3 years ago
- Source code of our AAAI 2024 paper "Cross-Modal and Uni-Modal Soft-Label Alignment for Image-Text Retrieval" ☆46 · Updated last year
- [EMNLP 2023] InfoSeek: A New VQA Benchmark Focusing on Visual Info-Seeking Questions ☆25 · Updated last year