wanng-ide / VQA_to_multimodal_survey
Update 2020
☆75 · Updated 2 years ago
Alternatives and similar repositories for VQA_to_multimodal_survey:
Users who are interested in VQA_to_multimodal_survey are comparing it to the repositories listed below
- All-In-One VLM: Image + Video + Transfer to Other Languages / Domains (TPAMI 2023) ☆151 · Updated 5 months ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆290 · Updated last year
- [Paper] [AAAI 2024] Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations ☆127 · Updated 7 months ago
- SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation ☆100 · Updated 11 months ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆92 · Updated last year
- Official implementation of "Open-Vocabulary Multi-Label Classification via Multi-Modal Knowledge Transfer". ☆124 · Updated 2 months ago
- Implementation of CVPR 2023 paper "Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering". ☆271 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆195 · Updated 10 months ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆284 · Updated 2 weeks ago
- SotA text-only image/video method (IJCAI 2023) ☆15 · Updated last year
- METER: A Multimodal End-to-end TransformER Framework ☆365 · Updated 2 years ago
- A lightweight codebase for referring expression comprehension and segmentation ☆52 · Updated 2 years ago
- mPLUG-HalOwl: Multimodal Hallucination Evaluation and Mitigating ☆88 · Updated last year
- MixGen: A New Multi-Modal Data Augmentation ☆119 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆89 · Updated last year
- Implementation of our CVPR 2022 paper, Negative-Aware Attention Framework for Image-Text Matching. ☆111 · Updated last year
- Summary of Related Research on Image-Text Matching ☆69 · Updated last year
- CHAIR is a rule-based metric for evaluating object hallucination in caption generation (a minimal computation sketch follows after this list). ☆26 · Updated last year
- [ICLR 2023] This is the code repo for our ICLR'23 paper "Universal Vision-Language Dense Retrieval: Learning A Unified Representation Spa…" ☆50 · Updated 6 months ago
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 2 years ago
- Natural language guided image captioning ☆78 · Updated 11 months ago
- ☆84 · Updated 2 years ago
- ☆89 · Updated 2 years ago
- Dynamic Modality Interaction Modeling for Image-Text Retrieval (SIGIR 2021) ☆67 · Updated 2 years ago
- Code for our EMNLP 2022 paper: "Language Prior Is Not the Only Shortcut: A Benchmark for Shortcut Learning in VQA" ☆38 · Updated 2 years ago
- This is the first released survey paper on hallucinations of large vision-language models (LVLMs). To keep track of this field and contin… ☆60 · Updated 6 months ago
- [CVPR 2024] LION: Empowering Multimodal Large Language Model with Dual-Level Visual Knowledge ☆128 · Updated 6 months ago
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆60 · Updated 2 years ago
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆48 · Updated 2 years ago
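The CHAIR entry above describes a rule-based hallucination metric. Below is a minimal sketch of how a CHAIR-style score can be computed; the synonym table, object sets, and helper names are illustrative assumptions, not the actual code from the listed repository (which matches against the MSCOCO object vocabulary).

```python
# Minimal sketch of a CHAIR-style hallucination score (Rohrbach et al., 2018).
# The synonym table and toy data below are illustrative placeholders, not the
# official implementation from the CHAIR repository.

from typing import Dict, List, Set

# Hypothetical synonym table mapping surface words to canonical object labels.
SYNONYMS: Dict[str, str] = {
    "dog": "dog", "puppy": "dog",
    "cat": "cat", "kitten": "cat",
    "bicycle": "bicycle", "bike": "bicycle",
}

def mentioned_objects(caption: str) -> Set[str]:
    """Rule-based extraction: map caption tokens to canonical object labels."""
    tokens = caption.lower().replace(".", " ").replace(",", " ").split()
    return {SYNONYMS[t] for t in tokens if t in SYNONYMS}

def chair_scores(captions: List[str], gt_objects: List[Set[str]]) -> Dict[str, float]:
    """Per-instance (CHAIRi) and per-sentence (CHAIRs) hallucination rates."""
    total_mentions = hallucinated_mentions = 0
    hallucinated_captions = 0
    for caption, gt in zip(captions, gt_objects):
        mentions = mentioned_objects(caption)
        hallucinated = mentions - gt          # mentioned but not in the image
        total_mentions += len(mentions)
        hallucinated_mentions += len(hallucinated)
        hallucinated_captions += bool(hallucinated)
    return {
        "CHAIRi": hallucinated_mentions / max(total_mentions, 1),
        "CHAIRs": hallucinated_captions / max(len(captions), 1),
    }

if __name__ == "__main__":
    caps = ["a dog rides a bike", "a kitten sleeps on a couch"]
    gts = [{"dog"}, {"cat", "couch"}]         # ground-truth objects per image
    print(chair_scores(caps, gts))            # CHAIRi = 1/3, CHAIRs = 1/2 here
```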