HAWLYQ / Qc-TextCap
☆16 · Updated 3 years ago
Alternatives and similar repositories for Qc-TextCap
Users interested in Qc-TextCap are comparing it to the libraries listed below.
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 3 years ago
- VisualMRC: Machine Reading Comprehension on Document Images (AAAI 2021) ☆56 · Updated 5 months ago
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- Natural language guided image captioning ☆85 · Updated last year
- ☆188 · Updated last year
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆98 · Updated 2 years ago
- CVPR 2021 official PyTorch code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- Controllable image captioning model with unsupervised modes ☆21 · Updated 2 years ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆38 · Updated 9 months ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆91 · Updated 2 years ago
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… ☆54 · Updated 10 months ago
- Code for the WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated 2 years ago
- This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated 2 years ago
- The implementation of the EMNLP 2020 paper "Learning to Contrast the Counterfactual Samples for Robust Visual Question Answering" ☆15 · Updated 4 years ago
- ☆25 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- ☆106 · Updated 3 years ago
- ☆45 · Updated 3 months ago
- Implementation for MAF: Multimodal Alignment Framework ☆46 · Updated 4 years ago
- TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral) ☆72 · Updated 2 years ago
- The code of the IJCAI 2022 paper "Declaration-based Prompt Tuning for Visual Question Answering" ☆20 · Updated 3 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 3 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆125 · Updated 3 years ago
- CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations ☆29 · Updated last year
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆25 · Updated 2 years ago
- CVPR 2022 (Oral) PyTorch code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020 ☆64 · Updated 4 years ago
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆85 · Updated 3 years ago
- A curated list of multimodal captioning related research (including image captioning, video captioning, and text captioning) ☆111 · Updated 3 years ago
- PyTorch code for "Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners" ☆115 · Updated 3 years ago