HAWLYQ / Qc-TextCap
☆16 · Updated 3 years ago
Alternatives and similar repositories for Qc-TextCap:
Users interested in Qc-TextCap are comparing it to the repositories listed below.
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answer… ☆53 · Updated 6 months ago
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 3 years ago
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- Controllable image captioning model with unsupervised modes ☆21 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020. ☆64 · Updated 3 years ago
- Source code and data used in the papers ViQuAE (Lerner et al., SIGIR'22), Multimodal ICT (Lerner et al., ECIR'23) and Cross-modal Retriev… ☆36 · Updated 4 months ago
- TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral) ☆72 · Updated last year
- The code of the IJCAI 2022 paper "Declaration-based Prompt Tuning for Visual Question Answering" ☆19 · Updated 2 years ago
- The imdb files with SBD-Trans OCR for the TextVQA dataset. ☆11 · Updated 3 years ago
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆25 · Updated 2 years ago
- Cross-View Language Modeling: Towards Unified Cross-Lingual Cross-Modal Pre-training (ACL 2023) ☆90 · Updated last year
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆86 · Updated last year
- ☆37 · Updated last year
- Natural-language-guided image captioning ☆82 · Updated last year
- The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" ☆27 · Updated 4 years ago
- Weakly Supervised Grounding for VQA in Vision-Language Transformers ☆16 · Updated 2 years ago
- Used in the M4C feature extraction script: https://github.com/facebookresearch/mmf/blob/project/m4c/projects/M4C/scripts/extract_ocr_frcn_fea… ☆13 · Updated 5 years ago
- CVPR 2021 Official PyTorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" ☆13 · Updated last year
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆121 · Updated 3 years ago
- A reading list of papers about Visual Question Answering. ☆32 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- A modular framework for Visual Question Answering research by the FAIR A-STAR team ☆45 · Updated 3 years ago
- CVPR 2022 (Oral) PyTorch Code for Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment ☆22 · Updated 3 years ago
- Source code for the EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- Code for the WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated 2 years ago
- Microsoft COCO Caption Evaluation Tool - Python 3 ☆33 · Updated 5 years ago