RachanaJayaram / Cross-Attention-VizWiz-VQA
A natural application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset is built from images and questions collected by members of the visually impaired community and, as such, highlights some of the challenges presented by this particular use case.
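VizWiz-style VQA datasets collect roughly ten crowdsourced answers per question, and systems are typically scored with the standard VQA soft-accuracy metric: a predicted answer earns min(matching annotators / 3, 1.0). A minimal sketch of that metric, with illustrative (not dataset-sourced) answers:

```python
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """Soft accuracy: credit grows with annotator agreement, capped at 1.0."""
    matches = sum(a.strip().lower() == prediction.strip().lower()
                  for a in human_answers)
    return min(matches / 3.0, 1.0)

# Ten crowd answers, as collected per question in VizWiz-style annotations
# (sample values are made up for illustration).
answers = ["coca cola", "coke", "coca cola", "soda",
           "coca cola", "coke", "coca cola", "coca cola",
           "unanswerable", "coca cola"]

print(vqa_accuracy("coca cola", answers))  # 6 matches -> 1.0
print(vqa_accuracy("soda", answers))       # 1 match  -> ~0.33
```

Note the `"unanswerable"` entry: a distinctive property of VizWiz is that many questions cannot be answered from the blurry or mis-framed photo, so annotators may legitimately answer that way.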
☆15 · Updated last year
Alternatives and similar repositories for Cross-Attention-VizWiz-VQA:
Users interested in Cross-Attention-VizWiz-VQA are comparing it to the repositories listed below.
- ☆38 · Updated last year
- ☆66 · Updated 2 years ago
- Microsoft COCO Caption Evaluation Tool - Python 3 · ☆33 · Updated 5 years ago
- Code of Dense Relational Captioning · ☆68 · Updated last year
- Implementation for MAF: Multimodal Alignment Framework · ☆43 · Updated 4 years ago
- Implementation of the paper "Improving Image Captioning with Better Use of Caption" · ☆32 · Updated 4 years ago
- Source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" · ☆26 · Updated 3 years ago
- Unpaired Image Captioning · ☆35 · Updated 3 years ago
- ROCK model for Knowledge-Based VQA in Videos · ☆30 · Updated 4 years ago
- Code for "Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations" (NeurIPS 2019) · ☆65 · Updated 4 years ago
- A reading list of papers on Visual Question Answering · ☆32 · Updated 2 years ago
- An image-oriented evaluation tool for image captioning systems (EMNLP-IJCNLP 2019) · ☆36 · Updated 4 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles · ☆36 · Updated 2 years ago
- Controllable image captioning model with unsupervised modes · ☆21 · Updated last year
- Compact Trilinear Interaction for Visual Question Answering (ICCV 2019) · ☆38 · Updated 2 years ago
- Code for the NeurIPS 2019 paper "Adaptively Aligned Image Captioning via Adaptive Attention Time" · ☆49 · Updated 5 years ago
- A PyTorch implementation of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" for image captioning · ☆47 · Updated 3 years ago
- A length-controllable and non-autoregressive image captioning model · ☆68 · Updated 3 years ago
- ☆44 · Updated 2 years ago
- Adversarial Inference for Multi-Sentence Video Descriptions (CVPR 2019) · ☆34 · Updated 5 years ago
- Code for the paper "MemCap: Memorizing Style Knowledge for Image Captioning" · ☆11 · Updated 4 years ago
- 🥉 Codalab-Microsoft-COCO-Image-Captioning-Challenge 3rd place solution (06.30.21) · ☆23 · Updated 2 years ago
- A PyTorch implementation of the paper "Multimodal Transformer with Multiview Visual Representation for Image Captioning" · ☆24 · Updated 4 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration · ☆56 · Updated last year
- Show, Edit and Tell: A Framework for Editing Image Captions (CVPR 2020) · ☆81 · Updated 4 years ago
- Code for the ACL 2020 paper "Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA", Hyounghun Kim, Zineng T… · ☆34 · Updated 4 years ago
- Position Focused Attention Network for Image-Text Matching · ☆68 · Updated 5 years ago
- Code for ViLBERTScore (EMNLP Eval4NLP) · ☆18 · Updated 2 years ago
- An updated PyTorch implementation of hengyuan-hu's version for "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question… · ☆36 · Updated 2 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering · ☆92 · Updated last year