Wangt-CN / VQG-GCN
A GCN-based visual question generation model
☆13 · Updated 6 years ago
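For orientation, the sketch below shows a single graph-convolution layer of the kind a GCN-based model such as this one builds on. It is an illustrative assumption only: the class, its names, and the shapes are not taken from the VQG-GCN code.

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_norm @ H @ W).
    Illustrative sketch only; not the VQG-GCN repository's actual layer."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (batch, num_nodes, in_dim); adj: (batch, num_nodes, num_nodes)
        # Row-normalize the adjacency so each node averages over its neighbours.
        adj_norm = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.linear(torch.bmm(adj_norm, node_feats)))


# Hypothetical shapes: 36 image-region nodes with 2048-d features per image.
layer = GraphConvLayer(2048, 512)
feats = torch.randn(2, 36, 2048)
adj = torch.ones(2, 36, 36)   # fully connected graph, for illustration only
out = layer(feats, adj)       # -> (2, 36, 512)
```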
Alternatives and similar repositories for VQG-GCN
Users interested in VQG-GCN are comparing it to the repositories listed below
- The source code of the ACL 2020 paper "Cross-Modality Relevance for Reasoning on Language and Vision" ☆27 · Updated 4 years ago
- ☆77 · Updated 3 years ago
- Position Focused Attention Network for Image-Text Matching ☆69 · Updated 6 years ago
- Implementation of the paper "Improving Image Captioning with Better Use of Caption" ☆33 · Updated 5 years ago
- Code for our IJCAI 2020 paper "Overcoming Language Priors with Self-supervised Learning for Visual Question Answering" ☆52 · Updated 5 years ago
- The official code for the paper "Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking", ACM Multimedia 2019 Oral ☆68 · Updated 6 years ago
- Code for the paper "MemCap: Memorizing Style Knowledge for Image Captioning" ☆11 · Updated 5 years ago
- Bridging by Word: Image-Grounded Vocabulary Construction for Visual Captioning, ACL 2019 ☆17 · Updated 6 years ago
- Code for "Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations" (NeurIPS 2019) ☆65 · Updated 5 years ago
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ☆81 · Updated 5 years ago
- Code for the journal paper "Learning Dual Semantic Relations with Graph Attention for Image-Text Matching", TCSVT, 2020 ☆73 · Updated 3 years ago
- Code for the paper "Adaptively Aligned Image Captioning via Adaptive Attention Time", NeurIPS 2019 ☆51 · Updated 5 years ago
- CAMP: Cross-Modal Adaptive Message Passing for Text-Image Retrieval ☆127 · Updated 5 years ago
- Implementation of our CVPR 2020 paper "Graph Structured Network for Image-Text Matching" ☆169 · Updated 5 years ago
- Learning Fragment Self-Attention Embeddings for Image-Text Matching, ACM MM 2019 ☆41 · Updated 6 years ago
- Official code and dataset link for "VMSMO: Learning to Generate Multimodal Summary for Video-based News Articles" ☆36 · Updated 4 years ago
- Reproduces the results of Adversarial Cross-Modal Retrieval (ACMR) ☆23 · Updated 5 years ago
- A PyTorch implementation of the paper "Multimodal Transformer with Multiview Visual Representation for Image Captioning" ☆25 · Updated 5 years ago
- Re-implementation of "R-VQA: Learning Visual Relation Facts with Semantic Attention for Visual Question Answering" ☆12 · Updated 6 years ago
- Adversarial Inference for Multi-Sentence Video Descriptions (CVPR 2019) ☆34 · Updated 6 years ago
- Hierarchical Question-Image Co-Attention for Visual Question Answering ☆24 · Updated 6 years ago
- Compact Trilinear Interaction for Visual Question Answering (ICCV 2019) ☆38 · Updated 3 years ago
- Code for our CVPR 2020 paper "IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval" ☆96 · Updated 5 years ago
- Research code for the ICCV 2019 paper "Relation-aware Graph Attention Network for Visual Question Answering" ☆187 · Updated 4 years ago
- A PyTorch implementation of the CVPR 2020 paper "Multi-Modal Graph Neural Network for Joint Reasoning on Vision and Scene Text" ☆51 · Updated 2 years ago
- ROCK model for Knowledge-Based VQA in Videos ☆31 · Updated 5 years ago
- Reading list for multimodal sequence learning ☆14 · Updated 2 years ago
- Research code for the NeurIPS 2020 Spotlight paper "Large-Scale Adversarial Training for Vision-and-Language Representation Learning": LXMERT… ☆21 · Updated 5 years ago
- Code and resources for the Transformer Encoder Reasoning and Alignment Network (TERAN), accepted for publication in ACM Transactions on M… ☆74 · Updated last year
- ☆15 · Updated 5 years ago