shubhamagarwal92 / visdial_conv
This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do we really need it?"
☆34 · Updated 2 years ago
Alternatives and similar repositories for visdial_conv
Users interested in visdial_conv are comparing it to the repositories listed below.
- ☆18 · Updated last year
- ✨ Official PyTorch Implementation for EMNLP'19 Paper, "Dual Attention Networks for Visual Reference Resolution in Visual Dialog" ☆45 · Updated 2 years ago
- Code for CVPR'19 "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- Dataset and Source code for EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" ☆26 · Updated 3 years ago
- ☆44 · Updated last week
- ☆15 · Updated 4 years ago
- Code for WACV 2021 Paper "Meta Module Network for Compositional Visual Reasoning" ☆43 · Updated 4 years ago
- Code for the paper BiST: Bi-directional Spatio-Temporal Reasoning for Video-Grounded Dialogues (EMNLP20) ☆11 · Updated last week
- Implementation for CVPR 2020 Paper "Two Causal Principles for Improving Visual Dialog" ☆32 · Updated 2 years ago
- Counterfactual Samples Synthesizing for Robust VQA ☆78 · Updated 2 years ago
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Updated 5 years ago
- Code for ViLBERTScore in EMNLP Eval4NLP ☆18 · Updated 2 years ago
- Code for ACL 2020 paper "Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA." Hyounghun Kim, Zineng T… ☆34 · Updated 5 years ago
- PyTorch implementation of "Debiased Visual Question Answering from Feature and Sample Perspectives" (NeurIPS 2021) ☆25 · Updated 2 years ago
- A collection of papers about VQA-CP datasets and their results ☆38 · Updated 3 years ago
- ☆13 · Updated 3 years ago
- CVPR 2021 Official Pytorch Code for UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training ☆34 · Updated 3 years ago
- [EMNLP 2020] What is More Likely to Happen Next? Video-and-Language Future Event Prediction ☆49 · Updated 2 years ago
- A video retrieval dataset How2R and a video QA dataset How2QA ☆24 · Updated 4 years ago
- An image-oriented evaluation tool for image captioning systems (EMNLP-IJCNLP 2019) ☆38 · Updated 5 years ago
- Implementation for the paper "Unified Multimodal Model with Unlikelihood Training for Visual Dialog" ☆13 · Updated 2 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated 4 years ago
- [CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias ☆121 · Updated 3 years ago
- Implementation for "Large-scale Pretraining for Visual Dialog" https://arxiv.org/abs/1912.02379 ☆97 · Updated 5 years ago
- Code for NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Updated 5 years ago
- The source code of ACL 2020 paper: "Cross-Modality Relevance for Reasoning on Language and Vision" ☆27 · Updated 4 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 3 years ago
- Video-aided Unsupervised Grammar Induction, NAACL'21 [best long paper] ☆40 · Updated 2 years ago
- Code for the CoNLL 2019 paper "Compositional Generalization in Image Captioning" by Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Ar… ☆26 · Updated 5 years ago
- ☆16 · Updated 2 years ago