volkancirik / groundnet
Repository for the AAAI 2018 paper "Using Syntax to Ground Referring Expressions in Natural Images"
☆13 · Updated 5 years ago
Alternatives and similar repositories for groundnet
Users who are interested in groundnet are comparing it to the repositories listed below.
- Implementation for "Large-scale Pretraining for Visual Dialog" (https://arxiv.org/abs/1912.02379) ☆97 · Updated 5 years ago
- Code for CVPR'19 "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- Scene Graph Parsing as Dependency Parsing ☆41 · Updated 6 years ago
- Code for ACL 2020 paper "Dense-Caption Matching and Frame-Selection Gating for Temporal Localization in VideoQA." Hyounghun Kim, Zineng T… ☆34 · Updated 5 years ago
- ✨ Official PyTorch Implementation for EMNLP'19 Paper, "Dual Attention Networks for Visual Reference Resolution in Visual Dialog" ☆45 · Updated 2 years ago
- Visual Coreference Resolution in Visual Dialog using Neural Module Networks ☆57 · Updated 4 years ago
- Code for the paper "Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems" (ACL19) ☆101 · Updated 3 years ago
- This repository contains code used in our ACL'20 paper "History for Visual Dialog: Do we really need it?" ☆34 · Updated 2 years ago
- Code for the CoNLL 2019 paper "Compositional Generalization in Image Captioning" by Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Ar… ☆26 · Updated 5 years ago
- Measure the diversity of image descriptions; repository for our COLING 2018 paper. ☆13 · Updated 6 years ago
- VIST storytelling evaluation tool ☆21 · Updated 2 years ago
- Pre-trained V+L Data Preparation ☆46 · Updated 5 years ago
- ☆53 · Updated 6 years ago
- Shows visual grounding methods can be right for the wrong reasons! (ACL 2020) ☆23 · Updated 5 years ago
- Information Maximizing Visual Question Generation ☆67 · Updated 2 years ago
- DSTC8-AVSD: Sentence generation task for Audio Visual Scene-aware Dialog ☆14 · Updated 4 years ago
- A video retrieval dataset (How2R) and a video QA dataset (How2QA) ☆24 · Updated 5 years ago
- Dataset and source code for the EMNLP 2019 paper "What You See is What You Get: Visual Pronoun Coreference Resolution in Dialogues" ☆26 · Updated 4 years ago
- ☆27 · Updated 5 years ago
- Dataset for Bilingual VLN ☆11 · Updated 5 years ago
- Support, annotation, evaluation, and baseline models for the imSitu dataset. ☆60 · Updated 5 years ago
- BottomUpTopDown VQA model with question-type debiasing ☆22 · Updated 6 years ago
- Implementation of "Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning" (https://arxiv.… ☆26 · Updated 7 years ago
- Code for the ACL paper "No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling" ☆137 · Updated 4 years ago
- Torch implementation of Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension ☆34 · Updated 7 years ago
- Semantic Propositional Image Caption Evaluation ☆145 · Updated 2 years ago
- [ACL 2019] Visually Grounded Neural Syntax Acquisition ☆90 · Updated last year
- ☆22 · Updated 7 years ago
- Code release for Learning to Assemble Neural Module Tree Networks for Visual Grounding (ICCV 2019) ☆39 · Updated 6 years ago
- Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering" ☆41 · Updated 6 years ago