lichengunc / refer-parser2
Referring Expression Parser
☆27 · Updated 7 years ago
Alternatives and similar repositories for refer-parser2:
Users interested in refer-parser2 are comparing it to the repositories listed below.
- Torch implementation of Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension ☆34 · Updated 7 years ago
- PyTorch code for Reasoning Visual Dialogs with Structural and Partial Observations ☆42 · Updated 3 years ago
- Inferring and Executing Programs for Visual Reasoning ☆21 · Updated 6 years ago
- Visual Question Reasoning on General Dependency Tree ☆30 · Updated 6 years ago
- ☆63 · Updated 3 years ago
- Adaptive Reconstruction Network for Weakly Supervised Referring Expression Grounding ☆34 · Updated 5 years ago
- Code release for Learning to Assemble Neural Module Tree Networks for Visual Grounding (ICCV 2019) ☆39 · Updated 5 years ago
- An image-oriented evaluation tool for image captioning systems (EMNLP-IJCNLP 2019) ☆38 · Updated 4 years ago
- Code for the CVPR 2019 paper "Improving Referring Expression Grounding with Cross-modal Attention-guided Erasing" ☆33 · Updated 5 years ago
- Official implementation of the ICCV 2019 oral paper Zero-Shot Grounding of Objects from Natural Language Queries (https://arxiv.org/abs/1908.071…) ☆71 · Updated 4 years ago
- A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning ☆26 · Updated 3 years ago
- Implementation for our paper "Conditional Image-Text Embedding Networks" ☆39 · Updated 5 years ago
- This repository provides the dataset introduced by our WSSTG paper ☆12 · Updated 5 years ago
- Code release for Hu et al., "Explainable Neural Computation via Stack Neural Module Networks" (ECCV 2018) ☆71 · Updated 5 years ago
- Code release for Hu et al., "Modeling Relationships in Referential Expressions with Compositional Modular Networks" (CVPR 2017) ☆67 · Updated 6 years ago
- This is the repo for Multi-level textual grounding ☆33 · Updated 4 years ago
- Code for "Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning" ☆20 · Updated 8 years ago
- Code for the Visual Question Answering (VQA) part of the CVPR 2019 oral paper "Learning to Compose Dynamic Tree Structures for Visual Contex…" ☆34 · Updated 6 years ago
- Unpaired Image Captioning ☆35 · Updated 4 years ago
- VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation ☆22 · Updated 7 years ago
- A video retrieval dataset (How2R) and a video QA dataset (How2QA) ☆24 · Updated 4 years ago
- [CVPR 2020] Video Object Grounding using Semantic Roles in Language Description (https://arxiv.org/abs/2003.10606) ☆67 · Updated 4 years ago
- Contrastive Learning for Image Captioning ☆50 · Updated 7 years ago
- Code for the CVPR 2019 paper "Recursive Visual Attention in Visual Dialog" ☆64 · Updated 2 years ago
- Pre-trained V+L Data Preparation ☆46 · Updated 4 years ago
- PyTorch code for "Learning to Generate Grounded Visual Captions without Localization Supervision" ☆44 · Updated 4 years ago
- Scene Graph Parsing as Dependency Parsing ☆41 · Updated 5 years ago
- PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs" ☆94 · Updated 6 years ago
- A paper list of visual semantic embeddings and text-image retrieval ☆41 · Updated 4 years ago
- Referring Expression Object Segmentation with Caption-Aware Consistency (BMVC 2019) ☆31 · Updated 3 years ago