nocaps-org / image-feature-extractors
Feature extraction and visualization scripts for nocaps baselines.
☆18 · Updated 4 years ago
Alternatives and similar repositories for image-feature-extractors
Users interested in image-feature-extractors are comparing it to the repositories listed below.
- ☆54 · Updated 5 years ago
- Baseline model for the nocaps benchmark, ICCV 2019 paper "nocaps: novel object captioning at scale". ☆76 · Updated 2 years ago
- Pre-trained V+L Data Preparation. ☆46 · Updated 5 years ago
- Code for CVPR'19 "Recursive Visual Attention in Visual Dialog". ☆64 · Updated 2 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering. ☆179 · Updated 3 years ago
- GuessWhat?! Baselines. ☆74 · Updated 3 years ago
- Scene Graph Parsing as Dependency Parsing. ☆41 · Updated 6 years ago
- Official implementation of the ICCV19 oral paper "Zero-Shot Grounding of Objects from Natural Language Queries" (https://arxiv.org/abs/1908.071…). ☆71 · Updated 5 years ago
- Torch implementation of Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension. ☆34 · Updated 7 years ago
- Data of the ACL 2019 paper "Expressing Visual Relationships via Language". ☆62 · Updated 5 years ago
- Code release for Hu et al., "Explainable Neural Computation via Stack Neural Module Networks", ECCV 2018. ☆71 · Updated 5 years ago
- Dense video captioning in PyTorch. ☆41 · Updated 6 years ago
- Referring Expression Parser. ☆27 · Updated 7 years ago
- PyTorch code for "Reasoning Visual Dialogs with Structural and Partial Observations". ☆42 · Updated 4 years ago
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning", ICCV 2019. ☆92 · Updated 6 years ago
- Code for CVPR'18 "Grounding Referring Expressions in Images by Variational Context". ☆30 · Updated 7 years ago
- Project page for "Visual Grounding in Video for Unsupervised Word Translation", CVPR 2020. ☆42 · Updated 5 years ago
- Rethinking Diversified and Discriminative Proposal Generation for Visual Grounding. ☆23 · Updated 7 years ago
- Multi-sense word embeddings from visual co-occurrences. ☆25 · Updated 6 years ago
- Data and code for the CVPR 2020 paper "VIOLIN: A Large-Scale Dataset for Video-and-Language Inference". ☆162 · Updated 5 years ago
- PyTorch code for "Learning to Generate Grounded Visual Captions without Localization Supervision". ☆46 · Updated 5 years ago
- [ACL 2019] Visually Grounded Neural Syntax Acquisition. ☆90 · Updated last year
- [ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering. ☆129 · Updated 3 years ago
- Visual Question Reasoning on General Dependency Tree. ☆30 · Updated 7 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018). ☆109 · Updated 5 years ago
- Repository to generate CLEVR-Dialog: a diagnostic dataset for Visual Dialog. ☆49 · Updated 5 years ago
- Starter code for the VMT task and challenge. ☆51 · Updated 5 years ago
- VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation. ☆23 · Updated 8 years ago
- A simple but well-performing "single-hop" visual attention model for the GQA dataset. ☆20 · Updated 6 years ago
- Use transformer for captioning. ☆156 · Updated 6 years ago