nocaps-org / updown-baseline
Baseline model for nocaps benchmark, ICCV 2019 paper "nocaps: novel object captioning at scale".
☆ 76 · Updated last year
Alternatives and similar repositories for updown-baseline
Users interested in updown-baseline are comparing it to the repositories listed below.
- Learning to Evaluate Image Captioning (CVPR 2018) ☆ 83 · Updated 7 years ago
- Code for CVPR'19 "Recursive Visual Attention in Visual Dialog" ☆ 64 · Updated 2 years ago
- Pre-trained V+L Data Preparation ☆ 46 · Updated 5 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018) ☆ 109 · Updated 5 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆ 206 · Updated 6 years ago
- Implementation for the AAAI 2019 paper "Large-Scale Visual Relationship Understanding" ☆ 145 · Updated 5 years ago
- Python code for CIDEr: Consensus-based Image Description Evaluation ☆ 32 · Updated 6 years ago
- Code release for Hu et al., "Language-Conditioned Graph Networks for Relational Reasoning" (ICCV 2019) ☆ 92 · Updated 5 years ago
- Transformer-based image captioning ☆ 156 · Updated 6 years ago
- PyTorch library for visual-semantic tasks ☆ 29 · Updated 2 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" ☆ 66 · Updated 6 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering ☆ 179 · Updated 2 years ago
- Implementation of "Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space" ☆ 58 · Updated 7 years ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆ 195 · Updated 5 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆ 89 · Updated 5 years ago
- PyTorch implementation of "Explainable and Explicit Visual Reasoning over Scene Graphs" ☆ 92 · Updated 6 years ago
- Show, Edit and Tell: A Framework for Editing Image Captions (CVPR 2020) ☆ 80 · Updated 5 years ago
- PyTorch code for "Learning to Generate Grounded Visual Captions without Localization Supervision" ☆ 44 · Updated 5 years ago
- Torch implementation of Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension ☆ 34 · Updated 7 years ago
- Code for Unsupervised Image Captioning ☆ 218 · Updated 2 years ago
- Code for the paper "Learning Conditioned Graph Structures for Interpretable Visual Question Answering" ☆ 149 · Updated 6 years ago
- Data for the ACL 2019 paper "Expressing Visual Relationships via Language" ☆ 62 · Updated 4 years ago
- Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering ☆ 106 · Updated 5 years ago
- Semantic Propositional Image Caption Evaluation (SPICE) ☆ 141 · Updated 2 years ago
- A Dataset for Grounded Video Description ☆ 162 · Updated 3 years ago
- Feature extraction and visualization scripts for nocaps baselines ☆ 18 · Updated 4 years ago
- Stack-Captioning: Coarse-to-Fine Learning for Image Captioning ☆ 62 · Updated 7 years ago
- Evaluation code for Dense-Captioning Events in Videos ☆ 128 · Updated 6 years ago
- Code for CVPR'18 "Grounding Referring Expressions in Images by Variational Context" ☆ 30 · Updated 7 years ago