lil-lab / nlvr
Cornell NLVR and NLVR2 are natural language grounding datasets. Each example shows a visual input and a sentence describing it, and is annotated with the truth-value of the sentence.
☆257 · Updated 2 years ago
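Since every example pairs a sentence with a binary truth value, the annotations can be read with a few lines of Python. A minimal sketch, assuming the line-delimited JSON layout of the data release (one example per line with `sentence` and `label` fields; exact file paths differ between NLVR and NLVR2, so the path below is illustrative):

```python
import json

def load_examples(path):
    """Read NLVR annotations from a line-delimited JSON file.

    Assumes each line holds one example with "sentence" and "label"
    fields, where "label" is the string "true" or "false" (the layout
    used by the dataset's train/dev/test JSON files).
    """
    examples = []
    with open(path) as f:
        for line in f:
            ex = json.loads(line)
            examples.append((ex["sentence"], ex["label"].lower() == "true"))
    return examples

# Illustrative path; adjust to where the release's JSON files live:
# pairs = load_examples("nlvr/train/train.json")
```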
Related projects
Alternatives and complementary repositories for nlvr
- [CVPR 2017] Torch code for Visual Dialog ☆228 · Updated 5 years ago
- ☆365 · Updated 3 years ago
- PyTorch implementation of the winner of the VQA Challenge Workshop at CVPR'17 ☆164 · Updated 5 years ago
- PyTorch code for the EMNLP 2020 paper "Vokenization: Improving Language Understanding with Visual Supervision" ☆186 · Updated 3 years ago
- A Python wrapper for the Visual Genome API ☆357 · Updated last year
- Train embodied agents that can answer questions in environments ☆298 · Updated last year
- GuessWhat?! Baselines ☆73 · Updated 2 years ago
- Toolkit for the Visual7W visual question answering dataset ☆76 · Updated 5 years ago
- Code release for Hu et al., "Learning to Reason: End-to-End Module Networks for Visual Question Answering", ICCV 2017 ☆271 · Updated 4 years ago
- Semantic Propositional Image Caption Evaluation ☆137 · Updated last year
- This repository provides code for reproducing experiments of the paper Talk The Walk: Navigating New York City Through Grounded Dialogue … ☆112 · Updated 3 years ago
- PyTorch code for Learning Cooperative Visual Dialog Agents using Deep Reinforcement Learning ☆170 · Updated 6 years ago
- Cornell Touchdown natural language navigation and spatial reasoning dataset. ☆95 · Updated 4 years ago
- Implementation for the paper "Compositional Attention Networks for Machine Reasoning" (Hudson and Manning, ICLR 2018) ☆497 · Updated 3 years ago
- Dataset and starting code for the visual entailment task ☆108 · Updated 2 years ago
- Neural Module Network for VQA in PyTorch ☆108 · Updated 6 years ago
- Starter code in PyTorch for the Visual Dialog challenge ☆192 · Updated last year
- Visual Coreference Resolution in Visual Dialog using Neural Module Networks ☆57 · Updated 3 years ago
- ☆349 · Updated 6 years ago
- Visual Q&A reading list ☆435 · Updated 6 years ago
- [TACL 2021] Code and data for the framework in "Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-La… ☆114 · Updated 2 years ago
- Implementation for "Large-scale Pretraining for Visual Dialog" https://arxiv.org/abs/1912.02379 ☆95 · Updated 4 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering ☆172 · Updated 2 years ago
- Information Maximizing Visual Question Generation ☆66 · Updated last year
- Conceptual Captions is a dataset containing (image-URL, caption) pairs designed for the training and evaluation of machine learned image … ☆520 · Updated 3 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆223 · Updated 2 years ago
- Python code for CIDEr (Consensus-based Image Caption Evaluation) ☆92 · Updated 7 years ago
- Visual Question Answering project with state-of-the-art single-model performance. ☆132 · Updated 6 years ago
- Porting of Skip-Thoughts pretrained models from Theano to PyTorch & Torch7 ☆148 · Updated 5 years ago
- This repository contains the NarrativeQA dataset. It includes the list of documents with Wikipedia summaries, links to full stories, and … ☆459 · Updated 4 years ago