makarandtapaswi / MovieQA_benchmark
Benchmark data and code for Question-Answering on Movie stories
☆44 · Updated 5 years ago
Alternatives and similar repositories for MovieQA_benchmark
Users interested in MovieQA_benchmark are comparing it to the repositories listed below.
- GuessWhat?! Baselines · ☆74 · Updated 3 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering · ☆179 · Updated 2 years ago
- Pre-trained V+L Data Preparation · ☆46 · Updated 5 years ago
- Repository to generate CLEVR-Dialog: a diagnostic dataset for Visual Dialog · ☆49 · Updated 5 years ago
- Code release for Hu et al., "Modeling Relationships in Referential Expressions with Compositional Modular Networks," CVPR 2017 · ☆67 · Updated 7 years ago
- Code release for Hu et al., "Explainable Neural Computation via Stack Neural Module Networks," ECCV 2018 · ☆71 · Updated 5 years ago
- Localize objects in images using referring expressions · ☆37 · Updated 8 years ago
- Data for the ACL 2019 paper "Expressing Visual Relationships via Language" · ☆62 · Updated 4 years ago
- Animated GIF Description Dataset · ☆117 · Updated last year
- VQS: Linking Segmentations to Questions and Answers for Supervised Attention in VQA and Question-Focused Semantic Segmentation · ☆23 · Updated 8 years ago
- ☆54 · Updated 5 years ago
- Repository for our CVPR 2017 and IJCV paper: TGIF-QA · ☆176 · Updated 4 years ago
- Code for Learning to Learn Language from Narrated Video · ☆33 · Updated last year
- Torch implementation of Speaker-Listener-Reinforcer for Referring Expression Generation and Comprehension · ☆34 · Updated 7 years ago
- Memory, Attention and Composition (MAC) Network for CLEVR implemented in PyTorch · ☆85 · Updated 6 years ago
- Adds SPICE metric to the coco-caption evaluation server code · ☆50 · Updated 2 years ago
- Implementation of Diverse and Accurate Image Description Using a Variational Auto-Encoder with an Additive Gaussian Encoding Space · ☆59 · Updated 7 years ago
- Code for the Globetrotter project · ☆23 · Updated 3 years ago
- Semantic Propositional Image Caption Evaluation · ☆143 · Updated 2 years ago
- Dense captioning with joint inference and visual context · ☆54 · Updated 6 years ago
- A simple but well-performing "single-hop" visual attention model for the GQA dataset · ☆20 · Updated 6 years ago
- Visual Question Generation as Dual Task of Visual Question Answering (PyTorch version) · ☆82 · Updated 7 years ago
- Code for "A Simple Baseline for Audio-Visual Scene-Aware Dialog" · ☆26 · Updated 5 years ago
- Mixture-of-Embeddings-Experts · ☆120 · Updated 5 years ago
- Generate a denotation graph from a set of image captions · ☆15 · Updated 7 years ago
- Torch implementation of Stacked Attention Networks · ☆23 · Updated 8 years ago
- Visual dialog model in PyTorch · ☆109 · Updated 7 years ago
- Implementation for our paper "Conditional Image-Text Embedding Networks" · ☆39 · Updated 5 years ago
- Data and code for the CVPR 2020 paper "VIOLIN: A Large-Scale Dataset for Video-and-Language Inference" · ☆162 · Updated 5 years ago
- Sentence/caption evaluation using automated metrics · ☆61 · Updated 9 years ago