YunseokJANG / tgif-qa
Repository for our CVPR 2017 and IJCV papers: TGIF-QA
☆175 · Updated 3 years ago
Alternatives and similar repositories for tgif-qa
Users interested in tgif-qa are comparing it to the repositories listed below.
- Evaluation code for Dense-Captioning Events in Videos ☆128 · Updated 6 years ago
- Github for my ICCV 2017 paper: "Localizing Moments in Video with Natural Language" ☆197 · Updated 4 years ago
- Semantic Propositional Image Caption Evaluation ☆141 · Updated 2 years ago
- The Theano code for the CVPR 2017 paper "Semantic Compositional Networks for Visual Captioning" ☆68 · Updated 7 years ago
- [EMNLP 2018] PyTorch code for TVQA: Localized, Compositional Video Question Answering ☆178 · Updated 2 years ago
- Video to Language Challenge (MSR-VTT Challenge 2016) ☆31 · Updated 7 years ago
- [ACL 2020] PyTorch code for TVQA+: Spatio-Temporal Grounding for Video Question Answering ☆129 · Updated 2 years ago
- Weakly Supervised Dense Event Captioning in Videos, i.e. generating multiple sentence descriptions for a video in a weakly-supervised manner ☆104 · Updated 5 years ago
- Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017) ☆107 · Updated 7 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" ☆66 · Updated 6 years ago
- Video Question Answering via Gradually Refined Attention over Appearance and Motion ☆168 · Updated 7 years ago
- Code for Discriminability objective for training descriptive captions (CVPR 2018) ☆109 · Updated 5 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆207 · Updated 6 years ago
- Moments Retrieval Project Webpage (temporal) ☆31 · Updated last year
- PyTorch Implementation of Consensus-based Sequence Training for Video Captioning ☆59 · Updated 7 years ago
- Mixture-of-Embeddings-Experts ☆120 · Updated 4 years ago
- Code for learning to generate stylized image captions from unaligned text ☆61 · Updated 2 years ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆195 · Updated 5 years ago
- Dense captioning with joint inference and visual context ☆53 · Updated 6 years ago
- Code for our paper: Learning Conditioned Graph Structures for Interpretable Visual Question Answering ☆149 · Updated 6 years ago
- Using Semantic Compositional Networks for Video Captioning ☆96 · Updated 6 years ago
- Source code for the paper "Towards Automatic Learning of Procedures from Web Instructional Videos" ☆34 · Updated 6 years ago
- Code release for Hu et al., Modeling Relationships in Referential Expressions with Compositional Modular Networks, CVPR 2017 ☆67 · Updated 6 years ago
- Adds SPICE metric to coco-caption evaluation server codes ☆50 · Updated 2 years ago
- Referring Expression Parser ☆27 · Updated 7 years ago
- Visual dialog model in PyTorch ☆109 · Updated 7 years ago
- Heterogeneous Memory Enhanced Multimodal Attention Model for VideoQA ☆54 · Updated 3 years ago
- A VideoQA dataset based on the videos from ActivityNet ☆85 · Updated 4 years ago
- Use transformer for captioning ☆156 · Updated 6 years ago