sususushi / reconstruction-network-for-video-captioning
☆16 · Updated 6 years ago
Alternatives and similar repositories for reconstruction-network-for-video-captioning
Users interested in reconstruction-network-for-video-captioning are comparing it to the repositories listed below.
- Code and models for the paper "Reinforced Video Captioning with Entailment Rewards" (EMNLP 2017) ☆44 · Updated 5 years ago
- PyTorch implementation of Consensus-based Sequence Training for Video Captioning ☆60 · Updated 7 years ago
- ☆33 · Updated 7 years ago
- ICCV 2019: Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network ☆68 · Updated 5 years ago
- Video captioning on the MSR-VTT and MSVD datasets using deep learning ☆21 · Updated 5 years ago
- [ACM MM 2017 & IEEE TMM 2020] Theano code for the paper "Video Description with Spatial Temporal Attention" ☆59 · Updated 4 years ago
- Extension of hLSTMat ☆18 · Updated 4 years ago
- PyTorch implementation of video captioning ☆13 · Updated 7 years ago
- Video captioning models implemented in PyTorch (S2VT) ☆23 · Updated 7 years ago
- Source code for "Semantics-Assisted Video Captioning Model Trained with Scheduled Sampling Strategy" ☆55 · Updated 4 years ago
- ☆20 · Updated 5 years ago
- Official repo for "MAN: Moment Alignment Network for Natural Language Moment Retrieval via Iterative Graph Adjustment" ☆17 · Updated 6 years ago
- Study of frame-rate effects on the MSR-VTT dataset ☆15 · Updated 7 years ago
- Source code for "Recurrent Fusion Network for Image Captioning" ☆23 · Updated 6 years ago
- Source code for the paper "To Find Where You Talk: Temporal Sentence Localization in Video with Attention Based Location Regression" ☆30 · Updated 6 years ago
- Caffe implementation of the paper "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" ☆29 · Updated 6 years ago
- Implementation of "Multilevel Language and Vision Integration for Text-to-Clip Retrieval" ☆49 · Updated 6 years ago
- Video captioning implementation based on OpenNMT ☆36 · Updated 7 years ago
- Heterogeneous Memory Enhanced Multimodal Attention Model for VideoQA ☆54 · Updated 3 years ago
- Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval ☆68 · Updated 5 years ago
- Code for the paper "Semantic Conditioned Dynamic Modulation for Temporal Sentence Grounding in Videos" ☆71 · Updated 3 years ago
- Code for learning to generate stylized image captions from unaligned text ☆61 · Updated 3 years ago
- ☆38 · Updated 7 years ago
- Video to Language Challenge (MSR-VTT Challenge 2016) ☆32 · Updated 7 years ago
- Implementation of "Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning" (https://arxiv.…) ☆26 · Updated 6 years ago
- VQA driven by bottom-up and top-down attention and knowledge ☆14 · Updated 6 years ago
- A PyTorch implementation of "Describing Videos by Exploiting Temporal Structure" (ICCV 2015) ☆48 · Updated 2 years ago
- ☆22 · Updated 7 years ago
- Stack-Captioning: Coarse-to-Fine Learning for Image Captioning ☆62 · Updated 7 years ago
- Adversarial Inference for Multi-Sentence Video Descriptions (CVPR 2019) ☆34 · Updated 6 years ago