pochih / Video-Cap
Video Captioning: ICCV '15 paper implementation
⭐47 · Updated 7 years ago
Alternatives and similar repositories for Video-Cap
Users interested in Video-Cap are comparing it to the libraries listed below.
- Word2VisualVec: Predicting Visual Features from Text for Image and Video Caption Retrieval ⭐69 · Updated 5 years ago
- Using Semantic Compositional Networks for Video Captioning ⭐96 · Updated 6 years ago
- Evaluation code for Dense-Captioning Events in Videos ⭐128 · Updated 6 years ago
- Official TensorFlow implementation of the paper "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning" in CVPR 2… ⭐150 · Updated 5 years ago
- Code and models for the paper "Reinforced Video Captioning with Entailment Rewards" (EMNLP 2017) ⭐44 · Updated 5 years ago
- This repository contains the code for a video captioning system inspired by Sequence to Sequence -- Video to Text. This system takes as i… ⭐166 · Updated 5 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" ⭐66 · Updated 6 years ago
- TensorFlow implementation of the paper "Sequence to Sequence: Video to Text" ⭐87 · Updated 6 years ago
- A video captioning tool using the S2VT method and an attention mechanism (TensorFlow) ⭐15 · Updated 6 years ago
- PyTorch implementation of Consensus-based Sequence Training for Video Captioning ⭐59 · Updated 7 years ago
- Video captioning implemented on top of OpenNMT ⭐36 · Updated 7 years ago
- Caffe implementation of the paper "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" ⭐29 · Updated 6 years ago
- Source code for the paper "Towards Automatic Learning of Procedures from Web Instructional Videos" ⭐34 · Updated 6 years ago
- CVPR 2018 - Regularizing RNNs for Caption Generation by Reconstructing The Past with The Present ⭐99 · Updated 6 years ago
- Adversarial Inference for Multi-Sentence Video Descriptions (CVPR 2019) ⭐34 · Updated 5 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ⭐89 · Updated 5 years ago
- Code for learning to generate stylized image captions from unaligned text ⭐61 · Updated 2 years ago
- TensorFlow implementation of the paper "A Hierarchical Approach for Generating Descriptive Image Paragraphs" ⭐49 · Updated 6 years ago
- Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017) ⭐107 · Updated 7 years ago
- [ACM MM 2017 & IEEE TMM 2020] Theano code for the paper "Video Description with Spatial Temporal Attention" ⭐57 · Updated 4 years ago
- Some video captioning models implemented in PyTorch (S2VT) ⭐23 · Updated 7 years ago
- Code and demos for our paper at ACM MM 2017 ⭐62 · Updated 6 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018) ⭐109 · Updated 5 years ago
- ⭐50 · Updated 8 years ago
- Implementation of "Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning" (https://arxiv.… ⭐26 · Updated 6 years ago
- Soft attention mechanism for video caption generation ⭐156 · Updated 7 years ago
- Video to Language Challenge (MSR-VTT Challenge 2016) ⭐31 · Updated 7 years ago
- Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" in ICCV 2017 ⭐148 · Updated 6 years ago
- Implementation of "Multilevel Language and Vision Integration for Text-to-Clip Retrieval" ⭐50 · Updated 6 years ago
- Re-implementation of the CVPR 2017 paper "Dense Captioning with Joint Inference and Visual Context", with minor changes, in TensorFlow (mAP 8.296 after… ⭐61 · Updated 6 years ago