pochih / Video-Cap
Video Captioning: ICCV '15 paper implementation
⭐47 · Updated 6 years ago
Alternatives and similar repositories for Video-Cap:
Users interested in Video-Cap are comparing it to the repositories listed below
- Word2VisualVec: Predicting Visual Features from Text for Image and Video Caption Retrieval · ⭐69 · Updated 5 years ago
- Evaluation code for Dense-Captioning Events in Videos · ⭐123 · Updated 5 years ago
- Using Semantic Compositional Networks for Video Captioning · ⭐95 · Updated 6 years ago
- This repository contains the code for a video captioning system inspired by Sequence to Sequence -- Video to Text. This system takes as i… · ⭐165 · Updated 5 years ago
- Official Tensorflow Implementation of the paper "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning" in CVPR 2… · ⭐149 · Updated 5 years ago
- Implement video captioning based on OpenNMT · ⭐36 · Updated 6 years ago
- Code and Models for paper "Reinforced Video Captioning with Entailment Rewards (EMNLP 2017)" · ⭐43 · Updated 5 years ago
- A video captioning tool using S2VT method and attention mechanism (TensorFlow) · ⭐15 · Updated 6 years ago
- Re-implement CVPR2017 paper: "dense captioning with joint inference and visual context" and minor changes in Tensorflow. (mAP 8.296 after… · ⭐61 · Updated 6 years ago
- Soft attention mechanism for video caption generation · ⭐156 · Updated 7 years ago
- Pytorch Implementation of Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning · ⭐107 · Updated 7 years ago
- [ACM MM 2017 & IEEE TMM 2020] This is the Theano code for the paper "Video Description with Spatial Temporal Attention" · ⭐57 · Updated 4 years ago
- PyTorch Implementation of Consensus-based Sequence Training for Video Captioning · ⭐59 · Updated 6 years ago
- ⭐33 · Updated 6 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" · ⭐65 · Updated 5 years ago
- Tensorflow implementation of the paper: Sequence to Sequence: Video to Text · ⭐88 · Updated 6 years ago
- Caffe implementation of the paper: "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" · ⭐29 · Updated 6 years ago
- Some models for video captioning implemented in PyTorch (S2VT) · ⭐23 · Updated 7 years ago
- Dense video captioning in PyTorch · ⭐41 · Updated 5 years ago
- Code and demos for our paper at ACM MM 2017 · ⭐63 · Updated 5 years ago
- Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval · ⭐68 · Updated 4 years ago
- CVPR 2018 - Regularizing RNNs for Caption Generation by Reconstructing The Past with The Present · ⭐98 · Updated 6 years ago
- Adversarial Inference for Multi-Sentence Video Descriptions (CVPR 2019) · ⭐34 · Updated 5 years ago
- ⭐129 · Updated 6 years ago
- Image Captioning based on Bottom-Up and Top-Down Attention model · ⭐102 · Updated 6 years ago
- Implementation of "Watch, Listen, and Describe: Globally and Locally Aligned Cross-Modal Attentions for Video Captioning" (https://arxiv.… · ⭐26 · Updated 6 years ago
- ⭐189 · Updated 3 years ago
- Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017) · ⭐107 · Updated 7 years ago
- Starter code in PyTorch for the Visual Dialog challenge · ⭐192 · Updated last year
- Implementation for "Multilevel Language and Vision Integration for Text-to-Clip Retrieval" · ⭐50 · Updated 6 years ago