tsenghungchen / show-adapt-and-tell
Code for "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner" in ICCV 2017
☆148 · Updated 6 years ago
Alternatives and similar repositories for show-adapt-and-tell:
Users interested in show-adapt-and-tell are comparing it to the repositories listed below.
- TensorFlow implementation of the paper "Optimization of Image Description Metrics Using Policy Gradient Methods" ☆29 · Updated 6 years ago
- Towards Diverse and Natural Image Descriptions via a Conditional GAN ☆75 · Updated 7 years ago
- ☆50 · Updated 8 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018) ☆109 · Updated 5 years ago
- ☆129 · Updated 6 years ago
- CVPR 2018 - Regularizing RNNs for Caption Generation by Reconstructing the Past with the Present ☆98 · Updated 6 years ago
- Soft attention mechanism for video caption generation ☆156 · Updated 7 years ago
- Visual Question Answering project with state-of-the-art single-model performance ☆131 · Updated 6 years ago
- Using scene-specific contexts and region-based attention in neural image captioning ☆43 · Updated 4 years ago
- TensorFlow implementation of Deeper LSTM + Normalized CNN for Visual Question Answering ☆99 · Updated 7 years ago
- Implementation of the model from the paper "Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition" ☆26 · Updated 7 years ago
- Code for the paper "Image Caption Generation with Text-Conditional Semantic Attention" ☆60 · Updated 7 years ago
- TensorFlow implementation of the paper "Sequence to Sequence: Video to Text" ☆88 · Updated 6 years ago
- Stack-Captioning: Coarse-to-Fine Learning for Image Captioning ☆62 · Updated 6 years ago
- PyTorch implementation of the winning entry from the VQA Challenge Workshop at CVPR'17 ☆163 · Updated 6 years ago
- Visual dialog model in PyTorch ☆109 · Updated 6 years ago
- Theano code for the CVPR 2017 paper "Semantic Compositional Networks for Visual Captioning" ☆69 · Updated 6 years ago
- Code for "Stacked Attention Networks for Image Question Answering" ☆107 · Updated 8 years ago
- Learning to Evaluate Image Captioning (CVPR 2018) ☆84 · Updated 6 years ago
- Caffe implementation of the paper "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" ☆29 · Updated 6 years ago
- Using Semantic Compositional Networks for Video Captioning ☆96 · Updated 6 years ago
- Contains approaches introduced in the MovieQA benchmark dataset paper ☆79 · Updated 8 years ago
- Using a Transformer for captioning ☆155 · Updated 5 years ago
- Sentence/caption evaluation using automated metrics ☆60 · Updated 8 years ago
- Code for detecting visual concepts in images ☆150 · Updated 6 years ago
- ☆78 · Updated 6 years ago
- ☆25 · Updated 7 years ago
- Supplementary material to "Top-down Visual Saliency Guided by Captions" (CVPR 2017) ☆107 · Updated 7 years ago
- Generate captions for an image using PyTorch ☆128 · Updated 7 years ago
- Implementation of the paper "Phrase Localization and Visual Relationship Detection with Comprehensive Image-Language Cues" ☆39 · Updated 7 years ago