aimagelab / show-control-and-tell
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
☆284 · Updated 2 years ago
Alternatives and similar repositories for show-control-and-tell
Users interested in show-control-and-tell are comparing it to the repositories listed below.
- Code for the paper "Attention on Attention for Image Captioning", ICCV 2019. ☆333 · Updated 4 years ago
- Code for Unsupervised Image Captioning. ☆218 · Updated 2 years ago
- PyTorch implementation of image captioning with Bottom-Up, Top-Down Attention. ☆166 · Updated 6 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆246 · Updated 2 years ago
- Implementation of the Object Relation Transformer for image captioning. ☆178 · Updated 9 months ago
- PyTorch code for our CVPR 2018 paper "Neural Baby Talk". ☆525 · Updated 6 years ago
- ☆129 · Updated 6 years ago
- PyTorch implementation of "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning". ☆85 · Updated 5 years ago
- Re-implementation, with minor changes, of the CVPR 2017 paper "Dense Captioning with Joint Inference and Visual Context" in TensorFlow (mAP 8.296 after…). ☆61 · Updated 6 years ago
- Bottom-up features extractor implemented in PyTorch. ☆72 · Updated 5 years ago
- Image captioning based on the Bottom-Up and Top-Down Attention model. ☆102 · Updated 6 years ago
- ☆219 · Updated 3 years ago
- PyTorch implementation of "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning". ☆107 · Updated 7 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning. ☆89 · Updated 5 years ago
- Transformer-based image captioning. ☆156 · Updated 6 years ago
- Implementation of "X-Linear Attention Networks for Image Captioning", CVPR 2020. ☆274 · Updated 3 years ago
- Baseline model for the nocaps benchmark, from the ICCV 2019 paper "nocaps: novel object captioning at scale". ☆76 · Updated last year
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA. ☆195 · Updated 5 years ago
- Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering. ☆106 · Updated 5 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering. ☆207 · Updated 6 years ago
- Code for "Discriminability Objective for Training Descriptive Captions", CVPR 2018. ☆109 · Updated 5 years ago
- ☆190 · Updated 3 weeks ago
- Python code for CIDEr: Consensus-based Image Description Evaluation. ☆32 · Updated 6 years ago
- Video grounding and captioning. ☆327 · Updated 3 years ago
- Code for the paper "Learning Conditioned Graph Structures for Interpretable Visual Question Answering". ☆149 · Updated 6 years ago
- Implementation of "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning". ☆337 · Updated 7 years ago
- Code for learning to generate stylized image captions from unaligned text. ☆61 · Updated 2 years ago
- Bilinear attention networks for visual question answering. ☆545 · Updated last year
- ☆22 · Updated 5 years ago
- Learning to Evaluate Image Captioning, CVPR 2018. ☆84 · Updated 7 years ago