aimagelab / show-control-and-tell
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
☆282 · Updated 2 years ago
Alternatives and similar repositories for show-control-and-tell:
Users interested in show-control-and-tell are comparing it to the repositories listed below.
- Code for Unsupervised Image Captioning ☆216 · Updated last year
- PyTorch implementation of Image captioning with Bottom-up, Top-down Attention ☆166 · Updated 6 years ago
- Code for paper "Attention on Attention for Image Captioning". ICCV 2019 ☆333 · Updated 3 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆245 · Updated last year
- PyTorch code for our CVPR 2018 paper "Neural Baby Talk" ☆523 · Updated 5 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆177 · Updated 4 months ago
- PyTorch Implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆84 · Updated 4 years ago
- Image Captioning based on Bottom-Up and Top-Down Attention model ☆102 · Updated 6 years ago
- Re-implementation of the CVPR 2017 paper "Dense Captioning with Joint Inference and Visual Context", with minor changes, in TensorFlow. (mAP 8.296 after… ☆61 · Updated 5 years ago
- Pytorch Implementation of Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning ☆107 · Updated 7 years ago
- Bottom-up features extractor implemented in PyTorch. ☆71 · Updated 5 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆204 · Updated 5 years ago
- Implementation of "Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning" ☆334 · Updated 7 years ago
- Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering ☆105 · Updated 5 years ago
- An efficient PyTorch implementation of the winning entry of the 2017 VQA Challenge. ☆755 · Updated 10 months ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆194 · Updated 4 years ago
- Implementation of 'X-Linear Attention Networks for Image Captioning' [CVPR 2020] ☆273 · Updated 3 years ago
- Vision-Language Pre-training for Image Captioning and Question Answering ☆417 · Updated 3 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆89 · Updated 5 years ago
- Good News Everyone! - CVPR 2019 ☆128 · Updated 2 years ago
- Use transformer for captioning ☆155 · Updated 5 years ago
- Bilinear attention networks for visual question answering ☆545 · Updated last year
- Meshed-Memory Transformer for Image Captioning. CVPR 2020 ☆523 · Updated 2 years ago
- Deep Modular Co-Attention Networks for Visual Question Answering ☆447 · Updated 4 years ago
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆233 · Updated 2 years ago
- Code for our paper: Learning Conditioned Graph Structures for Interpretable Visual Question Answering ☆149 · Updated 5 years ago
- Show, Edit and Tell: A Framework for Editing Image Captions, CVPR 2020 ☆81 · Updated 4 years ago
- MAttNet: Modular Attention Network for Referring Expression Comprehension ☆293 · Updated 2 years ago