aimagelab / show-control-and-tell
Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions. CVPR 2019
☆284 · Updated 2 years ago
Alternatives and similar repositories for show-control-and-tell
Users interested in show-control-and-tell are comparing it to the repositories listed below.
- Code for Unsupervised Image Captioning ☆219 · Updated 2 years ago
- Automatic image captioning model based on Caffe, using features from bottom-up attention. ☆249 · Updated 2 years ago
- PyTorch implementation of Image captioning with Bottom-up, Top-down Attention ☆167 · Updated 6 years ago
- Code for the paper "Attention on Attention for Image Captioning", ICCV 2019 ☆335 · Updated 4 years ago
- PyTorch implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆86 · Updated 5 years ago
- Re-implementation of the CVPR 2017 paper "Dense Captioning with Joint Inference and Visual Context", with minor changes, in TensorFlow (mAP 8.296 after… ☆60 · Updated 6 years ago
- MUREL (CVPR 2019), a multimodal relational reasoning module for VQA ☆195 · Updated 5 years ago
- ☆129 · Updated 6 years ago
- PyTorch code for our CVPR 2018 paper "Neural Baby Talk" ☆524 · Updated 6 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆90 · Updated 6 years ago
- Bottom-up feature extractor implemented in PyTorch. ☆72 · Updated 5 years ago
- Implementation of 'X-Linear Attention Networks for Image Captioning' [CVPR 2020] ☆275 · Updated 4 years ago
- Image captioning based on the Bottom-Up and Top-Down Attention model ☆104 · Updated 6 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆179 · Updated last year
- ☆219 · Updated 3 years ago
- PyTorch implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆108 · Updated 7 years ago
- [ICLR 2018] Learning to Count Objects in Natural Images for Visual Question Answering ☆207 · Updated 6 years ago
- Uses a Transformer for captioning ☆156 · Updated 6 years ago
- Code for "Discriminability Objective for Training Descriptive Captions" (CVPR 2018) ☆109 · Updated 5 years ago
- A PyTorch reimplementation of "Bilinear Attention Network", "Intra- and Inter-modality Attention", "Learning Conditioned Graph Structures… ☆295 · Updated last year
- Improved Fusion of Visual and Language Representations by Dense Symmetric Co-Attention for Visual Question Answering ☆107 · Updated 5 years ago
- Code for our paper "Learning Conditioned Graph Structures for Interpretable Visual Question Answering" ☆150 · Updated 6 years ago
- Bilinear attention networks for visual question answering ☆545 · Updated last year
- Baseline model for the nocaps benchmark, ICCV 2019 paper "nocaps: novel object captioning at scale". ☆76 · Updated 2 years ago
- A PyTorch implementation of "StyleNet: Generating Attractive Visual Captions with Styles" ☆63 · Updated 4 years ago
- Source code for the paper "Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training" ☆66 · Updated 6 years ago
- Implementation of the AAAI 2019 paper "Large-scale Visual Relationship Understanding" ☆145 · Updated 6 years ago
- ☆192 · Updated 3 months ago
- Official TensorFlow implementation of the paper "Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning" in CVPR 2… ☆151 · Updated 6 years ago
- MAttNet: Modular Attention Network for Referring Expression Comprehension ☆297 · Updated 2 years ago