ezeli / Transformer_model
A PyTorch implementation of Attention Is All You Need (the Transformer) for image captioning.
☆12 · Updated 3 years ago
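The repository's own interface is not reproduced on this page; for orientation only, the sketch below shows what a minimal Transformer-based captioning forward pass typically looks like in PyTorch. All class, method, and argument names here (`CaptionTransformer`, `feat_dim`, etc.) are illustrative assumptions, not the repo's actual API.

```python
# Minimal, illustrative sketch (NOT this repository's code) of a Transformer
# captioning model in the spirit of "Attention Is All You Need": pre-extracted
# image region features go into the encoder, caption tokens into the decoder.
# Requires a reasonably recent PyTorch (batch_first support in nn.Transformer).
import torch
import torch.nn as nn


class CaptionTransformer(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, d_model=512,
                 nhead=8, num_layers=6, max_len=50):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, d_model)        # project region features
        self.tok_embed = nn.Embedding(vocab_size, d_model)   # caption token embeddings
        self.pos_embed = nn.Embedding(max_len, d_model)      # learned positional embeddings
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)            # per-token vocabulary logits

    def forward(self, region_feats, captions):
        # region_feats: (B, N, feat_dim) pre-extracted features (e.g. bottom-up regions)
        # captions:     (B, T) token ids of the shifted ground-truth caption
        memory_in = self.feat_proj(region_feats)
        positions = torch.arange(captions.size(1), device=captions.device)
        tgt = self.tok_embed(captions) + self.pos_embed(positions)
        # causal mask so each position only attends to earlier caption tokens
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            captions.size(1)).to(captions.device)
        hidden = self.transformer(memory_in, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)                               # (B, T, vocab_size)


# Example with random tensors: 36 region features per image, batch of 2.
model = CaptionTransformer(vocab_size=10000)
feats = torch.randn(2, 36, 2048)
caps = torch.randint(0, 10000, (2, 20))
logits = model(feats, caps)   # -> torch.Size([2, 20, 10000])
```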
Alternatives and similar repositories for Transformer_model:
Users interested in Transformer_model are comparing it to the repositories listed below:
- A PyTorch implementation of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" for image captioning. ☆47 · Updated 3 years ago
- Microsoft COCO Caption Evaluation Tool (Python 3). ☆33 · Updated 5 years ago
- Code for the paper "MemCap: Memorizing Style Knowledge for Image Captioning". ☆11 · Updated 4 years ago
- Bridging by Word: Image-Grounded Vocabulary Construction for Visual Captioning, ACL 2019. ☆17 · Updated 5 years ago
- Implementation of the paper "Improving Image Captioning with Better Use of Caption". ☆32 · Updated 4 years ago
- Optimized code based on M2 for faster image-captioning training. ☆20 · Updated 2 years ago
- Improving One-stage Visual Grounding by Recursive Sub-query Construction, ECCV 2020. ☆84 · Updated 3 years ago
- A PyTorch implementation of the paper "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering". ☆10 · Updated 4 years ago
- [ECCV 2020] Official code for "Comprehensive Image Captioning via Scene Graph Decomposition". ☆97 · Updated 4 months ago
- Implementation of our ACM MM 2019 paper, Focus Your Attention: A Bidirectional Focal Attention Network for Image-Text Matching. ☆37 · Updated last year
- Learning Fragment Self-Attention Embeddings for Image-Text Matching, ACM MM 2019. ☆41 · Updated 5 years ago
- ☆32 · Updated 3 years ago
- ☆66 · Updated 2 years ago
- Official code for "RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words" (CVPR 2021). ☆122 · Updated 2 years ago
- Official PyTorch implementation of the AAAI 2021 paper "Semantic Grouping Network for Video Captioning". ☆51 · Updated 3 years ago
- Code for our IJCAI 2020 paper "Overcoming Language Priors with Self-supervised Learning for Visual Question Answering". ☆49 · Updated 4 years ago
- Implementation of our IJCAI 2022 oral paper, ER-SAN: Enhanced-Adaptive Relation Self-Attention Network for Image Captioning. ☆22 · Updated last year
- Code for the journal paper "Learning Dual Semantic Relations with Graph Attention for Image-Text Matching", TCSVT 2020. ☆72 · Updated 2 years ago
- A Fast and Accurate One-Stage Approach to Visual Grounding, ICCV 2019 (Oral). ☆144 · Updated 4 years ago
- The PyTorch code of the AAAI 2021 paper "Non-Autoregressive Coarse-to-Fine Video Captioning". ☆58 · Updated last year
- An updated PyTorch implementation of hengyuan-hu's version of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering". ☆36 · Updated 2 years ago
- Position Focused Attention Network for Image-Text Matching. ☆68 · Updated 5 years ago
- IJCAI 2020: Learning to Discretely Compose Reasoning Module Networks for Video Captioning. ☆79 · Updated 4 years ago
- Bottom-up feature extractor implemented in PyTorch. ☆71 · Updated 5 years ago
- Code for "Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations" (NeurIPS 2019). ☆65 · Updated 4 years ago
- Implementation of self-CIDEr and LSA-based diversity metrics (Python 2.7 only). ☆36 · Updated 2 years ago
- A PyTorch reimplementation of bottom-up-attention models. ☆16 · Updated 4 years ago
- Dynamic Modality Interaction Modeling for Image-Text Retrieval, SIGIR'21. ☆67 · Updated 2 years ago
- ☆38 · Updated last year
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆60 · Updated 2 years ago