forence / Awesome-Visual-Captioning
This repository focuses on Image Captioning, Video Captioning, Seq-to-Seq Learning, and NLP.
☆413 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for Awesome-Visual-Captioning
- Implementation of "X-Linear Attention Networks for Image Captioning" (CVPR 2020) ☆271 · Updated 3 years ago
- A PyTorch reimplementation of bottom-up-attention models ☆292 · Updated 2 years ago
- Code for the paper "Attention on Attention for Image Captioning" (ICCV 2019) ☆328 · Updated 3 years ago
- Faster R-CNN in PyTorch, pretrained on Visual Genome with a ResNet-101 backbone ☆231 · Updated 2 years ago
- Grid features pre-training code for visual question answering ☆268 · Updated 3 years ago
- Meshed-Memory Transformer for Image Captioning (CVPR 2020) ☆519 · Updated last year
- PyTorch bottom-up attention with Detectron2 ☆230 · Updated 2 years ago
- Official PyTorch implementation of "Dual-Level Collaborative Transformer for Image Captioning" (AAAI 2021) ☆195 · Updated 2 years ago
- [CVPR 2021 Best Student Paper Honorable Mention, Oral] Official PyTorch code for ClipBERT, an efficient framework for end-to-end learning… ☆705 · Updated last year
- Project page for VinVL ☆350 · Updated last year
- Deep Modular Co-Attention Networks for Visual Question Answering ☆443 · Updated 3 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆176 · Updated 2 months ago
- PyTorch implementation of image captioning with bottom-up, top-down attention ☆164 · Updated 5 years ago
- Code accompanying the paper "Say As You Wish: Fine-grained Control of Image Caption Generation with Abstract Scene Graphs" (Chen et al., … ☆200 · Updated last year
- Python 3 support for the MS COCO caption evaluation tools ☆302 · Updated 3 months ago
- A lightweight, scalable, and general framework for visual question answering research ☆321 · Updated 3 years ago
- PyTorch code for the ICCV 2019 paper "Visual Semantic Reasoning for Image-Text Matching" ☆294 · Updated 4 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" (https://arxiv.org/abs/2107.06383) ☆401 · Updated 2 years ago
- Code for Unsupervised Image Captioning ☆215 · Updated last year
- Transformer-based image captioning extension for pytorch/fairseq ☆314 · Updated 3 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆362 · Updated 2 years ago
- A curated list of multimodal captioning research (including image captioning, video captioning, and text captioning) ☆108 · Updated 2 years ago
- An implementation that adapts pre-trained V+L models to VQA tasks. Currently supports VisualBERT, LXMERT, and UNITER ☆163 · Updated last year
- Code accompanying the paper "Fine-grained Video-Text Retrieval with Hierarchical Graph Reasoning" ☆209 · Updated 4 years ago
- Research code for the EMNLP 2020 paper "HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training" ☆230 · Updated 3 years ago
- Official code for "RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words" (CVPR 2021) ☆119 · Updated last year
- An official implementation of "UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation" ☆338 · Updated 3 months ago
- Multi-Modal Transformer for Video Retrieval ☆258 · Updated last month
- Image Captioning Using Transformer ☆256 · Updated 2 years ago
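Several of the repositories above ship or depend on the MS COCO caption evaluation tools, which score generated captions against human references with metrics such as BLEU, METEOR, ROUGE-L, and CIDEr. As a rough illustration of what these metrics measure, here is a minimal, self-contained sketch of BLEU-1 with a brevity penalty. This is not the COCO toolkit's API; the function name and the example sentences are made up for illustration, and the real toolkit computes BLEU-1 through BLEU-4 (plus the other metrics) corpus-wide rather than per sentence.

```python
import math
from collections import Counter

def bleu1(candidate: str, references: list[str]) -> float:
    """Sentence-level BLEU-1 with brevity penalty (illustrative sketch only)."""
    cand = candidate.lower().split()
    refs = [r.lower().split() for r in references]

    # Clipped unigram precision: each candidate token is credited at most
    # as many times as it appears in the reference that uses it most.
    max_ref_counts: Counter = Counter()
    for ref in refs:
        for tok, n in Counter(ref).items():
            max_ref_counts[tok] = max(max_ref_counts[tok], n)
    clipped = sum(min(n, max_ref_counts[tok])
                  for tok, n in Counter(cand).items())
    precision = clipped / len(cand) if cand else 0.0

    # Brevity penalty against the reference whose length is closest
    # to the candidate's, so overly short captions are penalized.
    ref_len = min((abs(len(r) - len(cand)), len(r)) for r in refs)[1]
    bp = 1.0 if len(cand) > ref_len else math.exp(1 - ref_len / max(len(cand), 1))
    return bp * precision

score = bleu1("a dog runs on the grass",
              ["a dog is running on the grass", "the dog runs across grass"])
```

Every candidate token here also appears in some reference and the candidate is not shorter than the closest reference, so the score is 1.0; a shorter or less overlapping caption drops below that.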