ronghanghu / vqa-maskrcnn-benchmark-m4c
Used in the M4C feature extraction script: https://github.com/facebookresearch/mmf/blob/project/m4c/projects/M4C/scripts/extract_ocr_frcn_feature.py
☆13 · Updated 5 years ago
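This fork serves as the detection backend for that script: a Mask R-CNN detector is loaded from a Detectron-style config/checkpoint pair and its box head pools features for externally supplied OCR boxes. The sketch below illustrates that flow, assuming the fork keeps upstream maskrcnn-benchmark's module layout; the config, checkpoint, image path, and OCR box are placeholders rather than files shipped with the repository, and the real script additionally resizes and mean-normalizes the image.

```python
# Minimal sketch: pool Faster R-CNN features for given OCR boxes with the
# vqa-maskrcnn-benchmark-m4c detector. Module layout follows upstream
# maskrcnn-benchmark; all paths and the example box below are placeholders.
import cv2
import torch

from maskrcnn_benchmark.config import cfg
from maskrcnn_benchmark.modeling.detector import build_detection_model
from maskrcnn_benchmark.structures.bounding_box import BoxList
from maskrcnn_benchmark.structures.image_list import to_image_list
from maskrcnn_benchmark.utils.model_serialization import load_state_dict

cfg.merge_from_file("detectron_model.yaml")            # placeholder config path
cfg.freeze()

model = build_detection_model(cfg)
checkpoint = torch.load("detectron_model.pth", map_location="cpu")  # placeholder weights
load_state_dict(model, checkpoint.pop("model"))        # assumes a "model" key in the checkpoint
model.eval()

# Load an image (BGR, as maskrcnn-benchmark expects). The real extraction
# script also resizes and mean-normalizes the image before this step.
img = cv2.imread("example.jpg")
h, w = img.shape[:2]
im_tensor = torch.from_numpy(img).permute(2, 0, 1).float()
image_list = to_image_list([im_tensor], size_divisible=32)

# OCR boxes in (x1, y1, x2, y2) pixel coordinates; placeholder value here.
ocr_boxes = torch.tensor([[10.0, 20.0, 110.0, 60.0]])
proposals = [BoxList(ocr_boxes, (w, h), mode="xyxy")]

with torch.no_grad():
    features = model.backbone(image_list.tensors)
    # Pool per-box features with the box head's feature extractor; features
    # like these represent the OCR tokens on the visual side of M4C.
    box_feats = model.roi_heads.box.feature_extractor(features, proposals)

print(box_feats.shape)  # one pooled feature per OCR box
```

For batch extraction over a dataset, the upstream mmf script handles image preprocessing, looping over images, and saving the results; the sketch above only isolates the detector-side call.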
Alternatives and similar repositories for vqa-maskrcnn-benchmark-m4c
Users interested in vqa-maskrcnn-benchmark-m4c are comparing it to the repositories listed below
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 3 years ago
- ☆188 · Updated last year
- TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral) ☆72 · Updated 2 years ago
- The imdb files with SBD-Trans OCR for the TextVQA dataset ☆11 · Updated 3 years ago
- RUArt: A Novel Text-Centered Solution for Text-Based Visual Question Answering ☆10 · Updated 2 years ago
- A modular framework for Visual Question Answering research by the FAIR A-STAR team ☆45 · Updated 3 years ago
- STVQA and TextVQA OCR results from the Amazon Text in Image pipeline ☆11 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answering ☆53 · Updated 8 months ago
- A PyTorch reimplementation of bottom-up-attention models ☆301 · Updated 3 years ago
- A PyTorch implementation of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" for image captioning ☆47 · Updated 3 years ago
- Official code for the paper "Spatially Aware Multimodal Transformers for TextVQA", published at ECCV 2020 ☆64 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- A PyTorch implementation of Attention Is All You Need (Transformer) for image captioning ☆12 · Updated 3 years ago
- An updated PyTorch implementation of hengyuan-hu's version of "Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering" ☆36 · Updated 3 years ago
- Scene Text Aware Cross Modal Retrieval (StacMR) ☆25 · Updated 3 years ago
- Grid features pre-training code for visual question answering ☆269 · Updated 3 years ago
- PyTorch bottom-up attention with Detectron2 ☆233 · Updated 3 years ago
- A paper list of image captioning ☆22 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆164 · Updated 2 years ago
- The code for the paper "A Symmetric Dual Encoding Dense Retrieval Framework for Knowledge-Intensive Visual Question Answering" ☆12 · Updated last year
- Faster R-CNN model in PyTorch, pretrained on Visual Genome with ResNet-101 ☆237 · Updated 2 years ago
- Official PyTorch implementation of the paper "Dual-Level Collaborative Transformer for Image Captioning" (AAAI 2021) ☆199 · Updated 3 years ago
- ☆38 · Updated 2 years ago
- Official code for "RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words" (CVPR 2021) ☆123 · Updated 2 years ago
- Flickr30K Entities Dataset ☆177 · Updated 6 years ago
- Implementation of "X-Linear Attention Networks for Image Captioning" [CVPR 2020] ☆274 · Updated 3 years ago
- Project page for VinVL ☆356 · Updated last year
- Implementation of "End-to-End Transformer Based Model for Image Captioning" [AAAI 2022] ☆67 · Updated last year