ntusteeian / VQA_CNN-LSTM
PyTorch implementation of VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) using the VQA v2.0 dataset for the open-ended task
☆17 · Updated 4 years ago
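The model behind this repository follows the CNN+LSTM baseline from the VQA paper: a pretrained CNN encodes the image, an LSTM encodes the question, the two embeddings are fused by elementwise multiplication, and a classifier predicts over the most frequent answers. Below is a minimal sketch of that architecture in PyTorch; the class name `VQABaseline`, the VGG-19 backbone, and all hyperparameters are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of a CNN+LSTM VQA baseline (illustrative only, not this repo's code).
# Assumptions: VGG-19 fc7 features for the image, a single-layer LSTM for the question,
# elementwise-product fusion, and classification over the top-K training answers.
import torch
import torch.nn as nn
import torchvision.models as models


class VQABaseline(nn.Module):
    def __init__(self, vocab_size, num_answers, embed_dim=300, hidden_dim=512):
        super().__init__()
        # Frozen image encoder: VGG-19 with the final classification layer removed (4096-d output).
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        vgg.classifier = nn.Sequential(*list(vgg.classifier.children())[:-1])
        for p in vgg.parameters():
            p.requires_grad = False
        self.cnn = vgg
        self.img_fc = nn.Linear(4096, hidden_dim)

        # Question encoder: learned word embeddings fed to an LSTM; the last hidden state
        # summarizes the question.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

        # Answer classifier over the fused multimodal embedding.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, images, questions):
        # images: (B, 3, 224, 224) tensors; questions: (B, T) padded token ids.
        with torch.no_grad():
            img_feat = self.cnn(images)                  # (B, 4096)
        img_emb = torch.tanh(self.img_fc(img_feat))      # (B, hidden_dim)

        _, (hidden, _) = self.lstm(self.embed(questions))
        q_emb = torch.tanh(hidden[-1])                   # (B, hidden_dim)

        fused = img_emb * q_emb                          # elementwise fusion of the two modalities
        return self.classifier(fused)                    # logits over candidate answers
```

For the open-ended task, training is typically framed as classification over the K most frequent training answers, and inference returns the arg-max answer.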
Alternatives and similar repositories for VQA_CNN-LSTM:
Users interested in VQA_CNN-LSTM are comparing it to the repositories listed below.
- Using LSTM or Transformer to solve Image Captioning in PyTorch ☆76 · Updated 3 years ago
- CNN+LSTM, Attention-based, and MUTAN-based models for Visual Question Answering ☆74 · Updated 5 years ago
- Exploring multimodal fusion-type transformer models for visual question answering (on DAQUAR dataset) ☆34 · Updated 3 years ago
- Visual Question Answering in PyTorch with various Attention Models ☆20 · Updated 4 years ago
- Hyperparameter analysis for Image Captioning using LSTMs and Transformers ☆26 · Updated last year
- PyTorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆96 · Updated last year
- Implementation for the paper "Hierarchical Conditional Relation Networks for Video Question Answering" (Le et al., CVPR 2020, Oral) ☆132 · Updated 6 months ago
- Deep Reinforcement Learning based Image Captioning with Embedding Reward ☆27 · Updated 6 months ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆163 · Updated 2 years ago
- Code for Dense Relational Captioning ☆69 · Updated last year
- An updated PyTorch implementation of hengyuan-hu's version for 'Bottom-Up and Top-Down Attention for Image Captioning and Visual Question… ☆36 · Updated 2 years ago
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆60 · Updated 2 years ago
- Implementation of the Object Relation Transformer for Image Captioning ☆177 · Updated 5 months ago
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆81 · Updated 2 years ago
- Image Captioning through Image Transformer ☆40 · Updated 4 years ago
- PyTorch VQA implementation that achieved top performances in the (ECCV18) VizWiz Grand Challenge: Answering Visual Questions from Blind P… ☆60 · Updated 6 years ago
- [EMNLP 2018] Training for Diversity in Image Paragraph Captioning ☆89 · Updated 5 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆47 · Updated last year
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ☆15 · Updated last year
- A PyTorch implementation of the paper Show, Attend and Tell: Neural Image Caption Generation with Visual Attention ☆82 · Updated 5 years ago
- PyTorch implementation of image captioning using a transformer-based model. ☆62 · Updated last year
- Code and dataset release for Park et al., Robust Change Captioning (ICCV 2019) ☆48 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- A PyTorch implementation of the paper Multimodal Transformer with Multiview Visual Representation for Image Captioning ☆24 · Updated 4 years ago
- GraphVQA: Language-Guided Graph Neural Networks for Scene Graph Question Answering ☆60 · Updated 3 years ago
- Implementation of 'End-to-End Transformer Based Model for Image Captioning' [AAAI 2022] ☆67 · Updated 8 months ago
- PyTorch implementation of Image captioning with Bottom-up, Top-down Attention ☆166 · Updated 6 years ago
- PyTorch Implementation of Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning ☆84 · Updated 4 years ago
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering ☆93 · Updated last year
- MMBERT: Multimodal BERT Pretraining for Improved Medical VQA ☆35 · Updated 3 years ago