SatyamGaba / visual_question_answering
Visual Question Answering in PyTorch with various Attention Models
☆20 · Updated 5 years ago
Alternatives and similar repositories for visual_question_answering
Users interested in visual_question_answering are comparing it to the repositories listed below.
- PyTorch implementation of VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) using the VQA v2.0 dataset for open-ended ta… ☆21 · Updated 5 years ago
- A simplified PyTorch version of densecap ☆42 · Updated last year
- CNN+LSTM, attention-based, and MUTAN-based models for Visual Question Answering ☆77 · Updated 6 years ago
- Using VideoBERT to tackle video prediction ☆134 · Updated 4 years ago
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆126 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆165 · Updated 3 years ago
- PyTorch implementation of image captioning using a transformer-based model ☆68 · Updated 2 years ago
- Image Captioning Using Transformer ☆271 · Updated 3 years ago
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 9 months ago
- Code for Dense Relational Captioning ☆69 · Updated 2 years ago
- A reading list of papers about Visual Question Answering ☆35 · Updated 3 years ago
- Using LSTM or Transformer to solve image captioning in PyTorch ☆79 · Updated 4 years ago
- A length-controllable and non-autoregressive image captioning model ☆69 · Updated 4 years ago
- Situation With Groundings (SWiG) dataset and Joint Situation Localizer (JSL) ☆69 · Updated 4 years ago
- PyTorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆98 · Updated 2 years ago
- GRIT: Faster and Better Image-Captioning Transformer (ECCV 2022) ☆198 · Updated 2 years ago
- SimVLM: Simple Visual Language Model Pretraining with Weak Supervision ☆36 · Updated 3 years ago
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ☆15 · Updated 2 years ago
- Implementation of 'End-to-End Transformer Based Model for Image Captioning' [AAAI 2022] ☆69 · Updated last year
- [CVPR 2021] Visual Semantic Role Labeling for Video Understanding (https://arxiv.org/abs/2104.00990) ☆61 · Updated 4 years ago
- Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps [AAAI 2021] ☆57 · Updated 3 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated last year
- Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone ☆131 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆209 · Updated 3 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- [CVPR 2020] Transform and Tell: Entity-Aware News Image Captioning ☆93 · Updated last year
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆84 · Updated 3 years ago
- CapDec: SOTA zero-shot image captioning using CLIP and GPT-2, EMNLP 2022 (Findings) ☆203 · Updated 2 years ago
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆61 · Updated 3 years ago