tezansahu / VQA-With-Multimodal-Transformers
Exploring multimodal fusion-type transformer models for visual question answering (on the DAQUAR dataset)
☆34 · Updated 2 years ago
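The header only names the approach at a high level. As a rough illustration of what a fusion-type multimodal transformer for VQA typically looks like, here is a minimal sketch against the HuggingFace `transformers` API: a text encoder and an image encoder run separately, their pooled embeddings are concatenated, and an MLP classifies over a fixed answer vocabulary (VQA on DAQUAR is commonly framed as classification). `FusionVQAModel`, the checkpoint choices, and the answer-vocabulary size are illustrative assumptions, not this repository's actual code.

```python
import torch
import torch.nn as nn
from PIL import Image
from transformers import AutoModel, AutoTokenizer, ViTImageProcessor, ViTModel


class FusionVQAModel(nn.Module):  # hypothetical name, for illustration only
    """Late fusion: concatenate pooled text and image embeddings,
    then classify over a fixed answer vocabulary."""

    def __init__(self, num_answers: int, hidden_dim: int = 512):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.image_encoder = ViTModel.from_pretrained(
            "google/vit-base-patch16-224-in21k")
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.image_encoder.config.hidden_size)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text_out = self.text_encoder(input_ids=input_ids,
                                     attention_mask=attention_mask)
        image_out = self.image_encoder(pixel_values=pixel_values)
        # Use each encoder's [CLS] token embedding as its pooled representation.
        fused = torch.cat([text_out.last_hidden_state[:, 0],
                           image_out.last_hidden_state[:, 0]], dim=-1)
        return self.classifier(fused)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = FusionVQAModel(num_answers=582)  # placeholder answer-vocabulary size

question = tokenizer("what is on the table?", return_tensors="pt")
image = processor(images=Image.new("RGB", (224, 224)),  # blank stand-in image
                  return_tensors="pt")
logits = model(question.input_ids, question.attention_mask, image.pixel_values)
answer_id = logits.argmax(dim=-1)  # index into the answer vocabulary
```

Framing VQA as classification over a closed answer set keeps training simple (cross-entropy over `num_answers` logits) at the cost of never producing answers outside that vocabulary.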
Related projects
Alternatives and complementary repositories for VQA-With-Multimodal-Transformers
- PyTorch implementation of VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) using the VQA v2.0 dataset for open-ended ta… ☆17 · Updated 4 years ago
- Using an LSTM or a Transformer to solve image captioning in PyTorch ☆75 · Updated 3 years ago
- PyTorch implementation of image captioning using a transformer-based model. ☆61 · Updated last year
- VQA-Med 2020 ☆13 · Updated last year
- PyTorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆95 · Updated last year
- In-the-wild Question Answering ☆15 · Updated last year
- A collection of multimodal datasets and visual features for VQA and captioning in PyTorch. Just run "pip install multimodal" ☆79 · Updated 2 years ago
- Visual Question Answering in PyTorch with various attention models ☆20 · Updated 4 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆44 · Updated 9 months ago
- Implementation of the paper "CPTR: Full Transformer Network for Image Captioning" ☆27 · Updated 2 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week. ☆34 · Updated 3 years ago
- Visual Question Answering Paper List. ☆51 · Updated 2 years ago
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆117 · Updated last year
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports VisualBERT, LXMERT, and UNITER. ☆163 · Updated last year
- Hyperparameter analysis for image captioning using LSTMs and Transformers ☆27 · Updated last year
- ☆63 · Updated 2 years ago
- ☆91 · Updated last year
- Code for Dense Relational Captioning ☆67 · Updated last year
- An unofficial implementation of the CVPR 2020 paper "Multimodal Categorization of Crisis Events in Social Media" ☆12 · Updated 2 years ago
- A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset … ☆14 · Updated 11 months ago
- Code for the WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated last year
- GRIT: Faster and Better Image-captioning Transformer (ECCV 2022) ☆185 · Updated last year
- ☆58 · Updated last year
- Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: prize-winning solution to the Hateful Memes Challenge. https://arxi… ☆54 · Updated 9 months ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆120 · Updated last year
- Implemented 3 different architectures to tackle the image captioning problem, i.e., Merged Encoder-Decoder, Bahdanau Attention, Transformer… ☆41 · Updated 3 years ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features, accepted at the EMNLP 2022 Work… ☆42 · Updated last year
- An Empirical Study of GPT-3 for Few-Shot Knowledge-Based VQA, AAAI 2022 (Oral) ☆84 · Updated 2 years ago
- A length-controllable and non-autoregressive image captioning model. ☆66 · Updated 3 years ago