tezansahu / VQA-With-Multimodal-Transformers
Exploring multimodal fusion-type transformer models for visual question answering (on DAQUAR dataset)
☆37 · Updated 3 years ago
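As context for the comparisons below, here is a minimal sketch of the fusion-type approach the repository describes: encode the question with a pretrained text transformer, encode the image with a vision transformer, concatenate the pooled embeddings, and classify over a fixed answer vocabulary. The encoder checkpoints and head sizes are illustrative assumptions, not the repository's exact configuration.

```python
# Minimal late-fusion VQA sketch (illustrative, not the repo's exact code).
# Assumes HuggingFace `transformers`; checkpoint names are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel

class LateFusionVQA(nn.Module):
    def __init__(self, num_answers,
                 text_name="bert-base-uncased",                    # assumed checkpoint
                 image_name="google/vit-base-patch16-224-in21k"):  # assumed checkpoint
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_name)
        self.image_encoder = AutoModel.from_pretrained(image_name)
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.image_encoder.config.hidden_size)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_answers),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Pooled [CLS]-style embedding from each modality.
        t = self.text_encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state[:, 0]
        v = self.image_encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
        fused = torch.cat([t, v], dim=-1)  # simple concatenation fusion
        return self.classifier(fused)      # logits over the answer vocabulary
```

Treating VQA as classification over a closed answer set is the common setup for DAQUAR-style benchmarks; the repositories below differ mainly in the encoders they use and where the fusion happens.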
Alternatives and similar repositories for VQA-With-Multimodal-Transformers
Users interested in VQA-With-Multimodal-Transformers are comparing it to the libraries listed below
- Pytorch implementation of VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) using the VQA v2.0 dataset for the open-ended task ☆21 · Updated 5 years ago
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week. ☆34 · Updated 4 years ago
- In-the-wild Question Answering ☆15 · Updated 2 years ago
- An implementation that downstreams pre-trained V+L models to VQA tasks. Now supports: VisualBERT, LXMERT, and UNITER ☆165 · Updated 3 years ago
- A collection of multimodal datasets and visual features for VQA and captioning in pytorch. Just run "pip install multimodal" (a usage sketch follows this list). ☆83 · Updated 3 years ago
- Pytorch implementation of image captioning using a transformer-based model. ☆68 · Updated 2 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 · Updated last year
- This repository provides a comprehensive collection of research papers focused on multimodal representation learning, all of which have been… ☆82 · Updated 6 months ago
- Pytorch VQA: Visual Question Answering (https://arxiv.org/pdf/1505.00468.pdf) ☆98 · Updated 2 years ago
- Hate-CLIPper: Multimodal Hateful Meme Classification with Explicit Cross-modal Interaction of CLIP features - Accepted at EMNLP 2022 Workshop ☆56 · Updated 8 months ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- CLIPxGPT Captioner is an image captioning model based on OpenAI's CLIP and GPT-2. ☆118 · Updated 10 months ago
- ☆93 · Updated 3 years ago
- Visual Question Answering Paper List. ☆53 · Updated 3 years ago
- A curated list of vision-and-language pre-training (VLP). :-) ☆60 · Updated 3 years ago
- ☆65 · Updated 3 years ago
- ☆67 · Updated 2 years ago
- CapDec: SOTA Zero Shot Image Captioning Using CLIP and GPT2, EMNLP 2022 (Findings) ☆202 · Updated last year
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆279 · Updated 3 years ago
- Using LSTM or Transformer to solve Image Captioning in Pytorch ☆79 · Updated 4 years ago
- ☆64 · Updated 4 years ago
- CLIP (Contrastive Language–Image Pre-training) for Italian ☆185 · Updated 2 years ago
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆176 · Updated 3 years ago
- [TMLR 2022] High-Modality Multimodal Transformer ☆117 · Updated last year
- The repository collects various multi-modal transformer architectures, including image transformer, video transformer, image-language… ☆233 · Updated 3 years ago
- [NeurIPS'20-Competition] Detecting Hate Speech in Memes Using Multimodal Deep Learning Approaches: Prize-winning solution to the Hateful Memes… ☆61 · Updated last year
- PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022) ☆246 · Updated 6 months ago
- Implementation code of the work "Exploiting Multiple Sequence Lengths in Fast End to End Training for Image Captioning" ☆94 · Updated last year
- Hyperparameter analysis for Image Captioning using LSTMs and Transformers ☆26 · Updated 2 years ago
- Code for WACV 2023 paper "VLC-BERT: Visual Question Answering with Contextualized Commonsense Knowledge" ☆21 · Updated 2 years ago
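As referenced from the `multimodal` entry above, the package's stated one-liner install suggests usage along these lines. This is a hypothetical sketch: the class name, constructor arguments, and returned fields are assumptions based on the entry's description, so check the package's README before relying on them.

```python
# Hypothetical usage of the `multimodal` package (pip install multimodal).
# Class names and arguments are assumptions; verify against the package docs.
from multimodal.datasets import VQA2

train = VQA2(dir_data="data", split="train")  # assumed: fetches annotations on first use
item = train[0]
print(item.keys())  # expected fields such as the question text, image id, and answers
```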