aurooj / WSG-VQA-VLTransformers
Weakly Supervised Grounding for VQA in Vision-Language Transformers
☆16 · Updated 2 years ago
Alternatives and similar repositories for WSG-VQA-VLTransformers
Users who are interested in WSG-VQA-VLTransformers are comparing it to the libraries listed below
- Improving One-stage Visual Grounding by Recursive Sub-query Construction, ECCV 2020 ☆85 · Updated 3 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles ☆36 · Updated 3 years ago
- Some papers about *diverse* image (and a few video) captioning ☆26 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- ☆30 · Updated 2 years ago
- ☆83 · Updated 3 years ago
- Video Graph Transformer for Video Question Answering (ECCV'22) ☆48 · Updated 2 years ago
- A PyTorch implementation of a data augmentation method for visual question answering ☆21 · Updated 2 years ago
- The official PyTorch code for "Relation-aware Instance Refinement for Weakly Supervised Visual Grounding", accepted at CVPR 2021 ☆27 · Updated 3 years ago
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆26 · Updated 3 years ago