aurooj / WSG-VQA-VLTransformers
Weakly Supervised Grounding for VQA in Vision-Language Transformers
☆16 · Updated 2 years ago
Alternatives and similar repositories for WSG-VQA-VLTransformers
Users interested in WSG-VQA-VLTransformers are comparing it to the libraries listed below
- Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models” ☆48 · Updated 2 years ago
- ☆25 · Updated 3 years ago
- Improving One-stage Visual Grounding by Recursive Sub-query Construction, ECCV 2020 ☆87 · Updated 4 years ago
- Code for Greedy Gradient Ensemble for Visual Question Answering (ICCV 2021, Oral) ☆26 · Updated 3 years ago
- [ICCV 2021] Official implementation of the paper "TRAR: Routing the Attention Spans in Transformers for Visual Question Answering" ☆67 · Updated 4 years ago
- Implementation for CVPR 2022 paper "Injecting Semantic Concepts into End-to-End Image Captioning". ☆43 · Updated 3 years ago
- The official PyTorch code for "Relation-aware Instance Refinement for Weakly Supervised Visual Grounding" accepted by CVPR 2021 ☆27 · Updated 4 years ago
- ☆30 · Updated 2 years ago
- Official PyTorch implementation of our CVPR 2022 paper: Beyond a Pre-Trained Object Detector: Cross-Modal Textual and Visual Context for … ☆60 · Updated 3 years ago
- Code for our ACL 2021 paper: "Check It Again: Progressive Visual Question Answering via Visual Entailment" ☆31 · Updated 3 years ago
- ☆31 · Updated 4 years ago
- The PyTorch implementation for "Video-Text Pre-training with Learned Regions" ☆42 · Updated 3 years ago
- An unofficial PyTorch implementation of "TransVG: End-to-End Visual Grounding with Transformers". ☆52 · Updated 4 years ago
- Cross Modal Retrieval with Querybank Normalisation ☆56 · Updated last year
- A PyTorch implementation of a data augmentation method for visual question answering ☆21 · Updated 2 years ago
- ROSITA: Enhancing Vision-and-Language Semantic Alignments via Cross- and Intra-modal Knowledge Integration ☆56 · Updated 2 years ago
- ☆88 · Updated 3 years ago
- Official Code for 'RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words' (CVPR 2021) ☆123 · Updated 2 years ago
- [CVPR 2022] The code for our paper "Object-aware Video-language Pre-training for Retrieval" ☆62 · Updated 3 years ago
- A Fast and Accurate One-Stage Approach to Visual Grounding, ICCV 2019 (Oral) ☆148 · Updated 4 years ago
- Human-like Controllable Image Captioning with Verb-specific Semantic Roles. ☆36 · Updated 3 years ago
- [ECCV'22 Poster] Explicit Image Caption Editing ☆22 · Updated 2 years ago
- ☆15 · Updated last year
- Implementation of our IJCAI 2022 oral paper, ER-SAN: Enhanced-Adaptive Relation Self-Attention Network for Image Captioning. ☆23 · Updated 2 years ago
- Learning phrase grounding from captioned images through InfoNCE bound on mutual information ☆74 · Updated 5 years ago
- ☆26 · Updated 3 years ago
- This repository contains code for the paper "Fine-Grained Predicates Learning for Scene Graph Generation (CVPR 2022)". ☆26 · Updated last year
- Data Release for VALUE Benchmark ☆31 · Updated 3 years ago
- Code and Resources for the Transformer Encoder Reasoning and Alignment Network (TERAN), accepted for publication in ACM Transactions on M… ☆74 · Updated last year
- Colorful Prompt Tuning for Pre-trained Vision-Language Models ☆49 · Updated 2 years ago