bytedance / VTVQA
Towards Video Text Visual Question Answering: Benchmark and Baseline
☆40 · Updated last year
Alternatives and similar repositories for VTVQA
Users interested in VTVQA are comparing it to the libraries listed below:
- UniTAB: Unifying Text and Box Outputs for Grounded VL Modeling, ECCV 2022 (Oral Presentation) ☆89 · Updated 2 years ago
- ☆110 · Updated 2 years ago
- Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal architecture for Scene Text Visual Question Answering ☆55 · Updated last year
- [SIGIR 2022] CenterCLIP: Token Clustering for Efficient Text-Video Retrieval ☆133 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆61 · Updated 2 years ago
- [CVPR 2022] The code for our paper "Object-aware Video-language Pre-training for Retrieval" ☆62 · Updated 3 years ago
- [CVPR2023] All in One: Exploring Unified Video-Language Pre-training ☆281 · Updated 2 years ago
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 11 months ago
- [ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆136 · Updated 2 years ago
- [CVPR-2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method ☆32 · Updated 2 years ago
- Official PyTorch implementation of Clover: Towards A Unified Video-Language Alignment and Fusion Model (CVPR2023) ☆40 · Updated 2 years ago
- 🦩 Visual Instruction Tuning with Polite Flamingo - training multi-modal LLMs to be both clever and polite! (AAAI-24 Oral) ☆64 · Updated 2 years ago
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆139 · Updated 2 years ago
- ☆133 · Updated last year
- A PyTorch implementation of VIOLET ☆140 · Updated last year
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated 7 months ago
- TAP: Text-Aware Pre-training for Text-VQA and Text-Caption, CVPR 2021 (Oral) ☆73 · Updated 2 years ago
- [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆152 · Updated 2 years ago
- The HC-STVG Dataset ☆61 · Updated 2 years ago
- The PyTorch implementation for "Video-Text Pre-training with Learned Regions" ☆42 · Updated 3 years ago
- (ACL'2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆36 · Updated last year
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆125 · Updated 2 years ago
- Source code for EMNLP 2022 paper "PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models" ☆48 · Updated 3 years ago
- [ECCV2022] Contrastive Vision-Language Pre-training with Limited Resources ☆45 · Updated 3 years ago
- Use CLIP to represent video for Retrieval Task ☆70 · Updated 4 years ago
- ☆30 · Updated last year
- This repository contains the dataset, codebase, and benchmarks for our paper: <CNVid-3.5M: Build, Filter, and Pre-train the Large-scale P… ☆25 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022) ☆208 · Updated 2 years ago
- Product1M ☆90 · Updated 3 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 3 years ago