google-research-datasets / maverics
MAVERICS (Manually-vAlidated VQ²A Examples fRom Image-Caption datasetS) is a suite of test-only benchmarks for visual question answering (VQA).
☆13 · Updated 2 years ago
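Since the benchmarks are test-only, a typical use is to score an existing VQA model's predictions against the released question–answer annotations. Below is a minimal sketch using the common VQA soft-accuracy formula min(#matching reference answers / 3, 1); the file name, JSON layout, and field names are assumptions for illustration, not the repository's actual schema or official evaluation code.

```python
import json

def vqa_soft_accuracy(prediction: str, reference_answers: list[str]) -> float:
    """Common VQA-style soft accuracy: min(number of matching reference answers / 3, 1)."""
    pred = prediction.strip().lower()
    matches = sum(1 for ans in reference_answers if ans.strip().lower() == pred)
    return min(matches / 3.0, 1.0)

# Hypothetical layout: a JSON list of records with the model's prediction merged in,
# e.g. {"question_id": ..., "question": ..., "answers": [...], "prediction": ...}.
with open("maverics_annotations.json") as f:  # assumed file name, not the repo's actual file
    records = json.load(f)

scores = [vqa_soft_accuracy(r["prediction"], r["answers"]) for r in records]
print(f"VQA accuracy: {100 * sum(scores) / len(scores):.2f}%")
```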
Alternatives and similar repositories for maverics
Users that are interested in maverics are comparing it to the libraries listed below
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 · Updated 2 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆146 · Updated 3 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 · Updated 2 years ago
- Pytorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 · Updated 2 years ago
- Pytorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆116 · Updated 3 years ago
- Extended COCO Validation (ECCV) Caption dataset (ECCV 2022) ☆56 · Updated last year
- Localized Narratives ☆86 · Updated 4 years ago
- Data Release for VALUE Benchmark ☆30 · Updated 3 years ago
- ☆47 · Updated last year
- This is an official pytorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆43 · Updated 2 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 3 years ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- Command-line tool for downloading and extending the RedCaps dataset. ☆50 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Collection of evaluation code for natural language generation. ☆127 · Updated 4 years ago
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated last year
- This is an official implementation of GRIT-VLP ☆20 · Updated 3 years ago
- CLIP-It! Language-Guided Video Summarization ☆75 · Updated 4 years ago
- [ACL 2021] mTVR: Multilingual Video Moment Retrieval ☆27 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆61 · Updated 2 years ago
- A collection of videos annotated with timelines where each video is divided into segments, and each segment is labelled with a short free… ☆29 · Updated 4 years ago
- Pytorch version of DeCEMBERT: Learning from Noisy Instructional Videos via Dense Captions and Entropy Minimization (NAACL 2021) ☆17 · Updated 3 years ago
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- Data repository for the VALSE benchmark. ☆37 · Updated last year
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆143 · Updated 7 months ago
- Code for ACL 2023 Oral Paper: ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning ☆13 · Updated 5 months ago
- Data and code for NeurIPS 2021 Paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning". ☆55 · Updated 2 years ago
- [ACL 2023] Official PyTorch code for Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆136 · Updated 2 years ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆124 · Updated 2 years ago