google-research-datasets / maverics
MAVERICS (Manually-vAlidated Vq^2a Examples fRom Image-Caption datasetS) is a suite of test-only benchmarks for visual question answering (VQA).
☆13 Updated 2 years ago
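MAVERICS splits are distributed as annotation files that pair image references with manually validated question-answer pairs. As a minimal sketch of how such a test-only split might be scored (the file name and the `image_url`, `question`, and `answer` field names below are assumptions for illustration, not the official schema), an exact-match VQA evaluation could look like this:

```python
import json

def vqa_exact_match(predict_fn, annotation_path):
    """Score a VQA model against a MAVERICS-style test split with exact-match accuracy.

    NOTE: the annotation layout assumed here (a JSON list of records with
    'image_url', 'question', and 'answer' fields) is illustrative only.
    """
    with open(annotation_path) as f:
        examples = json.load(f)

    correct = 0
    for ex in examples:
        # predict_fn is any callable mapping (image reference, question) -> answer string.
        pred = predict_fn(ex["image_url"], ex["question"])
        correct += int(pred.strip().lower() == ex["answer"].strip().lower())
    return correct / len(examples)

# Example usage (hypothetical file name):
# accuracy = vqa_exact_match(my_model.answer, "maverics_coco_test.json")
```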
Alternatives and similar repositories for maverics
Users interested in maverics are comparing it to the libraries listed below
- PyTorch code for "Perceiver-VL: Efficient Vision-and-Language Modeling with Iterative Latent Attention" (WACV 2023) ☆33 Updated 3 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆146 Updated 3 years ago
- PyTorch version of VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer (NeurIPS 2021) ☆56 Updated 3 years ago
- ☆47 Updated last year
- This is an official implementation of GRIT-VLP ☆20 Updated 3 years ago
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks, CVPR 2022 (Oral) ☆49 Updated 2 years ago
- Localized Narratives ☆86 Updated 4 years ago
- Command-line tool for downloading and extending the RedCaps dataset. ☆50 Updated 2 years ago
- CLIP-It! Language-Guided Video Summarization ☆75 Updated 4 years ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 Updated 3 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆54 Updated 2 years ago
- PyTorch code for Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners ☆116 Updated 3 years ago
- Extended COCO Validation (ECCV) Caption dataset (ECCV 2022) ☆56 Updated last year
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 Updated last year
- Collection of evaluation code for natural language generation. ☆127 Updated 4 years ago
- Research code for CVPR 2022 paper: "EMScore: Evaluating Video Captioning via Coarse-Grained and Fine-Grained Embedding Matching" ☆26 Updated 3 years ago
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 Updated 2 years ago
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 Updated last year
- Data Release for VALUE Benchmark ☆30 Updated 3 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆143 Updated 8 months ago
- Extended Intramodal and Intermodal Semantic Similarity Judgments for MS-COCO ☆54 Updated 5 years ago
- ☆14 Updated 3 years ago
- MERLOT: Multimodal Neural Script Knowledge Models ☆225 Updated 3 years ago
- PyTorch code for “TVLT: Textless Vision-Language Transformer” (NeurIPS 2022 Oral) ☆124 Updated 2 years ago
- Data and code for NeurIPS 2021 Paper "IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning". ☆55 Updated 2 years ago
- https://arxiv.org/abs/2209.15162 ☆53 Updated 3 years ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 Updated 3 years ago
- An official codebase for paper "CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos (ICCV 23)" ☆52 Updated 2 years ago
- A Unified Framework for Video-Language Understanding ☆61 Updated 2 years ago
- A length-controllable and non-autoregressive image captioning model. ☆69 Updated 4 years ago