google-deepmind / perception_test
☆232 · Updated 4 months ago

Alternatives and similar repositories for perception_test
Users interested in perception_test compare it to the repositories listed below.
- ☆228 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" · ☆135 · Updated last month
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering · ☆188 · Updated last year
- This repo contains documentation and code needed to use the PACO dataset: data loaders and training and evaluation scripts for objects, parts… · ☆286 · Updated last year
- ☆127 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models · ☆157 · Updated 9 months ago
- Code release for "Learning Video Representations from Large Language Models" · ☆537 · Updated 2 years ago
- [NeurIPS 2023] This repository includes the official implementation of the paper "An Inverse Scaling Law for CLIP Training" · ☆316 · Updated last year
- Densely Captioned Images (DCI) dataset repository · ☆192 · Updated last year
- ☆76 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] · ☆100 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 · ☆129 · Updated 4 months ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … · ☆286 · Updated 2 years ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality · ☆85 · Updated last year
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale · ☆198 · Updated last year
- This repo contains the official implementation of the ICLR 2024 paper "Is ImageNet worth 1 video? Learning strong image encoders from 1 long … · ☆93 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" · ☆135 · Updated 2 years ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… · ☆140 · Updated last week
- Official implementation of "Describing Differences in Image Sets with Natural Language" (CVPR 2024 Oral) · ☆121 · Updated last year
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models · ☆129 · Updated 2 years ago
- [NeurIPS 2022] Egocentric Video-Language Pretraining · ☆247 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" · ☆100 · Updated 11 months ago
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024) · ☆335 · Updated last year
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models · ☆336 · Updated last year
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) · ☆176 · Updated 3 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts · ☆331 · Updated last year
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" · ☆294 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) · ☆55 · Updated last month
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers · ☆187 · Updated 2 years ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" · ☆46 · Updated last year