bpiyush / TestOfTime
Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time"
☆45 · Updated last year
Alternatives and similar repositories for TestOfTime
Users interested in TestOfTime are comparing it to the repositories listed below
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆54 · Updated 2 years ago
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated 7 months ago
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 2 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆143 · Updated 3 years ago
- ☆76 · Updated 2 years ago
- PyTorch code for "TVLT: Textless Vision-Language Transformer" (NeurIPS 2022 Oral) ☆125 · Updated 2 years ago
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated 2 years ago
- ☆120 · Updated 2 years ago
- A Unified Framework for Video-Language Understanding ☆57 · Updated 2 years ago
- [ICCV 2021 Oral + TPAMI] Just Ask: Learning to Answer Questions from Millions of Narrated Videos ☆123 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆188 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆133 · Updated this week
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆33 · Updated 2 years ago
- This is an official PyTorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆43 · Updated 2 years ago
- This repo contains the code for the recipe of the winning entry to the Ego4d VQ2D challenge at CVPR 2022. ☆41 · Updated 2 years ago
- ☆52 · Updated 3 years ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆157 · Updated 8 months ago
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models (ICCV 2023) ☆141 · Updated 2 months ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆135 · Updated 2 years ago
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆167 · Updated last year
- An ever-growing playground of notebooks showcasing CLIP's impressive zero-shot capabilities ☆172 · Updated 3 years ago
- [TACL'23] VSR: A probing benchmark for spatial understanding of vision-language models. ☆128 · Updated 2 years ago
- [CVPR 2022] Visual Abductive Reasoning ☆122 · Updated 10 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context ☆100 · Updated 9 months ago
- https://arxiv.org/abs/2209.15162 ☆52 · Updated 2 years ago
- [CVPR21] Visual Semantic Role Labeling for Video Understanding (https://arxiv.org/abs/2104.00990) ☆61 · Updated 4 years ago
- ☆109 · Updated 2 years ago