google-research-datasets / videoCC-data
VideoCC is a dataset of (video-URL, caption) pairs for training video-text machine learning models. It was created with an automatic pipeline starting from the Conceptual Captions image-captioning dataset.
☆76 · Updated 2 years ago
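For a concrete sense of how (video-URL, caption) pairs like these are consumed downstream, here is a minimal Python sketch that parses a VideoCC-style CSV release into typed pairs. The file name `video_cc_public.csv` and the column order (URL, clip start, clip end, caption) are assumptions for illustration, not the confirmed schema; check the repo's README for the actual release format.

```python
import csv
from dataclasses import dataclass

@dataclass
class VideoCaptionPair:
    """One (video-URL, caption) training example with clip bounds."""
    video_url: str
    start_ms: int   # clip start within the source video (assumed milliseconds)
    end_ms: int     # clip end within the source video (assumed milliseconds)
    caption: str

def load_pairs(path: str) -> list[VideoCaptionPair]:
    """Parse a VideoCC-style CSV into (video-URL, caption) pairs.

    The column order (url, start, end, caption) is an assumption for
    illustration; consult the repo README for the released schema.
    """
    pairs: list[VideoCaptionPair] = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            # Skip header or malformed rows rather than crashing on them.
            if len(row) < 4 or not row[1].isdigit():
                continue
            url, start, end, caption = row[:4]
            pairs.append(VideoCaptionPair(url, int(start), int(end), caption))
    return pairs

if __name__ == "__main__":
    # "video_cc_public.csv" is a hypothetical file name for this sketch.
    for pair in load_pairs("video_cc_public.csv")[:3]:
        print(f"{pair.video_url} [{pair.start_ms}-{pair.end_ms} ms]: {pair.caption}")
```

In practice, each URL and time span is then typically fetched and trimmed with a downloader such as yt-dlp plus ffmpeg before the clips are paired with their captions for training.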
Alternatives and similar repositories for videoCC-data:
Users interested in videoCC-data are comparing it to the repositories listed below.
- ☆106 · Updated 2 years ago
- ☆72 · Updated 9 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context. ☆91 · Updated 3 months ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆118 · Updated 3 months ago
- [ACCV 2024] Official implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, M… ☆23 · Updated 3 weeks ago
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆39 · Updated last year
- A Unified Framework for Video-Language Understanding ☆56 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆121 · Updated 6 months ago
- An official PyTorch implementation of "Learning To Recognize Procedural Activities with Distant Supervision". In this repository, w… ☆41 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆96 · Updated 2 months ago
- A PyTorch implementation of EmpiricalMVM ☆40 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆132 · Updated last year
- Official code for the CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆45 · Updated 8 months ago
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆41 · Updated last month
- Command-line tool for downloading and extending the RedCaps dataset. ☆46 · Updated last year
- Let's make a video clip ☆93 · Updated 2 years ago
- Story-Based Retrieval with Contextual Embeddings. Largest freely available movie video dataset. [ACCV'20] ☆169 · Updated 2 years ago
- The 1st-place solution to the 2022 Ego4D Natural Language Queries challenge. ☆32 · Updated 2 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆139 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆51 · Updated last year
- Use CLIP to represent videos for retrieval tasks ☆69 · Updated 3 years ago
- ☆75 · Updated 2 years ago
- Official code for the ICCV 2021 paper "Learning to Cut by Watching Movies" ☆51 · Updated 2 years ago
- Supercharged BLIP-2 that can handle videos ☆117 · Updated last year
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆79 · Updated 10 months ago
- (WIP) Use LAION-AI's CLIP "conditioned prior" to generate CLIP image embeds from CLIP text embeds. ☆27 · Updated 2 years ago
- ☆32 · Updated 5 months ago
- ☆56 · Updated 9 months ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024). ☆50 · Updated 4 months ago