google-research-datasets / videoCC-data
VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automatic pipeline starting from the Conceptual Captions Image-Captioning Dataset.
☆78 · Updated 2 years ago
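VideoCC distributes video URLs and captions rather than raw videos, so a typical first step is parsing the released annotation file into (video-URL, caption) tuples. The sketch below assumes a tab-separated file with the URL in the first column and the caption in the last; the file name and column layout are illustrative assumptions, not the repository's documented schema.

```python
# Minimal sketch: stream (video-URL, caption) pairs from a VideoCC-style
# annotation file. The file name and tab-separated layout are assumptions
# for illustration, not the dataset's documented format.
import csv
from typing import Iterator, Tuple

def load_video_caption_pairs(path: str) -> Iterator[Tuple[str, str]]:
    """Yield (video_url, caption) tuples from an assumed TSV layout."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 2:
                # Assumed layout: URL first, caption last; any middle
                # columns (e.g. clip timestamps) are ignored here.
                yield row[0], row[-1]

# Example usage (hypothetical file name):
# for url, caption in load_video_caption_pairs("video_cc_train.tsv"):
#     print(url, caption)
```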
Alternatives and similar repositories for videoCC-data
Users interested in videoCC-data are comparing it to the repositories listed below.
- ☆108 · Updated 2 years ago
- ☆72 · Updated last year
- [CVPR'23 Highlight] AutoAD: Movie Description in Context. ☆98 · Updated 7 months ago
- Official PyTorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆42 · Updated 2 years ago
- A PyTorch implementation of EmpiricalMVM. ☆41 · Updated last year
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 · Updated 2 years ago
- A Unified Framework for Video-Language Understanding. ☆57 · Updated last year
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions. ☆164 · Updated last year
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound". ☆140 · Updated 3 years ago
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning". ☆134 · Updated 2 years ago
- Official code of the ICCV 2021 paper "Learning to Cut by Watching Movies". ☆51 · Updated 2 years ago
- [ACCV 2024] Official implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, M… ☆25 · Updated 4 months ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties. ☆124 · Updated 6 months ago
- Command-line tool for downloading and extending the RedCaps dataset. ☆47 · Updated last year
- Supercharged BLIP-2 that can handle videos. ☆118 · Updated last year
- Use CLIP to represent videos for the retrieval task. ☆69 · Updated 4 years ago
- Learning to cut end-to-end pretrained modules. ☆30 · Updated last month
- [ICLR 2024] Code and models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model. ☆43 · Updated 5 months ago
- Story-Based Retrieval with Contextual Embeddings. Largest freely available movie video dataset. [ACCV'20] ☆178 · Updated 2 years ago
- Official code for our CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time". ☆45 · Updated 11 months ago
- ☆50 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day". ☆129 · Updated 10 months ago
- ☆76 · Updated 2 years ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos", published at CVPR 2024. ☆51 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024). ☆53 · Updated 8 months ago
- (wip) Use LAION-AI's CLIP "conditioned prior" to generate CLIP image embeds from CLIP text embeds. ☆27 · Updated 2 years ago
- Official repository for the General Robust Image Task (GRIT) Benchmark. ☆54 · Updated 2 years ago
- [arXiv 2022] Revitalize Region Feature for Democratizing Video-Language Pre-training. ☆21 · Updated 3 years ago
- Official code for "Bridging Video-text Retrieval with Multiple Choice Questions", CVPR 2022 (Oral). ☆139 · Updated 2 years ago
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations". ☆53 · Updated last year