google-research-datasets / videoCC-data
VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automatic pipeline starting from the Conceptual Captions Image-Captioning Dataset.
☆76 · Updated last year
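The released annotations pair each caption with a video URL (typically scoped to a start/end clip within the video). Below is a minimal sketch of how one might iterate over such pairs; the file name and row layout (video URL, start, end, caption) are assumptions for illustration only — check the repository's README for the actual file names and schema.

```python
import csv

# Minimal sketch of consuming VideoCC-style annotations. The file name and
# row layout below (video_url, start, end, caption) are assumptions made for
# illustration; consult the repository for the actual schema.
ANNOTATIONS_CSV = "video_cc_annotations.csv"  # hypothetical file name

def iter_video_caption_pairs(csv_path=ANNOTATIONS_CSV):
    """Yield (video_url, caption) tuples from a CSV of clip annotations."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if len(row) < 4:
                continue  # skip malformed rows
            video_url, start, end, caption = row[0], row[1], row[2], row[3]
            yield video_url, caption

if __name__ == "__main__":
    # Print the first few pairs as a sanity check.
    for i, (url, caption) in enumerate(iter_video_caption_pairs()):
        print(url, "->", caption)
        if i >= 4:
            break
```

From there, a typical video-text training pipeline would download each clip by URL and pair the decoded frames with the caption; the dataset itself ships only the URL/caption annotations, not the videos.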
Related projects
Alternatives and complementary repositories for videoCC-data
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆39 · Updated last year
- ☆102 · Updated last year
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆117 · Updated last week
- A PyTorch implementation of EmpiricalMVM ☆39 · Updated 11 months ago
- ☆72 · Updated 6 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context ☆88 · Updated 2 weeks ago
- Command-line tool for downloading and extending the RedCaps dataset ☆45 · Updated 11 months ago
- Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆39 · Updated last year
- A Unified Framework for Video-Language Understanding ☆56 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆116 · Updated 3 months ago
- Official PyTorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆40 · Updated last year
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆137 · Updated 2 years ago
- Learning to cut end-to-end pretrained modules ☆28 · Updated 4 months ago
- Official code for the CVPR 2023 paper "Test of Time: Instilling Video-Language Models with a Sense of Time" ☆45 · Updated 5 months ago
- Supercharged BLIP-2 that can handle videos ☆116 · Updated 11 months ago
- VPEval codebase from "Visual Programming for Text-to-Image Generation and Evaluation" (NeurIPS 2023) ☆43 · Updated 11 months ago
- (WIP) Use LAION-AI's CLIP "conditioned prior" to generate CLIP image embeds from CLIP text embeds ☆28 · Updated 2 years ago
- ☆73 · Updated 2 years ago
- ☆48 · Updated last year
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆33 · Updated 2 years ago
- Release of ImageNet-Captions ☆45 · Updated last year
- ☆55 · Updated 6 months ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" (CVPR 2024) ☆44 · Updated 8 months ago
- Official code of the ICCV 2021 paper "Learning to Cut by Watching Movies" ☆51 · Updated 2 years ago
- ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning (ICCV 2021) ☆54 · Updated 3 years ago
- ☆48 · Updated last year
- Official repository for the General Robust Image Task (GRIT) Benchmark ☆50 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆46 · Updated last month
- ☆30 · Updated 2 months ago
- [ACCV 2024] Official implementation of "AutoAD-Zero: A Training-Free Framework for Zero-Shot Audio Description". Junyu Xie, Tengda Han, M… ☆17 · Updated last month