google-research-datasets / videoCC-data
VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automatic pipeline starting from the Conceptual Captions Image-Captioning Dataset.
☆78 · Updated 3 years ago
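Since the dataset is released as (video-URL, caption) pairs, loading it typically means reading a tabular file of URLs and captions. The sketch below is only an illustration: the file name and the column names `video_url` and `caption` are assumptions, not the actual release schema, so check the repository's files for the real format.

```python
import csv

def load_videocc_pairs(path):
    """Load (video-URL, caption) pairs from a CSV export of VideoCC.

    NOTE: the column names 'video_url' and 'caption' are hypothetical;
    consult the actual release files for the real schema.
    """
    pairs = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pairs.append((row["video_url"], row["caption"]))
    return pairs

if __name__ == "__main__":
    # "video_cc_public.csv" is a placeholder file name for the downloaded release.
    for url, caption in load_videocc_pairs("video_cc_public.csv")[:5]:
        print(url, "->", caption)
```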
Alternatives and similar repositories for videoCC-data
Users interested in videoCC-data are comparing it to the repositories listed below.
- ☆110 · Updated 3 years ago
- A Unified Framework for Video-Language Understanding ☆61 · Updated 2 years ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆131 · Updated last year
- Story-Based Retrieval with Contextual Embeddings. Largest freely available movie video dataset. [ACCV'20] ☆194 · Updated 3 years ago
- ☆73 · Updated last year
- Learning to cut end-to-end pretrained modules ☆33 · Updated 9 months ago
- [CVPR'23 Highlight] AutoAD: Movie Description in Context. ☆102 · Updated last year
- Supercharged BLIP-2 that can handle videos ☆123 · Updated 2 years ago
- [ICLR2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated last year
- A task-agnostic vision-language architecture as a step towards General Purpose Vision ☆92 · Updated 4 years ago
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated 2 years ago
- This is an official PyTorch implementation of Learning To Recognize Procedural Activities with Distant Supervision. In this repository, w… ☆43 · Updated 2 years ago
- Use CLIP to represent video for Retrieval Task ☆70 · Updated 4 years ago
- ☆61 · Updated 4 years ago
- Official Code of ICCV 2021 Paper: Learning to Cut by Watching Movies ☆50 · Updated 3 years ago
- Multimodal video-audio-text generation and retrieval between every pair of modalities on the MUGEN dataset. The repo contains the traini… ☆40 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆138 · Updated 4 months ago
- Using LLMs and pre-trained caption models for super-human performance on image captioning. ☆42 · Updated 2 years ago
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions ☆56 · Updated last year
- Official code for our CVPR 2023 paper: Test of Time: Instilling Video-Language Models with a Sense of Time ☆46 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated last month
- ☆76 · Updated 3 years ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆104 · Updated 3 weeks ago
- (wip) Use LAION-AI's CLIP "conditioned prior" to generate CLIP image embeds from CLIP text embeds. ☆29 · Updated 3 years ago
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆173 · Updated 2 years ago
- Easily compute CLIP embeddings from video frames ☆147 · Updated 2 years ago
- Code release for "MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound" ☆146 · Updated 3 years ago
- [CVPR-2023] The official dataset of Advancing Visual Grounding with Scene Knowledge: Benchmark and Method. ☆33 · Updated 2 years ago
- Let's make a video clip ☆96 · Updated 3 years ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆52 · Updated last year