antoyang / VidChapters
[NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale
☆194 · Updated last year
Alternatives and similar repositories for VidChapters
Users interested in VidChapters are comparing it to the repositories listed below.
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆188 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆158 · Updated 8 months ago
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆336 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆105 · Updated 7 months ago
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆171 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆100 · Updated 10 months ago
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆91 · Updated 5 months ago
- ☆182 · Updated 10 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated 3 weeks ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe…* ☆126 · Updated last year
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 8 months ago
- [ICCV 2023] UniVTG: Towards Unified Video-Language Temporal Grounding ☆360 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆135 · Updated 2 years ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆129 · Updated 3 months ago
- ☆139 · Updated 11 months ago
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆111 · Updated 8 months ago
- ☆76 · Updated 9 months ago
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆229 · Updated 2 years ago
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆167 · Updated last year
- ☆72 · Updated last year
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆128 · Updated 9 months ago
- Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval [ICCV'21] ☆370 · Updated 3 years ago
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆55 · Updated last week
- GRiT: A Generative Region-to-text Transformer for Object Understanding (ECCV 2024)