antoyang / VidChapters
[NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale
☆190 · Updated last year
Alternatives and similar repositories for VidChapters
Users interested in VidChapters are comparing it to the repositories listed below
- An official implementation for "X-CLIP: End-to-End Multi-grained Contrastive Learning for Video-Text Retrieval" ☆163 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆157 · Updated 6 months ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆101 · Updated 5 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆185 · Updated last year
- [ICCV 2023 Oral] Unmasked Teacher: Towards Training-Efficient Video Foundation Models ☆331 · Updated last year
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆256 · Updated last year
- [ACL 2023] Official PyTorch code for the Singularity model in "Revealing Single Frame Bias for Video-and-Language Learning" ☆134 · Updated 2 years ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated 11 months ago
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆88 · Updated 3 months ago
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆180 · Updated last year
- Official PyTorch repository for CG-DETR "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Gr… ☆133 · Updated 10 months ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆125 · Updated 7 months ago
- Supercharged BLIP-2 that can handle videos ☆118 · Updated last year
- [NeurIPS 2021] Moment-DETR code and the QVHighlights dataset ☆310 · Updated last year
- MAD: A Scalable Dataset for Language Grounding in Videos from Movie Audio Descriptions ☆165 · Updated last year
- ☆179 · Updated 8 months ago
- FunQA benchmarks funny, creative, and magic videos for challenging tasks including timestamp localization, video description, reasoning, … ☆102 · Updated 6 months ago
- ☆176 · Updated 2 years ago
- [TPAMI 2024] Code and models for VALOR: Vision-Audio-Language Omni-Perception Pretraining Model and Dataset ☆292 · Updated 6 months ago
- [CVPR 2023] Official repository of the paper "Fine-tuned CLIP models are efficient video learners" ☆278 · Updated last year
- [ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training ☆128 · Updated last year
- ☆108 · Updated 2 years ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆120 · Updated this week
- Align and Prompt: Video-and-Language Pre-training with Entity Prompts ☆188 · Updated last month
- [CVPR 2024] Official implementation of AdaTAD: End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames ☆37 · Updated 11 months ago
- Summary of Video-to-Text datasets. This repository is part of the review paper *Bridging Vision and Language from the Video-to-Text Pe… ☆124 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆95 · Updated 8 months ago
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- [ICCV 2023] UniVTG: Towards Unified Video-Language Temporal Grounding ☆355 · Updated last year
- [NeurIPS 2023] Code and model for VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset ☆279 · Updated last year