google / video-localized-narratives
☆59 · Updated last year
Alternatives and similar repositories for video-localized-narratives
Users interested in video-localized-narratives are comparing it to the repositories listed below.
- [CVPR 2023] HierVL: Learning Hierarchical Video-Language Embeddings ☆46 · Updated last year
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆53 · Updated last year
- Language Repository for Long Video Understanding ☆31 · Updated 11 months ago
- A Unified Framework for Video-Language Understanding ☆57 · Updated last year
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆49 · Updated 4 months ago
- Code for paper "Point and Ask: Incorporating Pointing into Visual Question Answering" ☆19 · Updated 2 years ago
- ☆35 · Updated 8 months ago
- VideoCC is a dataset containing (video-URL, caption) pairs for training video-text machine learning models. It is created using an automa… ☆78 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆129 · Updated 10 months ago
- A PyTorch implementation of EmpiricalMVM ☆41 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆101 · Updated 4 months ago
- ☆72 · Updated last year
- ☆49 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆95 · Updated 7 months ago
- [ICLR 2024] Codes and Models for COSA: Concatenated Sample Pretrained Vision-Language Foundation Model ☆43 · Updated 5 months ago
- ☆58 · Updated last year
- (ACL 2023) MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning ☆35 · Updated 10 months ago
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding ☆51 · Updated last year
- Multi-modal video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆53 · Updated 2 years ago
- EILeV: Eliciting In-Context Learning in Vision-Language Models for Videos Through Curated Data Distributional Properties ☆124 · Updated 6 months ago
- PyTorch implementation of Twelve Labs' Video Foundation Model evaluation framework & open embeddings ☆27 · Updated 9 months ago
- Implementation of the model "MC-ViT" from the paper "Memory Consolidation Enables Long-Context Video Understanding" ☆20 · Updated 2 months ago
- Research code for "Training Vision-Language Transformers from Captions Alone" ☆34 · Updated 2 years ago
- Code and Dataset for the CVPRW paper "Where did I leave my keys? Episodic-Memory-Based Question Answering on Egocentric Videos" ☆25 · Updated last year
- 🤖 [ICLR 2025] Multimodal Video Understanding Framework (MVU) ☆43 · Updated 4 months ago
- Code release for the paper "Egocentric Video Task Translation" (CVPR 2023 Highlight) ☆32 · Updated last year
- Code and data for the paper "Learning Action and Reasoning-Centric Image Editing from Videos and Simulation" ☆28 · Updated 4 months ago
- [ICLR 2025] Knowing Your Target: Target-Aware Transformer Makes Better Spatio-Temporal Video Grounding ☆19 · Updated 2 months ago
- Code for "Pretrained Language Models as Visual Planners for Human Assistance" ☆61 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆98 · Updated 11 months ago