ninatu / howtocaption
Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024)
☆55 · Updated 2 months ago
Alternatives and similar repositories for howtocaption
Users interested in howtocaption are comparing it to the repositories listed below.
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated last year
- ☆72 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆101 · Updated 11 months ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆135 · Updated 2 months ago
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos ☆21 · Updated last year
- ☆80 · Updated 11 months ago
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆188 · Updated last year
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆91 · Updated 7 months ago
- Official This-Is-My Dataset, published at CVPR 2023 ☆16 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆106 · Updated 9 months ago
- ☆102 · Updated 9 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 6 months ago
- ☆25 · Updated 3 months ago
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆31 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- Winning solution to the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆30 · Updated last year
- Official PyTorch implementation of the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions" ☆115 · Updated 2 weeks ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆59 · Updated last year
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆82 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆48 · Updated last year
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Updated 2 years ago
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆52 · Updated 2 years ago
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆156 · Updated 10 months ago
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- Official implementation of the CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer" ☆52 · Updated this week
- [AAAI 2025] VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding ☆113 · Updated 10 months ago
- [ECCV 2022] LocVTP: Video-Text Pre-training for Temporal Localization ☆39 · Updated 3 years ago
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆142 · Updated 4 months ago
- ☆26 · Updated 3 months ago
- Official repo for the CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆51 · Updated last year