ninatu / howtocaption
Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024)
☆55 · Updated 3 months ago
Alternatives and similar repositories for howtocaption
Users interested in howtocaption are comparing it to the repositories listed below.
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆100 · Updated last year
- HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos ☆22 · Updated last year
- ☆73 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆136 · Updated 3 months ago
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆107 · Updated 10 months ago
- Official This-Is-My dataset, published at CVPR 2023 ☆16 · Updated last year
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" ☆104 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆190 · Updated last year
- ☆26 · Updated 4 months ago
- ☆80 · Updated last year
- Winning solution to the Generic Event Boundary Captioning task in the LOVEU Challenge (CVPR 2023 workshop) ☆30 · Updated last year
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆92 · Updated 8 months ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 7 months ago
- ☆104 · Updated 11 months ago
- Official PyTorch implementation of the paper "CoVR: Learning Composed Video Retrieval from Web Video Captions" ☆118 · Updated last month
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆62 · Updated last year
- Code and data for the paper "Emergent Visual-Semantic Hierarchies in Image-Text Representations" (ECCV 2024) ☆32 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆53 · Updated 2 years ago
- Official PyTorch code of GroundVQA (CVPR'24) ☆64 · Updated last year
- (NeurIPS 2024 Spotlight) TOPA: Extending Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆30 · Updated last year
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆35 · Updated 2 years ago
- ☆140 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆52 · Updated last year
- ☆110 · Updated 2 years ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆83 · Updated last year
- Official repo for the CVPR 2022 (Oral) paper "Revisiting the 'Video' in Video-Language Understanding". Contains code for the Atemporal Probe (… ☆50 · Updated last year
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" ☆146 · Updated 5 months ago
- Official implementation of "SiLVR: A Simple Language-based Video Reasoning Framework" ☆19 · Updated 3 months ago
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me… ☆13 · Updated 2 years ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆33 · Updated 2 years ago