ninatu / howtocaption
Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024).
☆55 · Updated last month
Alternatives and similar repositories for howtocaption
Users interested in howtocaption are comparing it to the repositories listed below.
- ☆72 · Updated last year
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆100 · Updated 11 months ago
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos ☆21 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV, 2023] ☆100 · Updated last year
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment ☆31 · Updated last year
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) ☆106 · Updated 8 months ago
- Official Implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" ☆91 · Updated 6 months ago
- ☆80 · Updated 10 months ago
- ☆101 · Updated 9 months ago
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding ☆58 · Updated last year
- Winner solution to Generic Event Boundary Captioning task in LOVEU Challenge (CVPR 2023 workshop) ☆30 · Updated last year
- Official PyTorch code of GroundVQA (CVPR'24) ☆62 · Updated last year
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published at the CVPR 23 paper: "Learning Action Changes by Me… ☆13 · Updated 2 years ago
- [CVPR23 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆34 · Updated 2 years ago
- Official This-Is-My Dataset published in CVPR 2023 ☆16 · Updated last year
- [NeurIPS 2023] Self-Chained Image-Language Model for Video Localization and Question Answering ☆188 · Updated last year
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆41 · Updated 5 months ago
- Official implementation of CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer" ☆52 · Updated last year
- Composed Video Retrieval ☆61 · Updated last year
- LongVALE: Vision-Audio-Language-Event Benchmark Towards Time-Aware Omni-Modal Perception of Long Videos (CVPR 2025) ☆51 · Updated 3 months ago
- [CVPR 2024] Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval ☆59 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆135 · Updated last month
- ☆25 · Updated 2 months ago
- Official repo for CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… ☆51 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆52 · Updated 2 years ago
- [ECCV 22] LocVTP: Video-Text Pre-training for Temporal Localization ☆39 · Updated 3 years ago
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) ☆80 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆156 · Updated 9 months ago
- ☆109 · Updated 2 years ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 10 months ago