facebookresearch / htstep
HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos (☆24, updated last year).
Alternatives and similar repositories for htstep
Users interested in htstep are comparing it to the repositories listed below.
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale," ECCV 2024 (☆58, updated 5 months ago)
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] (☆102, updated last year)
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) (☆54, updated last year)
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval (☆41, updated 9 months ago)
- ☆27, updated 6 months ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" (☆138, updated 5 months ago)
- Official implementation of SnAG (CVPR 2024) (☆56, updated 9 months ago)
- [CVPR 2024 Champions][ICLR 2025] Solutions for the EgoVis Challenges at CVPR 2024 (☆132, updated 8 months ago)
- ☆109, updated last year
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR'24, Highlight) (☆83, updated last year)
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) (☆55, updated 2 years ago)
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" (☆106, updated last year)
- Official PyTorch code of GroundVQA (CVPR'24) (☆64, updated last year)
- Hierarchical Video-Moment Retrieval and Step-Captioning (CVPR 2023) (☆107, updated last year)
- The champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023 (☆18, updated 2 years ago)
- [CVPR 2024] Context-Guided Spatio-Temporal Video Grounding (☆65, updated last year)
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization," CVPR 2024 (☆69, updated last year)
- Code for downloading videos from the HowTo100M dataset (☆16, updated 4 years ago)
- Code for the CVPR 2025 paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos" (☆154, updated 7 months ago)
- The Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me…" (☆13, updated 2 years ago)
- [NeurIPS 2022] Egocentric Video-Language Pretraining (☆253, updated last year)
- Official implementation of "Chrono: A Simple Blueprint for Representing Time in MLLMs" (☆92, updated 10 months ago)
- Official repo for the CVPR 2022 (Oral) paper "Revisiting the 'Video' in Video-Language Understanding"; contains code for the Atemporal Probe (… (☆50, updated last year)
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos (☆31, updated 8 months ago)
- ☆24, updated 2 years ago
- Large Language Models are Temporal and Causal Reasoners for Video Question Answering (EMNLP 2023) (☆77, updated 10 months ago)
- [CVPR 2025] Official PyTorch code of "Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation" (☆54, updated 8 months ago)
- [NeurIPS 2022] Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding (☆53, updated last year)
- [ECCV 2022] LocVTP: Video-Text Pre-training for Temporal Localization (☆39, updated 3 years ago)
- ☆80, updated last year