facebookresearch / htstep
HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos.
☆18 · Updated last year
Alternatives and similar repositories for htstep:
Users interested in htstep are comparing it to the repositories listed below.
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) · ☆53 · Updated 7 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) · ☆42 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" (ICCV 2023) · ☆97 · Updated 10 months ago
- Code for the ECCV 2022 paper "My View is the Best View: Procedure Learning from Egocentric Videos" · ☆28 · Updated last year
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" · ☆128 · Updated 9 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) · ☆38 · Updated 3 weeks ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval · ☆36 · Updated 3 weeks ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align… · ☆17 · Updated last year
- Official implementation of Error Detection in Egocentric Procedural Task Videos · ☆16 · Updated 8 months ago
- The Adverbs in Recipes (AIR) dataset and code from the CVPR 2023 paper "Learning Action Changes by Me… · ☆13 · Updated last year
- Champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023 · ☆17 · Updated last year
- ☆23 · Updated 2 weeks ago
- Official PyTorch implementation of the CVPR 2024 paper PREGO: online mistake detect… · ☆23 · Updated 3 weeks ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset · ☆57 · Updated 8 months ago
- ☆24 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) · ☆50 · Updated last year
- Code accompanying Ego-Exo: Transferring Visual Representations from Third-person to First-person Videos (CVPR 2021) · ☆33 · Updated 3 years ago
- [CVPR 2024] Official code for the paper "Temporally Consistent Unbalanced Optimal Transport for Unsupervised Action Segmentation" · ☆37 · Updated 8 months ago
- Official PyTorch code for GroundVQA (CVPR 2024) · ☆60 · Updated 7 months ago
- [ICCV 2023] Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning · ☆41 · Updated last year
- (NeurIPS 2024 Spotlight) TOPA: Extend Large Language Models for Video Understanding via Text-Only Pre-Alignment · ☆30 · Updated 7 months ago
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization" (CVPR 2024) · ☆56 · Updated 7 months ago
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos · ☆24 · Updated 3 weeks ago
- [ICCV 2023] How Much Temporal Long-Term Context is Needed for Action Segmentation? · ☆41 · Updated 10 months ago
- [ICLR 2023] Temporal Alignment Representations with Contrastive Learning · ☆26 · Updated 2 years ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" · ☆33 · Updated last year
- [CVPR 2022 Oral] Temporal Alignment Networks for Long-term Video. Tengda Han, Weidi Xie, Andrew Zisserman. · ☆116 · Updated last year
- NaQ: Leveraging Narrations as Queries to Supervise Episodic Memory (CVPR 2023) · ☆16 · Updated last year
- ☆31 · Updated 3 years ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" · ☆49 · Updated 3 months ago