facebookresearch / htstep
HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos.
☆18 · Updated last year
Alternatives and similar repositories for htstep
Users that are interested in htstep are comparing it to the libraries listed below
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆42 · Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆17 · Updated last year
- Official implementation of "HowToCaption: Prompting LLMs to Transform Video Annotations at Scale" (ECCV 2024) ☆53 · Updated 8 months ago
- This repository contains the Adverbs in Recipes (AIR) dataset and the code published at the CVPR 2023 paper "Learning Action Changes by Me…" ☆13 · Updated 2 years ago
- [ECCV 2024] EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval ☆38 · Updated last month
- ☆23 · Updated last month
- The champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023 ☆17 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" (ICCV 2023) ☆98 · Updated 11 months ago
- PyTorch implementation for Egoinstructor at CVPR 2024 ☆22 · Updated 6 months ago
- Code implementation for our ECCV 2022 paper "My View is the Best View: Procedure Learning from Egocentric Videos" ☆28 · Updated last year
- [AAAI 2025] Grounded Multi-Hop VideoQA in Long-Form Egocentric Videos ☆24 · Updated last week
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆33 · Updated 8 months ago
- Official PyTorch code of GroundVQA (CVPR 2024) ☆61 · Updated 8 months ago
- [ICLR 2023] Temporal Alignment Representations with Contrastive Learning ☆26 · Updated 2 years ago
- ☆24 · Updated last year
- ICCV 2023: Disentangling Spatial and Temporal Learning for Efficient Image-to-Video Transfer Learning ☆41 · Updated last year
- Official code for the ICLR 2023 paper "Compositional Prompt Tuning with Motion Cues for Open-vocabulary Video Relation Detection" ☆43 · Updated last year
- Official PyTorch repository for "Knowing Where to Focus: Event-aware Transformer for Video Grounding" (ICCV 2023) ☆50 · Updated last year
- The official PyTorch implementation of the CVPR 2024 paper "PREGO: online mistake detect…" ☆23 · Updated last month
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap…" ☆73 · Updated last year
- [CVPR 2023 Highlight] CREPE: Can Vision-Language Foundation Models Reason Compositionally? ☆32 · Updated 2 years ago
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos ☆18 · Updated 6 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆59 · Updated 9 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) ☆37 · Updated 2 years ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆33 · Updated last year
- Can I Trust Your Answer? Visually Grounded Video Question Answering (CVPR 2024 Highlight) ☆72 · Updated 11 months ago
- [ECCV 2024 Oral] Towards Scene Graph Anticipation ☆17 · Updated 6 months ago
- ☆31 · Updated 3 years ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆39 · Updated last month
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆129 · Updated 10 months ago