facebookresearch / htstep
HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos.
Related projects:
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023)
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024)
- The Adverbs in Recipes (AIR) dataset and the code published with the CVPR 2023 paper "Learning Action Changes by Me…"
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" (ICCV 2023)
- HowToCaption: Prompting LLMs to Transform Video Annotations at Scale
- The champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023
- NaQ: Leveraging Narrations as Queries to Supervise Episodic Memory (CVPR 2023)
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)
- Official implementation of SnAG (CVPR 2024)
- Official PyTorch implementation of "Test-Time Zero-Shot Temporal Action Localization" (CVPR 2024)
- Official PyTorch code for "Grounded Question-Answering in Long Egocentric Videos", accepted at CVPR 2024
- EgoCVR: An Egocentric Benchmark for Fine-Grained Composed Video Retrieval (ECCV 2024)
- Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task (AAAI 2023, Oral)
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding"
- Official implementation of the CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer"
- Code for the paper "VideoTree: Adaptive Tree-based Video Representation for LLM Reasoning on Long Videos"
- Official repo for the CVPR 2024 paper "FACT: Frame-Action Cross-Attention Temporal Modeling for Efficient Fully-Supervised Action Segmentation"
- How Much Temporal Long-Term Context is Needed for Action Segmentation? (ICCV 2023)
- Official repo for the CVPR 2022 (Oral) paper "Revisiting the 'Video' in Video-Language Understanding". Contains code for the Atemporal Probe (…
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
- PyTorch implementation of "With a Little Help from my Temporal Context: Multimodal Egocentric Action Recognition" (BMVC 2021)
- Visual Abductive Reasoning (CVPR 2022)
- Official PyTorch implementation of the CVPR 2024 paper "PREGO: online mistake detect…
- Do you remember? Dense Video Captioning with Cross-Modal Memory Retrieval (CVPR 2024)