StanfordVL / moma
A dataset for multi-object multi-actor activity parsing
☆37 · Updated last year
Alternatives and similar repositories for moma:
Users interested in moma are comparing it to the repositories listed below:
- Code for CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" · ☆49 · Updated 2 months ago
- Code and Dataset for the CVPRW Paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" · ☆25 · Updated last year
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… · ☆73 · Updated 10 months ago
- [CVPR 2022 (Oral)] Bongard-HOI for benchmarking few-shot visual reasoning · ☆66 · Updated 2 years ago
- ☆116 · Updated 10 months ago
- Code for NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" · ☆32 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" · ☆127 · Updated 8 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] · ☆97 · Updated 9 months ago
- [CVPR 2022] Visual Abductive Reasoning · ☆122 · Updated 5 months ago
- [ICCV 2021] Official code for "Learning to Generate Scene Graph from Natural Language Supervision" · ☆100 · Updated 2 years ago
- Official implementation of "A Simple LLM Framework for Long-Range Video Question-Answering" · ☆94 · Updated 5 months ago
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" · ☆52 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) · ☆40 · Updated last year
- [NeurIPS 2022] Egocentric Video-Language Pretraining · ☆238 · Updated 11 months ago
- [ICCV 2021] Target Adaptive Context Aggregation for Video Scene Graph Generation · ☆58 · Updated 2 years ago
- Official repository for "IntentQA: Context-aware Video Intent Reasoning" (ICCV 2023) · ☆16 · Updated 4 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) · ☆38 · Updated last week
- Home Action Genome: Cooperative Contrastive Action Understanding · ☆20 · Updated 3 years ago
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) · ☆32 · Updated 7 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset · ☆56 · Updated 7 months ago
- ☆69 · Updated last year
- ☆84 · Updated last year
- ☆32 · Updated 11 months ago
- Code for ECCV 2022 paper "Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection" · ☆36 · Updated 2 years ago
- This repository provides the dataset introduced by the paper "Where Does It Exist: Spatio-Temporal Video Grounding for Multi-Form Sentenc… · ☆63 · Updated 4 years ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) · ☆43 · Updated 8 months ago
- Official repo for CVPR 2022 (Oral) paper: Revisiting the "Video" in Video-Language Understanding. Contains code for the Atemporal Probe (… · ☆50 · Updated 10 months ago
- Repo for the paper "Paxion: Patching Action Knowledge in Video-Language Foundation Models" (NeurIPS 2023 Spotlight) · ☆37 · Updated last year
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" · ☆21 · Updated 6 months ago
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 · ☆127 · Updated last month