StanfordVL / moma
A dataset for multi-object multi-actor activity parsing
☆41 · Updated 2 years ago
Alternatives and similar repositories for moma
Users interested in moma are comparing it to the libraries listed below.
- ☆128 · Updated last year
- [ICCV 2023] RLIPv2: Fast Scaling of Relational Language-Image Pre-training ☆135 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 ☆130 · Updated 5 months ago
- Code for the CVPR 2023 paper "Procedure-Aware Pretraining for Instructional Video Understanding" ☆50 · Updated 8 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated last year
- [CVPR 2023] Official code for "Learning Procedure-aware Video Representation from Instructional Videos and Their Narrations" ☆54 · Updated 2 years ago
- [NeurIPS 2022] Egocentric Video-Language Pretraining ☆248 · Updated last year
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆44 · Updated 6 months ago
- [CVPR 2022 Oral] Bongard-HOI for benchmarking few-shot visual reasoning ☆72 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] RLIP: Relational Language-Image Pre-training and a series of other methods to solve HOI detection and Scene Grap… ☆78 · Updated last year
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆35 · Updated 2 years ago
- Benchmarking Panoptic Video Scene Graph Generation (PVSG), CVPR'23 ☆97 · Updated last year
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆46 · Updated last year
- [CVPR 2022 Oral] TubeDETR: Spatio-Temporal Video Grounding with Transformers ☆187 · Updated 2 years ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆70 · Updated last month
- ☆74 · Updated last year
- Contrastive Video Question Answering via Video Graph Transformer (IEEE T-PAMI'23) ☆19 · Updated last year
- Code for the ECCV 2022 paper "Mining Cross-Person Cues for Body-Part Interactiveness Learning in HOI Detection" ☆37 · Updated 2 years ago
- [CVPR 2022] Bridge-Prompt: Towards Ordinal Action Understanding in Instructional Videos ☆99 · Updated 2 years ago
- [CVPR 2022] Visual Abductive Reasoning ☆123 · Updated 11 months ago
- Discovering human interaction with novel objects via zero-shot learning (CVPR 2020) ☆42 · Updated 5 years ago
- Code and dataset for the CVPRW paper "Where did I leave my keys? — Episodic-Memory-Based Question Answering on Egocentric Videos" ☆28 · Updated 2 years ago
- [arXiv:2309.16669] Code release for "Training a Large Video Model on a Single Machine in a Day" ☆135 · Updated last month
- Series of works (ECCV 2020, CVPR 2021, CVPR 2021, ECCV 2022) on Compositional Learning for Human-Object Interaction Exploration ☆80 · Updated 2 years ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆24 · Updated last year
- [NeurIPS 2022] Zero-Shot Video Question Answering via Frozen Bidirectional Language Models ☆156 · Updated 10 months ago
- [NeurIPS 2023 D&B] VidChapters-7M: Video Chapters at Scale ☆198 · Updated last year
- Code for the "Look for the Change" paper published at CVPR 2022 ☆36 · Updated 2 years ago
- Home Action Genome: Cooperative Contrastive Action Understanding ☆22 · Updated 3 years ago
- Official implementation of the paper "Boosting Human-Object Interaction Detection with Text-to-Image Diffusion Model" ☆64 · Updated 2 years ago