facebookresearch / VidOSC
Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)
☆32 · Updated 5 months ago
Alternatives and similar repositories for VidOSC:
Users interested in VidOSC are comparing it to the repositories listed below.
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆25 · Updated 10 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆34 · Updated this week
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆15 · Updated 10 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆43 · Updated 7 months ago
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation …" ☆36 · Updated last week
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆54 · Updated 6 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆60 · Updated last year
- The official PyTorch implementation of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024 paper "PREGO: online mistake detect…" ☆21 · Updated 3 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆54 · Updated 4 months ago
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆95 · Updated 8 months ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆30 · Updated last year
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆33 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆59 · Updated 5 months ago
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆35 · Updated last year
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos ☆17 · Updated 11 months ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆30 · Updated 6 months ago
- Python scripts to download Assembly101 from Google Drive ☆38 · Updated 4 months ago
- Team Doggeee's solution to the Ego4D LTA challenge @ CVPRW'23 ☆12 · Updated last year
- Official implementation of "A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives", accepted at CVPR 2… ☆17 · Updated 8 months ago
- [NeurIPS 2023] OV-PARTS: Towards Open-Vocabulary Part Segmentation ☆78 · Updated 8 months ago
- ☆24 · Updated last year
- ☆110 · Updated 9 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆39 · Updated 10 months ago
- ☆19 · Updated last year
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated 3 months ago
- ☆25 · Updated last year
- Egocentric Video Understanding Dataset (EVUD) ☆26 · Updated 7 months ago
- This is the official implementation of the EMNLP Findings paper "VideoINSTA: Zero-shot Long-Form Video Understanding via Informative Spatia…" ☆17 · Updated 3 months ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆20 · Updated 5 months ago
- Code for the VOST dataset ☆24 · Updated last year