facebookresearch / VidOSC
Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)
☆33 · Updated 9 months ago
Alternatives and similar repositories for VidOSC
Users interested in VidOSC are comparing it to the repositories listed below.
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆27 · Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆17 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated 11 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆61 · Updated 9 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆39 · Updated 2 months ago
- [ECCV 2024, Oral, Best Paper Finalist] The official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation…" ☆37 · Updated 4 months ago
- Python scripts to download Assembly101 from Google Drive ☆45 · Updated 8 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆66 · Updated last year
- The official repository of LLAVIDAL ☆15 · Updated 3 months ago
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆33 · Updated 2 years ago
- For the Ego4D VQ3D Task ☆20 · Updated last year
- Code release for "EgoVLPv2: Egocentric Video-Language Pre-training with Fusion in the Backbone" [ICCV 2023] ☆99 · Updated 11 months ago
- Official implementation of EgoHOD (ICLR 2025); 14 EgoVis Challenge winners in CVPR 2024 ☆18 · Updated 3 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 9 months ago
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆33 · Updated last year
- Official PyTorch implementation of "Learning Affordance Grounding from Exocentric Images" (CVPR 2022) ☆62 · Updated 7 months ago
- Ego4D Goal-Step: Toward Hierarchical Understanding of Procedural Activities (NeurIPS 2023) ☆44 · Updated last year
- HT-Step is a large-scale article-grounding dataset of temporal step annotations on how-to videos ☆19 · Updated last year
- Code for the paper "AMEGO: Active Memory from long EGOcentric videos", published at ECCV 2024 ☆38 · Updated 6 months ago
- Video + CLIP baseline for the Ego4D Long-Term Action Anticipation Challenge (CVPR 2022) ☆15 · Updated 2 years ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆21 · Updated 9 months ago
- Data release for "Step Differences in Instructional Video" (CVPR 2024) ☆14 · Updated last year
- Code for the paper: F. Ragusa, G. M. Farinella, A. Furnari, "StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipat…" ☆11 · Updated 2 years ago
- [WIP] Code for LangToMo ☆12 · Updated last week
- [BMVC 2022, IJCV 2023, Best Student Paper, Spotlight] Official code for the paper "In the Eye of Transformer: Global-Local Correlation for…" ☆27 · Updated 4 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆29 · Updated 11 months ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆31 · Updated 9 months ago
- Team Doggeee's solution to the Ego4D LTA challenge @ CVPRW'23 ☆12 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆39 · Updated 2 years ago
- Official PyTorch implementation of the IEEE/CVF CVPR 2024 paper "PREGO: online mistake detect…" ☆23 · Updated 2 weeks ago