nguyennm1024 / OSCaR
🔥🔥🔥 Object State Description & Change Detection
★10 · Updated last year
Alternatives and similar repositories for OSCaR
Users interested in OSCaR are comparing it to the repositories listed below.
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align… · ★18 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) · ★33 · Updated 11 months ago
- Code implementation for our ECCV 2022 paper "My View is the Best View: Procedure Learning from Egocentric Videos" · ★29 · Updated last year
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) · ★22 · Updated 9 months ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" · ★23 · Updated 11 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) · ★44 · Updated last year
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos · ★20 · Updated last year
- This repository is the official implementation of Improving Object-centric Learning With Query Optimization · ★51 · Updated 2 years ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 · ★67 · Updated 10 months ago
- Code for the paper: F. Ragusa, G. M. Farinella, A. Furnari. StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipat… · ★11 · Updated 2 years ago
- Code for the "Look for the Change" paper published at CVPR 2022 · ★36 · Updated 2 years ago
- ChangeIt dataset with more than 2600 hours of video with state-changing actions, published at CVPR 2022 · ★11 · Updated 3 years ago
- ★71 · Updated last year
- [CVPR 2024 Champions][ICLR 2025] Solutions for EgoVis Challenges in CVPR 2024 · ★129 · Updated 3 months ago
- ★80 · Updated 3 weeks ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" · ★29 · Updated last year
- Code for the paper "Multi-Task Learning of Object States and State-Modifying Actions from Web Videos" published in TPAMI · ★11 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks · ★58 · Updated 11 months ago
- The official implementation of Error Detection in Egocentric Procedural Task Videos · ★16 · Updated last year
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset · ★68 · Updated last week
- Simple PyTorch Dataset for EPIC-Kitchens-55 and EPIC-Kitchens-100 that handles frames and features (rgb, optical flow, and objects) f… · ★24 · Updated 2 years ago
- Code release for the ICLR 2023 paper: SlotFormer on object-centric dynamics models · ★111 · Updated last year
- [WIP] Code for LangToMo · ★16 · Updated 2 months ago
- Progress-Aware Online Action Segmentation for Egocentric Procedural Task Videos · ★25 · Updated 11 months ago
- Self-supervised algorithm for learning representations from ego-centric video data. Code is tested on EPIC-Kitchens-100 and Ego4D in PyTo… · ★12 · Updated 2 years ago
- [NeurIPS 2022] Egocentric Video-Language Pretraining · ★244 · Updated last year
- PyTorch version of the TCC loss used in the paper "Temporal Cycle-Consistency Learning" · ★26 · Updated 4 years ago
- A repo for processing the raw hand-object detections to produce releasable pickles, plus a library for using these · ★37 · Updated 10 months ago
- [NeurIPS 2023] Self-supervised Object-Centric Learning for Videos · ★29 · Updated 9 months ago
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… · ★37 · Updated 6 months ago