nguyennm1024 / OSCaR
🔥🔥🔥 Object State Description & Change Detection
⭐10 · Updated last year
Alternatives and similar repositories for OSCaR
Users who are interested in OSCaR are comparing it to the repositories listed below.
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024)β35Updated last year
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Alignβ¦β18Updated last year
- ChangeIt dataset with more than 2600 hours of video with state-changing actions published at CVPR 2022β11Updated 3 years ago
- Code implementation for our ECCV 2022 paper titled "My View is the Best View: Procedure Learning from Egocentric Videos" ⭐30 · Updated last year
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ⭐24 · Updated last year
- A repo for processing the raw hand object detections to produce releasable pickles + library for using these ⭐38 · Updated last year
- Code for the paper "Multi-Task Learning of Object States and State-Modifying Actions from Web Videos" published in TPAMI ⭐11 · Updated last year
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ⭐44 · Updated last year
- Code for the "Look for the Change" paper published at CVPR 2022 ⭐36 · Updated 3 years ago
- Python scripts to download Assembly101 from Google Drive ⭐52 · Updated last year
- Progress-Aware Online Action Segmentation for Egocentric Procedural Task Videos ⭐27 · Updated last year
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ⭐67 · Updated 11 months ago
- Visual Representation Learning with Stochastic Frame Prediction (ICML 2024) ⭐23 · Updated 11 months ago
- Code and models of MOCA (Modular Object-Centric Approach) proposed in "Factorizing Perception and Policy for Interactive Instruction Foll… ⭐38 · Updated last year
- Official codebase for EmbCLIP ⭐132 · Updated 2 years ago
- The PyTorch version of the TCC loss used in the paper "Temporal Cycle-Consistency Learning" (see the sketch after this list) ⭐26 · Updated 5 years ago
- Code for the paper: F. Ragusa, G. M. Farinella, A. Furnari. StillFast: An End-to-End Approach for Short-Term Object Interaction Anticipat… ⭐11 · Updated 2 years ago
- ⭐74 · Updated last year
- The official implementation of Error Detection in Egocentric Procedural Task Videos ⭐19 · Updated last month
- Video narrator written in Python/GTK using vlc-lib ⭐25 · Updated 3 years ago
- [ICLR 2024 Poster] SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos ⭐19 · Updated 2 months ago
- Code implementation for paper titled "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ⭐29 · Updated last year
- This repository is the official implementation of Improving Object-centric Learning With Query Optimization ⭐51 · Updated 2 years ago
- [CVPR 2022 (oral)] Bongard-HOI for benchmarking few-shot visual reasoning ⭐72 · Updated 2 years ago
- HT-Step is a large-scale article grounding dataset of temporal step annotations on how-to videos ⭐22 · Updated last year
- Annotations for the public release of the EPIC-KITCHENS-100 dataset ⭐158 · Updated 3 years ago
- ⭐11 · Updated 2 years ago
- Code for NeurIPS 2022 Datasets and Benchmarks paper - EgoTaskQA: Understanding Human Tasks in Egocentric Videos. ⭐35 · Updated 2 years ago
- ⭐13 · Updated 2 years ago
- Self-supervised algorithm for learning representations from ego-centric video data. Code is tested on EPIC-Kitchens-100 and Ego4D in PyTo… ⭐12 · Updated 3 years ago
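For the TCC entry above, here is a minimal sketch of the cycle-back regression form of the temporal cycle-consistency loss in PyTorch. It is not the linked repository's code; the function name `tcc_cycle_back_regression`, the plain-MSE regression variant, and the `temperature` value are illustrative assumptions.

```python
# Minimal TCC (temporal cycle-consistency) cycle-back regression sketch.
# Assumes per-frame embeddings u (N, D) and v (M, D) from two videos of
# the same action; names and the plain-MSE variant are illustrative.
import torch
import torch.nn.functional as F


def tcc_cycle_back_regression(u: torch.Tensor, v: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    # Soft nearest neighbour of every frame of u inside v.
    dists_uv = torch.cdist(u, v) ** 2                  # (N, M) squared distances
    alpha = F.softmax(-dists_uv / temperature, dim=1)  # (N, M) soft assignments
    v_tilde = alpha @ v                                # (N, D) soft neighbours in v

    # Cycle back: soft nearest neighbour of each soft neighbour inside u.
    dists_vu = torch.cdist(v_tilde, u) ** 2            # (N, N)
    beta = F.softmax(-dists_vu / temperature, dim=1)   # (N, N)

    # The expected index of the cycled-back frame should match the start index.
    idx = torch.arange(u.shape[0], dtype=u.dtype, device=u.device)
    mu = beta @ idx                                    # (N,) expected frame indices
    return F.mse_loss(mu, idx)


if __name__ == "__main__":
    # Example usage with random embeddings standing in for two encoded videos.
    u = torch.randn(40, 128)
    v = torch.randn(32, 128)
    print(tcc_cycle_back_regression(u, v))
```

Minimizing this loss over a frame encoder encourages frames that align temporally across the two videos to land near each other in embedding space, which is the property the paper exploits for cross-video alignment.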