Sid2697 / HOI-Ref
Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision"
☆26 · Updated last year
Alternatives and similar repositories for HOI-Ref:
Users interested in HOI-Ref are comparing it to the repositories listed below.
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆32 · Updated 7 months ago
- Action Scene Graphs for Long-Form Understanding of Egocentric Videos (CVPR 2024) ☆38 · Updated last week
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆43 · Updated 8 months ago
- Code and data release for the paper "Learning Fine-grained View-Invariant Representations from Unpaired Ego-Exo Videos via Temporal Align…" ☆17 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] This is the official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation …" ☆37 · Updated last month
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆62 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆37 · Updated last year
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆58 · Updated 6 months ago
- [CVPR 2024] Data and benchmark code for the EgoExoLearn dataset ☆56 · Updated 7 months ago
- ☆40 · Updated 3 weeks ago
- ☆25 · Updated 2 years ago
- FleVRS: Towards Flexible Visual Relationship Segmentation, NeurIPS 2024 ☆20 · Updated 4 months ago
- A repo for processing raw hand-object detections into releasable pickles, plus a library for using them ☆37 · Updated 5 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆55 · Updated 5 months ago
- Bidirectional Mapping between Action Physical-Semantic Space ☆31 · Updated 7 months ago
- [CVPR 2023] Detecting Human-Object Contact in Images ☆54 · Updated last year
- Implementation of the paper "Helping Hands: An Object-Aware Ego-Centric Video Recognition Model" ☆33 · Updated last year
- Code for the NeurIPS 2022 Datasets and Benchmarks paper "EgoTaskQA: Understanding Human Tasks in Egocentric Videos" ☆32 · Updated 2 years ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆71 · Updated 2 months ago
- [TCSVT 2024] Temporally Consistent Referring Video Object Segmentation with Hybrid Memory ☆16 · Updated last week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆102 · Updated 5 months ago
- Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?" ☆21 · Updated 6 months ago
- [ECCV'24] 3D Reconstruction of Objects in Hands without Real World 3D Supervision ☆11 · Updated 2 months ago
- Team Doggeee's solution to the Ego4D LTA challenge at CVPRW'23 ☆12 · Updated last year
- ☆36 · Updated 11 months ago
- ☆25 · Updated last year
- [ECCV 2024 Oral] ActionVOS: Actions as Prompts for Video Object Segmentation ☆31 · Updated 4 months ago
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆28 · Updated 3 months ago
- Data release for Step Differences in Instructional Video (CVPR24) ☆13 · Updated 10 months ago
- Code for the paper "GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos" published at CVPR 2024 ☆51 · Updated last year