fpv-iplab / HOI-Synth
Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? [ECCV 2024]
☆13 · Updated last week
Alternatives and similar repositories for HOI-Synth
Users that are interested in HOI-Synth are comparing it to the libraries listed below
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆41 · Updated 4 months ago
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆46 · Updated 2 years ago
- Official code release for "The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation" ☆29 · Updated 5 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- [ECCV'24] 3D Reconstruction of Objects in Hands without Real World 3D Supervision ☆16 · Updated 11 months ago
- ☆70 · Updated 4 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆59 · Updated 8 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆58 · Updated 7 months ago
- Official repository of "TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding" ☆61 · Updated last month
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- [WIP] Code for LangToMo ☆20 · Updated 6 months ago
- Code & data for "RoboGround: Robotic Manipulation with Grounded Vision-Language Priors" (CVPR 2025)