fpv-iplab / HOI-Synth
Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? [ECCV 2024]
☆13 · Updated 8 months ago
Alternatives and similar repositories for HOI-Synth
Users that are interested in HOI-Synth are comparing it to the libraries listed below
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆43 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆35 · Updated this week
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆41 · Updated 2 years ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆70 · Updated last year
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆52 · Updated 4 months ago
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆51 · Updated 3 months ago
- Official code release for "The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation" ☆25 · Updated 3 weeks ago
- (Incomplete version) An implementation of AffordanceLLM ☆14 · Updated 11 months ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- [ECCV'24] 3D Reconstruction of Objects in Hands without Real World 3D Supervision ☆16 · Updated 7 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆62 · Updated 8 months ago
- ☆18 · Updated last year
- [CoRL 2025] UniSkill: Imitating Human Videos via Cross-Embodiment Skill Representations ☆60 · Updated 3 weeks ago
- Official repository of "TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding" ☆55 · Updated 4 months ago
- ☆11 · Updated 2 years ago
- ☆80 · Updated last year
- [ICCV 2023] Understanding 3D Object Interaction from a Single Image ☆46 · Updated last year
- [WIP] Code for LangToMo ☆16 · Updated 2 months ago
- [IROS 2023] Open-Vocabulary Affordance Detection in 3D Point Clouds ☆75 · Updated last year
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24).☆42Updated 2 months ago
- ☆45Updated last year
- Code implementation of the paper 'FIction: 4D Future Interaction Prediction from Video'☆14Updated 5 months ago
- Official implementation of "Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins" (RSS 2025) ☆31 · Updated last month
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆17 · Updated 6 months ago
- Official PyTorch Implementation of Learning Affordance Grounding from Exocentric Images, CVPR 2022 ☆68 · Updated 10 months ago
- ☆30 · Updated last year
- [NeurIPS 2024 D&B] Point Cloud Matters: Rethinking the Impact of Different Observation Spaces on Robot Learning ☆86 · Updated 11 months ago
- [ICCV 2025] AnyBimanual: Transferring Unimanual Policy for General Bimanual Manipulation ☆88 · Updated 2 months ago
- ☆36 · Updated 2 months ago
- Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation" ☆94 · Updated last year