fpv-iplab / HOI-Synth
Are Synthetic Data Useful for Egocentric Hand-Object Interaction Detection? [ECCV 2024]
☆13 · Updated 9 months ago
Alternatives and similar repositories for HOI-Synth
Users interested in HOI-Synth are comparing it to the repositories listed below.
- One-Shot Open Affordance Learning with Foundation Models (CVPR 2024) ☆45 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆40 · Updated last month
- LOCATE: Localize and Transfer Object Parts for Weakly Supervised Affordance Grounding (CVPR 2023) ☆44 · Updated 2 years ago
- Implementation of Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins. [RSS 2025] ☆41 · Updated last week
- ☆17 · Updated 2 weeks ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆58 · Updated 5 months ago
- ☆18 · Updated last year
- Dreamitate: Real-World Visuomotor Policy Learning via Video Generation (CoRL 2024) ☆52 · Updated 4 months ago
- [CVPR 2022] Joint hand motion and interaction hotspots prediction from egocentric videos ☆71 · Updated last year
- Official repository of "TACO: Benchmarking Generalizable Bimanual Tool-ACtion-Object Understanding". ☆56 · Updated 6 months ago
- [ECCV'24] 3D Reconstruction of Objects in Hands without Real World 3D Supervision ☆16 · Updated 8 months ago
- An incomplete implementation of AffordanceLLM. ☆14 · Updated last year
- ☆37 · Updated 2 months ago
- CVPR 2025 ☆35 · Updated 6 months ago
- ☆30 · Updated 9 months ago
- [CoRL 2025] UniSkill: Imitating Human Videos via Cross-Embodiment Skill Representations ☆67 · Updated 2 months ago
- ☆81 · Updated last year
- (CVPR 2025) A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning ☆19 · Updated 7 months ago
- Repository for "General Flow as Foundation Affordance for Scalable Robot Learning" ☆63 · Updated 10 months ago
- Vision-Language-Action Optimization with Trajectory Ensemble Voting ☆21 · Updated last week
- ☆37 · Updated last year
- [WIP] Code for LangToMo ☆20 · Updated 4 months ago
- IKEA Manuals at Work: 4D Grounding of Assembly Instructions on Internet Videos ☆50 · Updated 6 months ago
- Official code release for "The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation" ☆28 · Updated 2 months ago
- ☆28 · Updated last week
- Official implementation of "SUGAR: Pre-training 3D Visual Representations for Robotics" (CVPR'24). ☆44 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆74 · Updated 10 months ago
- [CoRL 2025] Robot Learning from Any Images ☆33 · Updated 2 weeks ago
- MAPLE infuses dexterous manipulation priors from egocentric videos into vision encoders, making their features well-suited for downstream… ☆28 · Updated 6 months ago
- ☆11 · Updated 2 years ago