InternRobotics / EgoHOD
Official implementation of EgoHOD at ICLR 2025; 14 EgoVis Challenge Winners in CVPR 2024
☆26 · Updated this week
Alternatives and similar repositories for EgoHOD
Users interested in EgoHOD are comparing it to the repositories listed below.
- ☆30 · Updated 6 months ago
- (ECCV 2024) Official repository of the paper "EgoExo-Fitness: Towards Egocentric and Exocentric Full-Body Action Understanding" ☆31 · Updated 7 months ago
- ☆51 · Updated 7 months ago
- Official code release for "The Invisible EgoHand: 3D Hand Forecasting through EgoBody Pose Estimation" ☆29 · Updated 3 months ago
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆122 · Updated 4 months ago
- A curated list of Egocentric Action Understanding resources ☆35 · Updated 3 months ago
- [ICLR 2024] Seer: Language Instructed Video Prediction with Latent Diffusion Models ☆33 · Updated last year
- Official code for MotionBench (CVPR 2025) ☆59 · Updated 8 months ago
- [CVPR 2024] Narrative Action Evaluation with Prompt-Guided Multimodal Interaction ☆39 · Updated last year
- Code implementation of the paper "FIction: 4D Future Interaction Prediction from Video" ☆16 · Updated 8 months ago
- ☆100 · Updated 3 weeks ago
- Code implementation for the paper "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision" ☆29 · Updated last year
- [ECCV 2024, Oral, Best Paper Finalist] Official implementation of the paper "LEGO: Learning EGOcentric Action Frame Generation… ☆39 · Updated 9 months ago
- Affordance Grounding from Demonstration Video to Target Image (CVPR 2023) ☆44 · Updated last year
- Bidirectional Mapping between Action Physical-Semantic Space ☆32 · Updated 2 months ago
- ☆21 · Updated last year
- [CVPR 2025] LION-FS: Fast & Slow Video-Language Thinker as Online Video Assistant ☆21 · Updated 5 months ago
- Vinci: A Real-time Embodied Smart Assistant based on Egocentric Vision-Language Model ☆77 · Updated 10 months ago
- https://coshand.cs.columbia.edu/ ☆17 · Updated last year
- Official implementation of the CVPR 2024 highlight paper "Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Sce… ☆166 · Updated last year
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆69 · Updated 4 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆88 · Updated 5 months ago
- ☆26 · Updated last year
- Code and data release for the paper "Learning Object State Changes in Videos: An Open-World Perspective" (CVPR 2024) ☆35 · Updated last year
- Codes for "Affordance Diffusion: Synthesizing Hand-Object Interactions"