HiWilliamWWL / Learn-to-Predict-How-Humans-Manipulate-Large-Sized-Objects-From-Interactive-Motions-objects
This is the repo for the paper "Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions".
☆22 · Updated last year
Alternatives and similar repositories for Learn-to-Predict-How-Humans-Manipulate-Large-Sized-Objects-From-Interactive-Motions-objects
Users interested in Learn-to-Predict-How-Humans-Manipulate-Large-Sized-Objects-From-Interactive-Motions-objects are comparing it to the repositories listed below.
- [ECCV 2024] InterFusion: Text-Driven Generation of 3D Human-Object Interaction ☆54 · Updated 10 months ago
- NeuroGF: A Neural Representation for Fast Geodesic Distance and Path Queries ☆48 · Updated this week
- Diffusion-Driven Self-Supervised Network for Multi-Object 3D Shape Reconstruction and Categorical 6-DoF Pose Estimation ☆27 · Updated last year
- Disentangled Implicit Content and Rhythm Learning for Diverse Co-Speech Gestures Synthesis [ACMMM 2022] ☆26 · Updated 4 months ago
- ☆47 · Updated last year
- ☆62 · Updated 2 years ago
- Official implementation of "Generating images with 3D annotations using diffusion models" ☆46 · Updated last year
- TL_Control: Trajectory and Language Control for Human Motion Synthesis