steve-zeyu-zhang / MotionAnything
🔥 Motion Anything: Any to Motion Generation
☆222 · Updated 5 months ago
Alternatives and similar repositories for MotionAnything
Users interested in MotionAnything are comparing it to the repositories listed below
- The code of the CVPR2024 paper Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primit… ☆154 · Updated 7 months ago
- FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation. (ICCV2023) ☆149 · Updated last year
- A work list of recent human video generation methods. This repository focuses on half/full-body human video generation methods. The NeRF, Gau… ☆234 · Updated 11 months ago
- The official implementation of "MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing". ☆200 · Updated 4 months ago
- [CVPR2025] AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction ☆439 · Updated 6 months ago
- TL_Control: Trajectory and Language Control for Human Motion Synthesis ☆78 · Updated 7 months ago
- [TIP 2025] From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation ☆195 · Updated 5 months ago
- Official project page of MTVCrafter, a new paradigm for animating arbitrary characters with 4D motion tokens. ☆253 · Updated 3 weeks ago
- [ICCV 2025] The official implementation of MotionLab ☆152 · Updated 2 months ago
- [NeurIPS 2024] Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials ☆186 · Updated last year
- [ICCV 2025 ⭐highlight⭐] Implementation of VMem: Consistent Interactive Video Scene Generation with Surfel-Indexed View Memory ☆362 · Updated last month
- ICLR 2025: Generalizable Human Gaussians from Single View Image ☆86 · Updated last week
- OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation ☆160 · Updated last week
- [CVPR'25 Highlight] You See it, You Got it: Learning 3D Creation on Pose-Free Videos at Scale ☆685 · Updated 5 months ago
- [arXiv 2024] MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms ☆12 · Updated 9 months ago
- [ICCV-2025] Official implementation of Bootstrap3D: Improving Multi-view Diffusion Model with Synthetic Data ☆91 · Updated last month
- Large Motion Model for Unified Multi-Modal Motion Generation ☆293 · Updated 8 months ago
- PyTorch Implementation of FLATTEN: optical FLow-guided ATTENtion for consistent text-to-video editing (ICLR 2024) ☆206 · Updated last year
- DreamCinema: Cinematic Transfer with Free Camera and 3D Character ☆96 · Updated 3 months ago
- [ICCV 2023] PyTorch Implementation of "Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video" ☆148 · Updated last year
- The official implementation of ACM Multimedia 2024 paper "PlacidDreamer: Advancing Harmony in Text-to-3D Generation". ☆106 · Updated last year
- Awesome Controllable Video Generation with Diffusion Models ☆57 · Updated last month
- A system for generating diverse, physically compliant 3D human motions across multiple motion types, guided by plot contexts to streamlin… ☆66 · Updated 7 months ago
- Official repository for paper "MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement" ☆313 · Updated last year
- Visualization of DiT self-attention features ☆218 · Updated last year
- ☆166 · Updated last year
- [TPAMI 2025] Official implementation of the paper "DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion". ☆142 · Updated 2 months ago
- A family of versatile and state-of-the-art video tokenizers. ☆412 · Updated 2 weeks ago
- [ECCV 2024] Make-Your-3D: Fast and Consistent Subject-Driven 3D Content Generation ☆127 · Updated last year
- [ICLR'25] 3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation ☆357 · Updated 2 months ago