steve-zeyu-zhang / MotionAnything
🔥 Motion Anything: Any to Motion Generation
⭐225 · Updated this week
Alternatives and similar repositories for MotionAnything
Users interested in MotionAnything are comparing it to the libraries listed below
- The code for the CVPR 2024 paper Lodge: A Coarse to Fine Diffusion Network for Long Dance Generation Guided by the Characteristic Dance Primit… ⭐157 · Updated 8 months ago
- FineDance: A Fine-grained Choreography Dataset for 3D Full Body Dance Generation (ICCV 2023) ⭐151 · Updated last year
- The official implementation of "MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing" ⭐202 · Updated 5 months ago
- [CVPR 2025] AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction ⭐447 · Updated 7 months ago
- A work list of recent human video generation methods. This repository focuses on half/full-body human video generation methods: NeRF, Gau… ⭐239 · Updated last year
- [ICCV 2025] The official implementation of MotionLab ⭐165 · Updated last month
- [arXiv 2024] MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms ⭐13 · Updated 10 months ago
- TL_Control: Trajectory and Language Control for Human Motion Synthesis ⭐78 · Updated 8 months ago
- Official project page of MTVCrafter, a new paradigm for animating arbitrary characters with 4D motion tokens ⭐264 · Updated 2 months ago
- [TIP 2025] From Parts to Whole: A Unified Reference Framework for Controllable Human Image Generation ⭐195 · Updated last month
- [NeurIPS 2024] Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials ⭐185 · Updated last year
- [ICLR 2025] Generalizable Human Gaussians from Single View Image ⭐87 · Updated last month
- Large Motion Model for Unified Multi-Modal Motion Generation ⭐298 · Updated 10 months ago
- [ECCV 2024] DreamScene: 3D Gaussian-based Text-to-3D Scene Generation via Formation Pattern Sampling ⭐220 · Updated 3 months ago
- [TPAMI 2025] Official implementation of the paper "DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion" ⭐148 · Updated 3 months ago
- [ICCV 2025] MotionStreamer: Streaming Motion Generation via Diffusion-based Autoregressive Model in Causal Latent Space ⭐190 · Updated last week
- ⭐69 · Updated 4 months ago
- [T-PAMI] Official code for "SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape Estimation" ⭐208 · Updated last week
- A system for generating diverse, physically compliant 3D human motions across multiple motion types, guided by plot contexts to streamlin… ⭐69 · Updated 8 months ago
- ⭐243 · Updated 2 months ago
- OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024 ⭐368 · Updated last year
- [ICCV 2023] PyTorch Implementation of "Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video" ⭐148 · Updated last year
- Official repository for the paper "MagicMan: Generative Novel View Synthesis of Humans with 3D-Aware Diffusion and Iterative Refinement" ⭐312 · Updated last year
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ⭐111 · Updated 9 months ago
- Code for the ICLR 2024 paper "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment" ⭐104 · Updated last year
- ⭐91 · Updated 6 months ago
- ⭐84 · Updated 6 months ago
- [CVPR 2024] Official code for "AiOS: All-in-One-Stage Expressive Human Pose and Shape Estimation" ⭐323 · Updated 6 months ago
- Official repository for "MaskControl: Spatio-Temporal Control for Masked Motion Synthesis" ICCV 2025 (Oral)β125Updated 2 weeks ago
- [NeurIPS 2025 D&Bπ₯] OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generationβ164Updated last week