Official PyTorch implementation of VersatileMotion, based on MMEngine
☆68 · Updated Dec 20, 2025
Alternatives and similar repositories for VersatileMotion
Users interested in VersatileMotion are comparing it to the repositories listed below.
- ☆94 · Updated Apr 1, 2025
- Official implementation of "PersonaBooth: Personalized Text-to-Motion Generation" (CVPR 2025) · ☆33 · Updated Sep 27, 2025
- ☆49 · Updated May 20, 2024
- M3GPT: An advanced multimodal, multitask framework for motion comprehension and generation · ☆19 · Updated Dec 12, 2024
- Official implementation of "Exploring Vision Transformers for 3D Human Motion-Language Models with Motion Patches" (CVPR 2024) · ☆30 · Updated Jul 4, 2024
- [CVPR 2025] MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities · ☆32 · Updated Apr 6, 2025
- [ECCV 2024] Controllable Motion Generation through Language Guided Pose Code Editing · ☆50 · Updated Dec 20, 2024
- [CVPR 2025] Official implementation of "SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity Prediction" · ☆47 · Updated Dec 11, 2025
- [SIGGRAPH Asia 2024] MotionFix: Text-Driven 3D Human Motion Editing · ☆155 · Updated Nov 5, 2025
- CoMA: Compositional Human Motion Generation with Multi-modal Agents · ☆14 · Updated Jul 31, 2025
- PyTorch implementation of "Unimotion: Unifying 3D Human Motion Synthesis and Understanding" · ☆93 · Updated Apr 13, 2025
- [CVPR 2025] HumanMM: Global Human Motion Recovery from Multi-shot Videos · ☆118 · Updated Mar 20, 2025
- [ICCV 2025 Highlight] Official code release of "DisCoRD: Discrete Tokens to Continuous Motion via Rectified Flow Decoding" · ☆50 · Updated Sep 27, 2025
- Official implementation of "MCM: Multi-condition Motion Synthesis Framework" · ☆20 · Updated Nov 26, 2024
- [AAAI 2025] Official repository for "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" · ☆124 · Updated Jan 18, 2025
- The modified 272-dimensional motion representation processing script · ☆217 · Updated Oct 24, 2025
- [ICCV 2025] The official implementation of MotionLab · ☆187 · Updated Dec 3, 2025
- Official implementation of "AMD: Autoregressive Motion Diffusion" · ☆20 · Updated Nov 10, 2024
- Humos paper repository · ☆27 · Updated Sep 6, 2025
- ☆21 · Updated Apr 17, 2024
- [ECCV 2024] Official repository for "BAMM: Bidirectional Autoregressive Motion Model" · ☆56 · Updated Oct 4, 2025
- Controllable Group Choreography using Contrastive Diffusion · ☆18 · Updated Nov 25, 2025
- Official implementation of "FreeMotion: A Unified Framework for Number-free Text-to-Motion Synthesis" · ☆44 · Updated Oct 15, 2024
- The official PyTorch implementation of "BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation" · ☆52 · Updated Oct 22, 2024
- A framework for text-based retrieval-augmented motion generation · ☆24 · Updated Feb 18, 2025
- Code release for "DartControl: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control" · ☆212 · Updated Jun 14, 2025
- [ICLR 2025] PyTorch implementation of "Aligning Motion Generation with Human Perceptions" · ☆89 · Updated Apr 27, 2025
- [AAAI 2025] Light-T2M: A Lightweight and Fast Model for Text-to-Motion Generation · ☆40 · Updated Mar 10, 2025
- [ICCV 2023] TM2D: Bimodality Driven 3D Dance Generation via Music-Text Integration · ☆102 · Updated Mar 4, 2024
- [AAAI 2025] The official repository of UniMuMo · ☆129 · Updated Sep 14, 2025
- ☆16 · Updated May 1, 2025
- [CVPR 2024] POPDG: Popular 3D Dance Generation with PopDanceSet · ☆58 · Updated Jun 18, 2025
- ☆112 · Updated Jun 2, 2025
- The official implementation of "AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward" · ☆18 · Updated Mar 25, 2025
- A way to fuse MANO parameters into SMPLX · ☆52 · Updated Mar 19, 2025
- [ICLR 2025] Ready-to-React: Online Reaction Policy for Two-Character Interaction Generation