sato-team / Stable-Text-to-Motion-Framework
SATO: Stable Text-to-Motion Framework
☆118 · Updated last year
Alternatives and similar repositories for Stable-Text-to-Motion-Framework
Users interested in Stable-Text-to-Motion-Framework are comparing it to the repositories listed below.
- Large Motion Model for Unified Multi-Modal Motion Generation ☆300 · Updated last year
- ☆37 · Updated last year
- [NeurIPS 2023] InsActor: Instruction-driven Physics-based Characters ☆135 · Updated last year
- [AAAI 2025] The official repository of UniMuMo ☆127 · Updated 4 months ago
- [arXiv 2024] MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms ☆14 · Updated last year
- [NeurIPS 2023] FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing ☆133 · Updated 2 years ago
- Code for ICLR 2024 paper "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment" ☆112 · Updated last year
- ReMoDiffuse: Retrieval-Augmented Motion Diffusion Model ☆371 · Updated last year
- [ICML 2024] 🍅HumanTOMATO: Text-aligned Whole-body Motion Generation ☆362 · Updated last year
- Official PyTorch implementation of Action-GPT ☆118 · Updated 2 years ago
- [CVPRW 2024] Official implementation of "in2IN: Leveraging individual Information to Generate Human INteractions" ☆57 · Updated last year
- ☆66 · Updated 5 months ago
- Official implementation of "Self-Correcting Self-Consuming Loops for Generative Model Training" (ICML 2024) ☆34 · Updated last year
- PyTorch implementation of MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling, from Alibaba Intelligence Group ☆136 · Updated last year
- The Quest for Generalizable Motion Generation: Data, Model, and Evaluation ☆70 · Updated last month
- [arXiv 2024] MotionLLM: Understanding Human Behaviors from Human Motions and Videos