Official code for the paper "Monkey See, Monkey Do: Harnessing Self-attention in Motion Diffusion for Zero-shot Motion Transfer"
☆56 · Updated Mar 2, 2025
Alternatives and similar repositories for MoMo
Users interested in MoMo are comparing it to the repositories listed below.
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model (☆81, updated Oct 30, 2024)
- DNO: Optimizing Diffusion Noise Can Serve As Universal Motion Priors (☆165, updated Jan 31, 2026)
- ☆23, updated Aug 5, 2025
- Official implementation of "PMP: Learning to Physically Interact with Environments using Part-wise Motion Priors" (SIGGRAPH 2023) (☆41, updated Sep 20, 2024)
- [CVPR 2025] Official implementation of "MixerMDM: Learnable Composition of Human Motion Diffusion Models" (☆26, updated Sep 8, 2025)
- ☆106, updated Sep 3, 2025
- [NeurIPS 2024] Official implementation of InterControl (☆83, updated Feb 20, 2025)
- Official implementation of the CVPR 2024 highlight paper "Move as You Say, Interact as You Can: Language-guided Human Motion Generation with Sce…" (☆175, updated Sep 14, 2024)
- Code for ReMoS: 3D-Motion Conditioned Reaction Synthesis for Two-person Interactions (ECCV 2024) (☆36, updated Mar 4, 2025)
- Official repository for "Taming Diffusion Probabilistic Models for Character Control" (SIGGRAPH 2024) (☆294, updated Sep 25, 2025)
- OmniControl: Control Any Joint at Any Time for Human Motion Generation (ICLR 2024) (☆415, updated Jun 14, 2024)
- Controllable Group Choreography using Contrastive Diffusion (SIGGRAPH Asia 2023) (☆18, updated Nov 25, 2025)
- [ECCV 2024] Official PyTorch implementation of the paper "ParCo: Part-Coordinating Text-to-Motion Synthesis": http://arxiv.org/abs/2403.18512 (☆73, updated Sep 30, 2025)
- ☆109, updated Oct 29, 2024
- [CVPRW 2024] Official implementation of "in2IN: Leveraging individual Information to Generate Human INteractions" (☆57, updated Jul 29, 2024)
- MotionFix: Text-Driven 3D Human Motion Editing (SIGGRAPH Asia 2024) (☆155, updated Feb 27, 2026)
- Interactive Character Control with Auto-Regressive Motion Diffusion Models (☆194, updated Oct 26, 2024)
- The official PyTorch implementation of "BAD: Bidirectional Auto-regressive Diffusion for Text-to-Motion Generation" (☆52, updated Oct 22, 2024)
- The official implementation of "Flexible Motion In-betweening with Diffusion Models" (SIGGRAPH 2024) (☆252, updated Sep 17, 2024)
- The official PyTorch implementation of the ECCV 2024 paper "Length-Aware Motion Synthesis …" (☆20, updated Dec 15, 2024)
- ☆55, updated Aug 1, 2024
- Official repository for "BAMM: Bidirectional Autoregressive Motion Model" (ECCV 2024) (☆56, updated Oct 4, 2025)
- Official repository for "MMM: Generative Masked Motion Model" (CVPR 2024 Highlight) (☆133, updated Jul 5, 2025)
- [arXiv 2024] MotionCLR: Motion Generation and Training-free Editing via Understanding Attention Mechanisms (☆14, updated Dec 1, 2024)
- [CVPR 2024] Generating Human Motion in 3D Scenes from Text Descriptions (☆100, updated Nov 2, 2024)
- ☆12, updated Jul 27, 2024
- ☆21, updated Apr 17, 2024
- Light-T2M: A Lightweight and Fast Model for Text-to-motion Generation (AAAI 2025) (☆42, updated Mar 10, 2025)
- ☆121, updated Jul 8, 2024
- Programmable Motion Generation for Open-Set Motion Control Tasks (CVPR 2024) (☆54, updated Jun 19, 2024)
- ☆32, updated Feb 15, 2023
- ☆56, updated Jun 10, 2024
- [ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffus…" (☆285, updated Mar 26, 2025)
- [AAAI 2025] Official repository for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" (☆125, updated Jan 18, 2025)
- Official implementation of "AMD: Autoregressive Motion Diffusion" (☆20, updated Nov 10, 2024)
- ☆118, updated Jun 2, 2025
- Large Motion Model for Unified Multi-Modal Motion Generation (☆308, updated Dec 23, 2024)
- Official implementation of "MOCHA: Real-Time Motion Characterization via Context Matching" (SIGGRAPH Asia 2023) (☆25, updated Jul 21, 2025)
- ☆96, updated Apr 1, 2025