lsfhuihuiff / Dance-to-music_Siggraph_Asia_2024
The official code for "Dance-to-Music Generation with Encoder-based Textual Inversion"
☆21 · Updated 3 weeks ago
Alternatives and similar repositories for Dance-to-music_Siggraph_Asia_2024
Users interested in Dance-to-music_Siggraph_Asia_2024 are comparing it to the repositories listed below
- Motion to Dance Music Generation using Latent Diffusion Model ☆18 · Updated last year
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆25 · Updated last year
- ☆35 · Updated last year
- [AAAI 2023 Summer Symposium, Best Paper Award] Taming Diffusion Models for Music-driven Conducting Motion Generation ☆26 · Updated last year
- Official source code for the paper "EmoDubber: Towards High Quality and Emotion Controllable Movie Dubbing" ☆16 · Updated 4 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆50 · Updated last year
- ☆14 · Updated last year
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆55 · Updated 10 months ago
- ☆16 · Updated 11 months ago
- ☆29 · Updated 3 weeks ago
- Official implementation of "MoST: Motion Style Transformer between Diverse Action Contents" ☆32 · Updated 10 months ago
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆49 · Updated 5 months ago
- The official PyTorch code for TalkingStyle: Personalized Speech-Driven Facial Animation with Style Preservation ☆26 · Updated 10 months ago
- Official implementation of MCM: Multi-condition Motion Synthesis Framework ☆20 · Updated 5 months ago
- ☆17 · Updated 8 months ago
- [NeurIPS 2024] The official code of MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models ☆40 · Updated 2 months ago
- ☆50 · Updated 7 months ago
- ☆106 · Updated 10 months ago
- ☆42 · Updated 2 months ago
- [CVPR 2024] Arbitrary Motion Style Transfer with Multi-condition Motion Latent Diffusion Model ☆66 · Updated 6 months ago
- Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness (ICASSP 202… ☆66 · Updated last year
- Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACM MM 2024) ☆61 · Updated 3 months ago
- [ECCV 2024] ScanTalk: 3D Talking Heads from Unregistered Scans ☆43 · Updated last month
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆86 · Updated 4 months ago
- FineMotion: A Dataset and Benchmark with both Spatial and Temporal Annotation for Fine-grained Motion Generation and Editing ☆13 · Updated 2 months ago
- ☆11 · Updated 2 months ago
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆54 · Updated 3 weeks ago
- UnifiedGesture: A Unified Gesture Synthesis Model for Multiple Skeletons (ACM MM 2023 Oral) ☆51 · Updated last year
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆28 · Updated last month
- Code for the CVPR 2024 paper "ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis" ☆31 · Updated 3 weeks ago