lsfhuihuiff / Dance-to-music_Siggraph_Asia_2024
The official code for "Dance-to-Music Generation with Encoder-based Textual Inversion"
☆23 · Updated 7 months ago
Alternatives and similar repositories for Dance-to-music_Siggraph_Asia_2024
Users that are interested in Dance-to-music_Siggraph_Asia_2024 are comparing it to the libraries listed below
- Motion to Dance Music Generation using Latent Diffusion Model ☆23 · Updated 2 years ago
- Data and PyTorch implementation of IEEE TMM "EmotionGesture: Audio-Driven Diverse Emotional Co-Speech 3D Gesture Generation" ☆30 · Updated last year
- FineMotion: A Dataset and Benchmark with both Spatial and Temporal Annotation for Fine-grained Motion Generation and Editing ☆17 · Updated 10 months ago
- [NeurIPS 2024] The official code of MambaTalk: Efficient Holistic Gesture Synthesis with Selective State Space Models ☆75 · Updated 3 weeks ago
- [AAAI 2023 Summer Symposium, Best Paper Award] Taming Diffusion Models for Music-driven Conducting Motion Generation ☆26 · Updated last year
- Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR 2024 ☆58 · Updated last year
- [🔥ICCV 2025] SemTalk: Holistic Co-speech Motion Generation with Frame-level Semantic Emphasis ☆37 · Updated last month
- Multi-human Interactive Talking Dataset ☆67 · Updated 5 months ago
- DanceCamAnimator: Keyframe-Based Controllable 3D Dance Camera Synthesis. [ACMMM 2024] Official PyTorch implementation ☆37 · Updated last year
- ☆20 · Updated last year
- The official implementation of the work "AToM: Aligning Text-to-Motion Model at Event-Level with GPT-4Vision Reward" ☆18 · Updated 10 months ago
- [CVPR 2024] POPDG: Popular 3D Dance Generation with PopDanceSet ☆57 · Updated 7 months ago
- ☆130 · Updated last year
- [ICME 2025] DiffusionTalker: Efficient and Compact Speech-Driven 3D Talking Head via Personalizer-Guided Distillation ☆24 · Updated 10 months ago
- ☆13 · Updated 2 years ago
- Official implementation of the paper "Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation" (ACMMM 2024) ☆73 · Updated 8 months ago
- [ECCV 2024 Oral] Audio-Synchronized Visual Animation ☆57 · Updated last year
- ☆17 · Updated 6 months ago
- ☆46 · Updated 7 months ago
- Official implementation of MCM: Multi-condition Motion Synthesis Framework ☆20 · Updated last year
- Official implementation of "MoST: Motion Style Transformer between Diverse Action Contents" ☆37 · Updated last year
- [AAAI 2025] Official repo for the paper "MotionCraft: Crafting Whole-Body Motion with Plug-and-Play Multimodal Controls" ☆123 · Updated last year
- [CVPR 2024] Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion Latent Aligners ☆155 · Updated last year
- ☆59 · Updated last year
- ICCV 2025 ☆61 · Updated 4 months ago
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation (ACM MM 2024) ☆34 · Updated 3 months ago
- [CVPR 2025] MG-MotionLLM: A Unified Framework for Motion Comprehension and Generation across Multiple Granularities ☆31 · Updated 9 months ago
- ☆51 · Updated 2 weeks ago
- ☆21 · Updated last year
- [AAAI 2025] VQTalker: Towards Multilingual Talking Avatars through Facial Motion Tokenization ☆53 · Updated last year