wangxuanx / TalkingStyle
The official PyTorch code for TalkingStyle: Personalized Speech-Driven Facial Animation with Style Preservation
☆31 · Updated last year
Alternatives and similar repositories for TalkingStyle
Users interested in TalkingStyle are comparing it to the repositories listed below.
- ☆177 · Updated last year
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆166 · Updated last year
- ☆100 · Updated 2 months ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆138 · Updated last week
- Official code release of "DEEPTalk: Dynamic Emotion Embedding for Probabilistic Speech-Driven 3D Face Animation" [AAAI2025] ☆62 · Updated 11 months ago
- A novel approach for personalized speech-driven 3D facial animation ☆57 · Updated last year
- This is the official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking … ☆143 · Updated 2 years ago
- The official PyTorch code for Expressive 3D Facial Animation Generation Based on Local-to-global Latent Diffusion ☆32 · Updated last year
- Mapping MediaPipe's 52 blendshapes to FLAME's expression coefficients and poses. ☆52 · Updated 4 months ago
- ☆55 · Updated 7 months ago
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆144 · Updated 2 years ago
- [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation ☆191 · Updated last year
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆236 · Updated last year
- ARTalk generates realistic 3D head motions (lip sync, blinking, expressions, head poses) from audio in ⚡ real-time ⚡. ☆117 · Updated 7 months ago
- DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose Generation via Diffusion Models ☆342 · Updated 10 months ago
- This is the official repository for TalkSHOW: Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆366 · Updated 2 years ago
- DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ ent… ☆203 · Updated 2 months ago
- [CVPR 2024] AMUSE: Emotional Speech-driven 3D Body Animation via Disentangled Latent Diffusion ☆133 · Updated last year
- [ICCV-2023] The official repo for the paper "LivelySpeaker: Towards Semantic-aware Co-Speech Gesture Generation". ☆86 · Updated last year
- This is the codebase for SHOW in Generating Holistic 3D Human Motion from Speech [CVPR2023]. ☆239 · Updated last year
- Source code for: Expressive Speech-driven Facial Animation with controllable emotions ☆41 · Updated 2 years ago
- ☆35 · Updated last week
- This dataset contains 3D reconstructions of the MEAD dataset. ☆19 · Updated 2 years ago
- ☆200 · Updated last year
- ☆132 · Updated last year
- ☆48 · Updated 2 years ago
- [ECCV 2024] Dyadic Interaction Modeling for Social Behavior Generation ☆63 · Updated 9 months ago
- [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation ☆259 · Updated 2 years ago
- ☆218 · Updated 11 months ago
- This is the official inference code of PD-FGC ☆99 · Updated 2 years ago