leohku / faceformer-emo
FaceFormer Emo: Speech-Driven 3D Facial Animation with Emotion Embedding
☆26 · Updated last year
Alternatives and similar repositories for faceformer-emo
Users interested in faceformer-emo are comparing it to the libraries listed below.
- A novel approach for personalized speech-driven 3D facial animation · ☆51 · Updated last year
- ☆32 · Updated last year
- ☆16 · Updated 9 months ago
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022) · ☆76 · Updated 2 years ago
- ☆42 · Updated last week
- ☆23 · Updated last year
- SyncTalkFace: Talking Face Generation for Precise Lip-syncing via Audio-Lip Memory · ☆33 · Updated 2 years ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation · ☆126 · Updated last year
- ☆101 · Updated last year
- NeurIPS 2022 · ☆38 · Updated 2 years ago
- ☆27 · Updated 2 months ago
- [AAAI 2024] Style2Talker - Official PyTorch Implementation · ☆43 · Updated last year
- Code for "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces" (ACM MM 2023) · ☆30 · Updated last year
- ☆73 · Updated 2 years ago
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer · ☆159 · Updated last year
- This is the official inference code of PD-FGC · ☆87 · Updated last year
- ☆84 · Updated 4 months ago
- [CVPR 2023] High-Fidelity and Freely Controllable Talking Head Video Generation · ☆3 · Updated 4 months ago
- [ECCV 2024 official] KMTalk: Speech-Driven 3D Facial Animation with Key Motion Embedding · ☆32 · Updated 10 months ago
- [INTERSPEECH'24] Official repository for "MultiTalk: Enhancing 3D Talking Head Generation Across Languages with Multilingual Video Datase…" · ☆103 · Updated 7 months ago
- Something about Talking Head Generation · ☆32 · Updated last year
- Source code for: Expressive Speech-driven Facial Animation with controllable emotions · ☆38 · Updated last year
- KAN-based Fusion of Dual Domain for Audio-Driven Landmarks Generation. The model can help you generate a sequence of facial landmarks f… · ☆28 · Updated 3 months ago
- Drive your metahuman to speak within 1 second. · ☆6 · Updated 2 months ago
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) · ☆90 · Updated last year
- ☆45 · Updated last year
- This is a PyTorch implementation of the following paper: AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars, NeurIP… · ☆73 · Updated last year
- [ICIAP 2023] Learning Landmarks Motion from Speech for Speaker-Agnostic 3D Talking Heads Generation · ☆62 · Updated last year
- Official implementation of SingingHead: A Large-scale 4D Dataset for Singing Head Animation (TMM 25) · ☆57 · Updated 2 months ago
- ☆162 · Updated last year