YoungSeng / Speech-driven-expressions
Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022)
☆78 · Updated 3 years ago
Alternatives and similar repositories for Speech-driven-expressions
Users who are interested in Speech-driven-expressions are comparing it to the libraries listed below.
- This is the official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking …☆140 · Updated last year
- PyTorch implementation of "Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints" (AAA…☆98 · Updated 3 years ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆131 · Updated 4 months ago
- ☆173 · Updated last year
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆143 · Updated last year
- ☆71 · Updated 2 years ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆122 · Updated last year
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆162 · Updated last year
- ☆102 · Updated 2 weeks ago
- ☆95 · Updated 4 years ago
- ☆47 · Updated 2 years ago
- [CVPR 2022] Code for "Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation" ☆143 · Updated 2 years ago
- This is the official inference code of PD-FGC ☆97 · Updated 2 years ago
- Official Pytorch Implementation of SPECTRE: Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos ☆287 · Updated 7 months ago
- Project of "Adaptive Affine Transformation: A Simple and Effective Operation for Spatial Misaligned Image Generation" ☆63 · Updated 2 years ago
- A novel approach for personalized speech-driven 3D facial animation ☆53 · Updated last year
- QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation (CVPR 2023 Highlight) ☆91 · Updated 2 years ago
- Implementation of the deformation transfer paper and its application in generating all the ARKit facial blend shapes for any 3D face ☆66 · Updated 3 years ago
- The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆105 · Updated last year
- ☆33 · Updated 2 months ago
- wav2lip in a Vector Quantized (VQ) space ☆28 · Updated 2 years ago
- Code for "Audio-Driven Co-Speech Gesture Video Generation" (NeurIPS 2022, Spotlight Presentation). ☆87 · Updated 2 years ago
- Code for paper 'EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model' ☆198 · Updated 2 years ago
- [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models ☆228 · Updated last year
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆398 · Updated last year
- PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the facial emotions of actors in videos based on emotion labels …☆159 · Updated 3 years ago
- ☆195 · Updated last year
- A Python library to fit 3D morphable models to images of faces and capture facial performance over time with no markers or a special mo…☆76 · Updated last year
- A generative model of 3D facial details that can perform expression, age and wrinkle line editing (ECCV 2022). ☆86 · Updated last year
- The Official PyTorch Implementation for Face2Face^ρ (ECCV 2022) ☆227 · Updated 2 years ago