leventt / surat
Implementation based on "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion".
☆162 · Updated 4 years ago
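For readers unfamiliar with the paper, the following is a minimal, hypothetical PyTorch sketch of the three-stage architecture it describes: a formant-analysis stage over per-frame autocorrelation features, an articulation stage over time conditioned on a learned emotion vector, and a dense output stage that regresses per-vertex offsets. The layer sizes, the 64-frame × 32-coefficient input window, and the vertex count below are illustrative assumptions, not values taken from the surat code.

```python
import torch
import torch.nn as nn


class AudioToFaceNet(nn.Module):
    """Sketch only: formant analysis -> articulation -> per-vertex offsets."""

    def __init__(self, n_vertices=5023, emotion_dim=16):
        super().__init__()
        self.n_vertices = n_vertices
        # Formant analysis: convolve across the 32 autocorrelation
        # coefficients of each of the 64 input audio frames.
        self.formant = nn.Sequential(
            nn.Conv2d(1, 72, (1, 3), stride=(1, 2), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(72, 108, (1, 3), stride=(1, 2), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(108, 162, (1, 3), stride=(1, 2), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(162, 243, (1, 3), stride=(1, 2), padding=(0, 1)), nn.ReLU(),
            nn.Conv2d(243, 256, (1, 2), stride=(1, 2)), nn.ReLU(),
        )
        # Articulation: convolve across time, with the emotion vector
        # appended to every time step as extra channels.
        self.articulation = nn.Sequential(
            nn.Conv2d(256 + emotion_dim, 256, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ReLU(),
            nn.Conv2d(256, 256, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ReLU(),
            nn.Conv2d(256, 256, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ReLU(),
            nn.Conv2d(256, 256, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ReLU(),
            nn.Conv2d(256, 256, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ReLU(),
            nn.Conv2d(256, 256, (2, 1), stride=(2, 1)), nn.ReLU(),
        )
        # Output: regress a 3D offset for every mesh vertex.
        self.output = nn.Sequential(
            nn.Linear(256, 150),
            nn.Linear(150, 3 * n_vertices),
        )

    def forward(self, audio, emotion):
        # audio: (B, 1, 64 frames, 32 autocorrelation coeffs), emotion: (B, emotion_dim)
        x = self.formant(audio)                               # (B, 256, 64, 1)
        e = emotion[:, :, None, None].expand(-1, -1, x.shape[2], 1)
        x = self.articulation(torch.cat([x, e], dim=1))       # (B, 256, 1, 1)
        return self.output(x.flatten(1)).view(-1, self.n_vertices, 3)


# Toy usage: a batch of two audio windows and two emotion vectors.
net = AudioToFaceNet()
offsets = net(torch.randn(2, 1, 64, 32), torch.randn(2, 16))  # (2, 5023, 3)
```

The toy call at the end only checks tensor shapes; training, the learned per-clip emotion states, and the paper's specialized loss terms are omitted.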
Alternatives and similar repositories for surat:
Users interested in surat are comparing it to the repositories listed below.
- Code for the paper "End-to-end Learning for 3D Facial Animation from Speech" ☆71 · Updated 2 years ago
- Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement ☆384 · Updated 2 years ago
- A repository for generating stylized talking 3D faces ☆279 · Updated 3 years ago
- This repository contains the network architectures of NeuralVoicePuppetry. ☆177 · Updated 4 years ago
- Code for the paper 'Audio-Driven Emotional Video Portraits'. ☆307 · Updated 3 years ago
- This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial A… ☆45 · Updated 2 years ago
- This is the official implementation for the IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation". ☆109 · Updated last year
- This repository contains the network architectures of NeuralVoicePuppetry. ☆79 · Updated 4 years ago
- Official PyTorch implementation of SPECTRE: Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos ☆270 · Updated 7 months ago
- Official PyTorch implementation for "APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals", ICASSP'20 ☆64 · Updated 3 years ago
- Convert from the Basel Face Model (BFM) to the FLAME head model ☆430 · Updated 2 years ago
- A Deep Learning Approach for Generalized Speech Animation ☆32 · Updated 7 years ago
- Speech-Driven Expression Blendshape Based on Single-Layer Self-Attention Network (AIWIN 2022) ☆76 · Updated 2 years ago
- Official GitHub repo for the paper "What comprises a good talking-head video generation?: A Survey and Benchmark" ☆90 · Updated 2 years ago
- Generating Talking Face Landmarks from Speech ☆158 · Updated 2 years ago
- An improved version of APB2Face: Real-Time Audio-Guided Multi-Face Reenactment ☆82 · Updated 3 years ago
- This is the source code of our 3DRW 2019 paper ☆80 · Updated 2 years ago
- The official PyTorch implementation for Face2Face^ρ (ECCV 2022) ☆222 · Updated last year
- Code for the ACCV 2020 paper "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses" ☆100 · Updated 3 years ago
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆129 · Updated last year
- CVPR 2022: Cross-Modal Perceptionist: Can Face Geometry be Gleaned from Voices? ☆129 · Updated 3 months ago
- Project page of 'Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks' ☆246 · Updated 3 years ago
- Code for training the models from the paper "Learning Individual Styles of Conversational Gestures" ☆381 · Updated last year
- A method for generating facial blendshape rigs from a set of example poses of a CG character (see the blendshape sketch after this list) ☆86 · Updated 3 years ago
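The last entry builds facial blendshape rigs from example poses. For reference, here is a minimal sketch of the linear delta-blendshape model that such rigs produce: each example pose contributes a per-vertex delta from the neutral mesh, and animation blends those deltas with per-frame weights. Function names and array shapes are illustrative assumptions, not taken from that repository.

```python
import numpy as np


def build_delta_blendshapes(neutral, example_poses):
    """neutral: (V, 3) vertices; example_poses: (K, V, 3) -> per-pose deltas (K, V, 3)."""
    return example_poses - neutral[None, :, :]


def apply_blendshapes(neutral, deltas, weights):
    """weights: (K,) mixing coefficients -> deformed mesh (V, 3)."""
    return neutral + np.tensordot(weights, deltas, axes=1)


# Toy usage: 4 vertices, 2 example poses.
neutral = np.zeros((4, 3))
poses = np.random.rand(2, 4, 3)
deltas = build_delta_blendshapes(neutral, poses)
mesh = apply_blendshapes(neutral, deltas, np.array([0.3, 0.7]))
```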