zhongshaoyy / Audio2Face
☆95 · Updated 3 years ago
Alternatives and similar repositories for Audio2Face:
Users interested in Audio2Face are comparing it to the repositories listed below.
- 3D Avatar Lip Synchronization from speech (JALI-based face rigging) ☆80 · Updated 3 years ago
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022) ☆76 · Updated 2 years ago
- Implementation based on "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" ☆162 · Updated 5 years ago
- Audio-driven facial animation generator with BiLSTM used for transcribing the speech and web interface displaying the avatar and the anim… ☆35 · Updated 2 years ago
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders ☆131 · Updated last year
- Blender add-on to implement the VOCA neural network ☆59 · Updated 2 years ago
- Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement ☆384 · Updated 2 years ago
- ☆100 · Updated last year
- Mocap Dataset of “Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation” ☆160 · Updated 3 years ago
- Freeform Body Motion Generation from Speech ☆201 · Updated 2 years ago
- A repository for generating stylized talking 3D faces ☆279 · Updated 3 years ago
- The repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆124 · Updated last year
- Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses" ☆100 · Updated 4 years ago
- PyTorch implementation of "Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints" (AAA… ☆97 · Updated 2 years ago
- Code for the paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" ☆195 · Updated last year
- Code for the paper "Speech Driven Talking Face Generation from a Single Image and an Emotion Condition" ☆170 · Updated 2 years ago
- ☆196 · Updated 3 years ago
- ☆124 · Updated 11 months ago
- ☆44 · Updated last year
- ☆155 · Updated last year
- This repository contains the network architectures of NeuralVoicePuppetry ☆80 · Updated 4 years ago
- Web-first SDK that provides real-time ARKit-compatible 52 blend shapes from a camera feed, video or image at 60 FPS using ML models ☆84 · Updated 2 years ago
- CLI tool for recording or replaying Epic Games' Live Link face capture frames ☆80 · Updated last year
- The official source for our ACM MM 2023 paper "SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking … ☆136 · Updated last year
- [ECCV 2022] The implementation for "Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis" ☆341 · Updated 2 years ago
- A Python library to fit 3D morphable models to images of faces and capture facial performance over time with no markers or a special mo… ☆75 · Updated last year
- BlendShapeMaker (Python 3.6) ☆44 · Updated 3 years ago
- Implementation of the deformation transfer paper and its application in generating all the ARKit facial blend shapes for any 3D face ☆66 · Updated 3 years ago
- Official PyTorch implementation of SPECTRE: Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos ☆273 · Updated 3 weeks ago
- The official PyTorch implementation for Face2Face^ρ (ECCV 2022) ☆222 · Updated last year