junhwanjang / visemenet-inference
3D Avatar Lip Synchronization from speech (JALI-based face rigging)
☆82 · Updated 3 years ago
Alternatives and similar repositories for visemenet-inference
Users interested in visemenet-inference are comparing it to the libraries listed below.
- ☆95 · Updated 3 years ago
- Blender add-on implementing the VOCA neural network. ☆59 · Updated 2 years ago
- Audio-driven facial animation generator with a BiLSTM used for transcribing the speech and a web interface displaying the avatar and the anim… ☆35 · Updated 2 years ago
- SAiD: Blendshape-based Audio-Driven Speech Animation with Diffusion ☆104 · Updated last year
- ☆101 · Updated last year
- ☆162 · Updated last year
- ☆193 · Updated last year
- ☆195 · Updated 3 years ago
- This is the repository for EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation ☆126 · Updated last year
- ☆44 · Updated last year
- Speech-Driven Expression Blendshape Based on Single-Layer Self-attention Network (AIWIN 2022) ☆76 · Updated 2 years ago
- Official implementation for Audio2Motion: Generating Diverse Gestures from Speech with Conditional Variational Autoencoders. ☆135 · Updated last year
- Audio2Face Avatar with Riva SDK functionality ☆73 · Updated 2 years ago
- Implementation based on "Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion" ☆162 · Updated 5 years ago
- ☆123 · Updated last year
- Web-first SDK that provides real-time ARKit-compatible 52 blend shapes from a camera feed, video, or image at 60 FPS using ML models. ☆84 · Updated 2 years ago
- Freeform Body Motion Generation from Speech ☆203 · Updated 2 years ago
- ☆84 · Updated 3 months ago
- ☆23 · Updated last year
- FaceFormer Emo: Speech-Driven 3D Facial Animation with Emotion Embedding ☆26 · Updated last year
- This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial A… ☆46 · Updated 2 years ago
- ☆72 · Updated last year
- DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer ☆158 · Updated last year
- This is the official source for our ICCV 2023 paper "EmoTalk: Speech-Driven Emotional Disentanglement for 3D Face Animation" ☆372 · Updated last year
- Speech to Facial Animation using GANs ☆40 · Updated 3 years ago
- The official implementation for the ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture gener… ☆127 · Updated 2 years ago
- Code for MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement ☆384 · Updated 2 years ago
- Code for the project "Audio-Driven Video-Synthesis of Personalised Moderations" ☆20 · Updated last year
- CLI tool for recording or replaying Epic Games' Live Link Face capture frames. ☆80 · Updated last year
- Code for the paper "EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model" ☆193 · Updated 2 years ago