michaelzhang-ai / vid2vid
A modified version of vid2vid for the Speech2Video and Text2Video papers
☆35 · Updated 2 years ago
Alternatives and similar repositories for vid2vid
Users interested in vid2vid are comparing it to the repositories listed below.
- Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses" ☆100 · Updated 4 years ago
- Audio-driven video synthesis ☆41 · Updated 2 years ago
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… ☆96 · Updated 3 years ago
- ☆95 · Updated 4 years ago
- Mocap dataset of "Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation" ☆160 · Updated 3 years ago
- Code for the paper "Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion" (IJCAI 2021) ☆8 · Updated 3 years ago
- SyncTalkFace: Talking Face Generation for Precise Lip-syncing via Audio-Lip Memory ☆33 · Updated 2 years ago
- Splitting the ASR probability distribution results into Chinese pinyin, so as to extract more effective features for Chinese speech… ☆21 · Updated 2 years ago
- Cloned repository from Hugging Face Spaces (CVPR 2022 Demo) ☆54 · Updated 2 years ago
- Learning Lip Sync of Obama from Speech Audio ☆66 · Updated 4 years ago
- Code for the project: "Audio-Driven Video-Synthesis of Personalised Moderations" ☆20 · Updated last year
- Speech to Facial Animation using GANs ☆40 · Updated 3 years ago
- ☆123 · Updated last year
- An improved version of APB2Face: Real-Time Audio-Guided Multi-Face Reenactment ☆82 · Updated 3 years ago
- ☆34 · Updated 3 years ago
- A repository for generating stylized talking 3D faces ☆279 · Updated 3 years ago
- The code for the paper "Speech Driven Talking Face Generation from a Single Image and an Emotion Condition" ☆170 · Updated 2 years ago
- This repository contains the network architectures of NeuralVoicePuppetry. ☆178 · Updated 5 years ago
- Code for the paper 'EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model' ☆194 · Updated 2 years ago
- Audio-Visual Lip Synthesis via Intermediate Landmark Representation ☆18 · Updated 2 years ago
- Audio-driven facial animation generator with a BiLSTM used for transcribing the speech and a web interface displaying the avatar and the anim… ☆35 · Updated 2 years ago
- DINet-based inference service, running inference on video streams and video files ☆16 · Updated last year
- Source for "CVCUDA_FaceStoreHelper", released by Psyche AI Inc ☆67 · Updated last year
- Official PyTorch implementation for "APB2Face: Audio-guided face reenactment with auxiliary pose and blink signals", ICASSP'20 ☆65 · Updated 3 years ago
- ☆8 · Updated last year
- FaceFormer Emo: Speech-Driven 3D Facial Animation with Emotion Embedding ☆27 · Updated last year
- Talking Face Generation system ☆19 · Updated last year
- An updated version of virtual model making ☆92 · Updated 3 years ago
- 📖 A curated list of resources dedicated to avatars. ☆59 · Updated 7 months ago
- Aims to accelerate image-animation-model inference through inference frameworks such as ONNX, TensorRT, and OpenVINO. ☆76 · Updated last year