Mocap Dataset of “Write-a-speaker: Text-based Emotional and Rhythmic Talking-head Generation”
☆161 · Oct 15, 2021 · Updated 4 years ago
Alternatives and similar repositories for Write-a-Speaker
Users interested in Write-a-Speaker are comparing it to the repositories listed below.
- Code for the IJCAI 2021 paper “Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion” ☆353 · Feb 15, 2024 · Updated 2 years ago
- The dataset and code for “Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset” ☆424 · May 12, 2024 · Updated last year
- AI-generated character ☆477 · Oct 18, 2023 · Updated 2 years ago
- ☆208 · Mar 10, 2021 · Updated 4 years ago
- Live Speech Portraits: Real-Time Photorealistic Talking-Head Animation (SIGGRAPH Asia 2021) ☆1,282 · Jun 19, 2023 · Updated 2 years ago
- Code for the paper “Audio-Driven Emotional Video Portraits” ☆314 · Mar 16, 2022 · Updated 3 years ago
- http://www.facegood.cc ☆1,908 · Feb 8, 2023 · Updated 3 years ago
- Code for “Audio-driven Talking Face Video Generation with Learning-based Personalized Head Pose” (arXiv 2020) and “Predicting Personalize…” ☆775 · Dec 15, 2023 · Updated 2 years ago
- Code for “Pose-Controllable Talking Face Generation by Implicitly Modularized Audio-Visual Representation” (CVPR 2021) ☆961 · Jan 6, 2024 · Updated 2 years ago
- Code for “One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning” (AAAI 2022) ☆359 · Jan 16, 2023 · Updated 3 years ago
- This repository contains the network architectures of NeuralVoicePuppetry. ☆179 · Jun 12, 2020 · Updated 5 years ago
- A PyTorch implementation of “AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis” ☆1,068 · Oct 27, 2023 · Updated 2 years ago
- MEAD: A Large-scale Audio-visual Dataset for Emotional Talking-face Generation (ECCV 2020) ☆293 · Jul 7, 2024 · Updated last year
- Code for “MeshTalk: 3D Face Animation from Speech using Cross-Modality Disentanglement” ☆399 · Oct 3, 2022 · Updated 3 years ago
- FaceFormer: Speech-Driven 3D Facial Animation with Transformers (CVPR 2022) ☆905 · Aug 22, 2023 · Updated 2 years ago
- AudioDVP: Photorealistic Audio-driven Video Portraits ☆301 · Feb 27, 2024 · Updated 2 years ago
- ☆94 · Aug 7, 2021 · Updated 4 years ago
- ☆50 · Dec 8, 2022 · Updated 3 years ago
- Implementation of “Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis” (ECCV 2022) ☆342 · Jan 10, 2023 · Updated 3 years ago
- Code for the paper “Speech Driven Talking Face Generation from a Single Image and an Emotion Condition” ☆172 · Apr 9, 2023 · Updated 2 years ago
- ☆521 · Aug 14, 2025 · Updated 6 months ago
- “Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses” (ACCV 2020) ☆100 · Feb 27, 2026 · Updated last week
- StyleHEAT: A framework for high-resolution editable talking face generation (ECCV 2022) ☆657 · Mar 26, 2023 · Updated 2 years ago
- ☆526 · Dec 26, 2023 · Updated 2 years ago
- ☆105 · Jul 5, 2023 · Updated 2 years ago
- Extension of the Wav2Lip repository for processing high-quality videos. ☆549 · Feb 7, 2023 · Updated 3 years ago
- PyTorch implementation of “One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing” ☆851 · Apr 19, 2022 · Updated 3 years ago
- Official repository for the paper “What comprises a good talking-head video generation?: A Survey and Benchmark” ☆91 · Dec 8, 2022 · Updated 3 years ago
- ☆72 · Jun 4, 2023 · Updated 2 years ago
- A repository for generating stylized talking 3D faces ☆279 · Nov 11, 2021 · Updated 4 years ago
- Accelerates G2pw inference by roughly 8–10×: loop-generated predictive data is produced only once, and model loop prediction b… ☆14 · Dec 30, 2023 · Updated 2 years ago
- “Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary” (ICASSP 2022) ☆441 · Jun 4, 2023 · Updated 2 years ago
- This codebase demonstrates how to synthesize realistic 3D character animations given an arbitrary speech signal and a static character me… ☆1,253 · Aug 20, 2024 · Updated last year
- Wav2Lip-Emotion extends Wav2Lip to modify facial expressions of emotions via L1 reconstruction and pre-trained emotion objectives. We als… ☆97 · May 23, 2022 · Updated 3 years ago
- Code for the paper “EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model” ☆201 · Apr 28, 2023 · Updated 2 years ago
- CVPR 2019 ☆259 · May 24, 2023 · Updated 2 years ago
- ☆1,030 · Mar 20, 2024 · Updated last year
- FACIAL: Synthesizing Dynamic Talking Face with Implicit Attribute Learning (ICCV 2021) ☆383 · Jun 30, 2022 · Updated 3 years ago
- Implementation based on “Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion” ☆163 · Apr 7, 2020 · Updated 5 years ago