OpenTalker / StyleHEAT
[ECCV 2022] StyleHEAT: A framework for high-resolution editable talking face generation
☆654 · Updated 2 years ago
Alternatives and similar repositories for StyleHEAT
Users interested in StyleHEAT are comparing it to the repositories listed below.
- The source code of the ICCV 2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering" ☆547 · Updated 3 years ago
- [CVPR 2023] DPE: Disentanglement of Pose and Expression for General Video Portrait Editing ☆451 · Updated last year
- Papers about Face Reenactment / Talking Face Generation ☆449 · Updated last year
- [CVPR 2023] MetaPortrait: Identity-Preserving Talking Head Generation with Fast Personalized Adaptation ☆545 · Updated 2 years ago
- ☆519 · Updated last year
- [CVPR 2023] CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior ☆593 · Updated 2 years ago
- FACIAL: Synthesizing Dynamic Talking Face With Implicit Attribute Learning. ICCV 2021. ☆384 · Updated 3 years ago
- PyTorch implementation of the paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" ☆845 · Updated 3 years ago
- Official repository for Diffused Heads: Diffusion Models Beat GANs on Talking-Face Generation ☆487 · Updated last year
- The dataset and code for "Flow-guided One-shot Talking Face Generation with a High-resolution Audio-visual Dataset" ☆406 · Updated last year
- [CVPR 2023] The implementation for "DiffTalk: Crafting Diffusion Models for Generalized Audio-Driven Portraits Animation" ☆471 · Updated last year
- [ECCV 2022] The implementation for "Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis" ☆342 · Updated 2 years ago
- ☆424 · Updated last year
- The official code of the ICCV 2023 work "Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation" ☆254 · Updated 2 years ago
- Official code of the CVPR 2023 paper "StyleSync: High-Fidelity Generalized and Personalized Lip Sync in Style-based Generator" ☆322 · Updated 2 years ago
- Code for One-shot Talking Face Generation from Single-speaker Audio-Visual Correlation Learning (AAAI 2022)