tencent-ailab / V-Express
V-Express aims to generate a talking head video under the control of a reference image, an audio clip, and a sequence of V-Kps images.
☆2,360 · Updated 11 months ago
Alternatives and similar repositories for V-Express
Users interested in V-Express are comparing it to the repositories listed below.
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,638 · Updated 9 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,806 · Updated last year
- [ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion …" ☆1,605 · Updated last year
- Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models ☆1,786 · Updated last year
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆5,023 · Updated last year
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,495 · Updated last month
- MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting ☆5,130 · Updated 3 months ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,604 · Updated 3 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,470 · Updated last year
- [AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning ☆4,157 · Updated 4 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆803 · Updated 2 years ago
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,619 · Updated 9 months ago
- GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation; Official Code ☆1,796 · Updated last year
- Convert your videos to DensePose and use them with MagicAnimate ☆1,102 · Updated 2 years ago
- Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis ☆1,536 · Updated last year
- Official implementation of DreaMoving ☆1,801 · Updated last year
- Unofficial Implementation of Animate Anyone by Novita AI ☆782 · Updated last year
- [CVPR 2025] EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation ☆4,433 · Updated 4 months ago
- [NeurIPS 2024] Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment ☆3,505 · Updated 5 months ago
- 📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ☆2,242 · Updated 9 months ago
- Code and dataset for photorealistic Codec Avatars driven from audio ☆2,846 · Updated last year
- Accepted as [NeurIPS 2024] Spotlight Presentation Paper ☆6,372 · Updated last year
- Wav2Lip UHQ extension for Automatic1111 ☆1,414 · Updated last year
- A simple and open-source analogue of the HeyGen system ☆983 · Updated last year
- Unofficial Implementation of Animate Anyone ☆2,935 · Updated last year
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" ☆1,563 · Updated 6 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,978 · Updated last year
- This project implements Wav2Lip video lip-sync synthesis on top of SadTalker: lip shapes are generated from speech with a video file as the driving input, and a configurable facial-region enhancement sharpens the synthesized lip (face) area to improve clarity. DAIN deep-learning frame interpolation adds intermediate frames to smooth the lip-motion transitions, so that the synthesized lip… ☆2,000 · Updated 2 years ago
- Kolors: a photorealistic text-to-image diffusion model from the Kuaishou Kolors team ☆4,587 · Updated last year
- Diffusion-based Portrait and Animal Animation ☆849 · Updated 3 weeks ago