tencent-ailab / V-Express
V-Express aims to generate a talking head video under the control of a reference image, an audio track, and a sequence of V-Kps (facial keypoint) images.
☆2,334 · Updated 4 months ago
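For readers new to this class of models, the sketch below is a purely hypothetical illustration (not the actual V-Express API) of how the three conditioning inputs named above, namely a reference image, a driving audio track, and a per-frame sequence of V-Kps keypoint images, could be wired into a frame-by-frame generation loop. Every name in it is an illustrative placeholder.

```python
# Hypothetical sketch only: shows the data flow of an audio + keypoint driven
# talking-head pipeline, not the real V-Express interface. All identifiers
# below are placeholders invented for illustration.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class TalkingHeadConditions:
    reference_image: Path       # identity / appearance reference
    audio: Path                 # driving speech track
    vkps_frames: list[Path]     # one keypoint image per output video frame


def generate_talking_head(cond: TalkingHeadConditions, out_dir: Path) -> list[Path]:
    """Placeholder generation loop: one output frame per V-Kps frame.

    A real pipeline would condition a video diffusion model on all three
    inputs; here we only show how the inputs map to output frames, writing
    empty stand-in files.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    outputs: list[Path] = []
    for i, _kps in enumerate(cond.vkps_frames):
        frame = out_dir / f"frame_{i:05d}.png"
        frame.touch()           # stand-in for the actual synthesized frame
        outputs.append(frame)
    return outputs
```

The point of the sketch is simply that the V-Kps sequence sets the length and per-frame motion of the output, while the reference image and audio act as global conditions; consult the V-Express repository itself for the real inference entry points.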
Alternatives and similar repositories for V-Express
Users who are interested in V-Express are comparing it to the repositories listed below.
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,712 · Updated 11 months ago
- Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models ☆1,730 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,549 · Updated 3 months ago
- [ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion …" ☆1,576 · Updated 9 months ago
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,950 · Updated 11 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,371 · Updated 8 months ago
- MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting ☆4,223 · Updated last month
- [ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance ☆4,204 · Updated 10 months ago
- [CVPR 2024] This is the official source for our paper "SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis" ☆1,465 · Updated 9 months ago
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,561 · Updated 2 months ago
- Unofficial Implementation of Animate Anyone by Novita AI ☆776 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,394 · Updated last year
- GeneFace++: Generalized and Stable Real-Time 3D Talking Face Generation; Official Code ☆1,700 · Updated 7 months ago
- VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior ☆787 · Updated last year
- Code and dataset for photorealistic Codec Avatars driven from audio ☆2,807 · Updated 8 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,861 · Updated 8 months ago
- Unofficial Implementation of Animate Anyone ☆2,928 · Updated 10 months ago
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models ☆3,107 · Updated 4 months ago
- [AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning ☆3,896 · Updated 5 months ago
- Diffusion-based Portrait and Animal Animation ☆780 · Updated 3 months ago
- Official implementation of DreaMoving ☆1,799 · Updated last year
- Official implementation of Magic Clothing: Controllable Garment-Driven Image Synthesis ☆1,507 · Updated 10 months ago
- Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis; ICLR 2024 Spotlight; Official code ☆1,035 · Updated 7 months ago
- Wav2Lip UHQ extension for Automatic1111 ☆1,378 · Updated 11 months ago
- Bring portraits to life in real time! ONNX/TensorRT support! Real-time portrait driving! ☆888 · Updated 3 months ago
- A simple and open-source analogue of the HeyGen system ☆955 · Updated 10 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ☆1,919 · Updated 8 months ago
- [CVPR 2025] EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation ☆3,827 · Updated 3 months ago
- [ICCV'23] Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis ☆1,180 · Updated 2 months ago
- [ICLR 2025] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation ☆3,567 · Updated 3 months ago