HumanAIGC / EMO
Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
☆7,634 · Updated 10 months ago
Alternatives and similar repositories for EMO
Users interested in EMO are comparing it to the repositories listed below.
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,956 · Updated 11 months ago
- [CVPR 2024] Official repository for "MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model" ☆10,742 · Updated 11 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,726 · Updated 11 months ago
- MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting ☆4,310 · Updated last month
- Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person ☆5,893 · Updated 10 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,877 · Updated 9 months ago
- Kolors Team ☆4,462 · Updated 7 months ago
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆7,078 · Updated 10 months ago
- Official implementation of DreaMoving ☆1,803 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,554 · Updated 3 months ago
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆1,735 · Updated last year
- Official implementation of AnimateDiff ☆11,502 · Updated 10 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,375 · Updated 8 months ago
- InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥 ☆11,668 · Updated 11 months ago
- [AAAI 2025] Official implementation of "OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on" ☆6,320 · Updated last year
- V-Express aims to generate a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images. ☆2,342 · Updated 4 months ago
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆12,890 · Updated 11 months ago
- [AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning ☆3,938 · Updated 6 months ago
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,736 · Updated 4 months ago
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,865 · Updated 11 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,401 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models ☆3,109 · Updated 5 months ago
- Accepted as a [NeurIPS 2024] Spotlight Presentation Paper ☆6,305 · Updated 8 months ago
- [ACM MM 2024] Official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion …" ☆1,580 · Updated 10 months ago
- Unofficial implementation of Animate Anyone ☆2,930 · Updated 11 months ago
- [ECCV 2024] IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild ☆4,556 · Updated 3 months ago
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ☆4,156 · Updated last year
- [ICLR 2023] GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; official code ☆2,608 · Updated 8 months ago
- FaceChain is a deep-learning toolchain for generating your digital twin. ☆9,438 · Updated 2 weeks ago
- Convert your videos to DensePose and use them with MagicAnimate ☆1,092 · Updated last year