HumanAIGC / EMO
Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
☆7,616 · Updated 7 months ago
Alternatives and similar repositories for EMO:
Users interested in EMO are comparing it to the repositories listed below.
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,931 · Updated 9 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,680 · Updated 9 months ago
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,799 · Updated 9 months ago
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆6,974 · Updated 8 months ago
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆12,596 · Updated 9 months ago
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,798 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,368 · Updated 10 months ago
- Unofficial Implementation of Animate Anyone ☆2,930 · Updated 9 months ago
- Official implementation of AnimateDiff ☆11,284 · Updated 8 months ago
- [CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model ☆10,719 · Updated 9 months ago
- Official implementation of DreaMoving ☆1,803 · Updated last year
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" ☆1,707 · Updated last year
- This project aims to reproduce Sora (OpenAI's T2V model); we hope the open-source community will contribute to it. ☆11,939 · Updated 2 weeks ago
- InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥 ☆11,549 · Updated 9 months ago
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,725 · Updated 2 months ago
- An image prompt adapter designed to enable a pretrained text-to-image diffusion model to generate images from an image prompt ☆5,847 · Updated 9 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,519 · Updated last month
- Accepted as a NeurIPS 2024 spotlight presentation paper ☆6,267 · Updated 6 months ago
- Official code for Stable Cascade ☆6,595 · Updated 8 months ago
- GeneFace: Generalized and High-Fidelity 3D Talking Face Synthesis; ICLR 2023; official code ☆2,601 · Updated 6 months ago
- V-Express aims to generate a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images. ☆2,324 · Updated 2 months ago
- Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance ☆4,193 · Updated 9 months ago
- [AAAI 2025] Official implementation of "OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on" ☆6,202 · Updated 11 months ago
- Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person ☆5,840 · Updated 8 months ago
- MuseTalk: Real-Time High Quality Lip Synchronization with Latent Space Inpainting ☆3,935 · Updated this week
- StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation ☆10,139 · Updated 4 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,834 · Updated 7 months ago
- [ACM MM 2024] Official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion …" ☆1,561 · Updated 8 months ago
- GUI-focused roop ☆4,951 · Updated 10 months ago
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models ☆3,098 · Updated 3 months ago