HumanAIGC / EMO
Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions
☆7,649 · Updated last year
Alternatives and similar repositories for EMO
Users interested in EMO are comparing it to the repositories listed below.
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,999 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,430 · Updated last year
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,768 · Updated last year
- Official implementation of DreaMoving ☆1,802 · Updated last year
- V-Express aims to generate a talking head video under the control of a reference image, an audio clip, and a sequence of V-Kps images. ☆2,352 · Updated 7 months ago
- [CVPR 2024] Official repository for "MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model" ☆10,834 · Updated last year
- InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥 ☆11,787 · Updated last year
- Official implementations for paper: DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models ☆1,757 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models ☆3,128 · Updated 7 months ago
- Official implementation of AnimateDiff. ☆11,704 · Updated last year
- Official implementations for paper: Anydoor: zero-shot object-level image customization ☆4,176 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,586 · Updated 5 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,930 · Updated 11 months ago
- [AAAI 2025] Official implementation of "OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on" ☆6,400 · Updated last year
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,437 · Updated last month
- [ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion … ☆1,585 · Updated last year
- Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person ☆5,940 · Updated last year
- Accepted as [NeurIPS 2024] Spotlight Presentation Paper ☆6,333 · Updated 11 months ago
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆13,146 · Updated last year
- Kolors Team ☆4,523 · Updated 9 months ago
- ☆2,458 · Updated last year
- Code and dataset for photorealistic Codec Avatars driven from audio ☆2,832 · Updated 11 months ago
- [AAAI 2025] EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning ☆4,039 · Updated 3 weeks ago
- GUI-focused roop ☆5,122 · Updated last year
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆7,122 · Updated last year
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,761 · Updated 6 months ago
- An intuitive GUI for GLIGEN that uses ComfyUI in the backend ☆2,046 · Updated last year
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,593 · Updated 5 months ago
- Let us democratise high-resolution generation! (CVPR 2024) ☆2,025 · Updated last year
- This project implements Wav2Lip video lip-sync based on SadTalkers. Lip shapes are generated by driving a video file with audio, and a configurable enhancement of the synthesized lip (face) region is applied to improve the clarity of the generated lips. The DAIN deep-learning frame-interpolation algorithm is used to insert intermediate frames into the generated video, smoothing the lip-motion transitions between frames so that the synthesized lip… ☆1,976 · Updated 2 years ago