magic-research / magic-animate
[CVPR 2024] Official repository for "MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model"
☆10,887 · Updated 4 months ago
Alternatives and similar repositories for magic-animate
Users interested in magic-animate are comparing it to the libraries listed below.
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,797 · Updated 3 months ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,473 · Updated last year
- Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person ☆5,968 · Updated last year
- Official implementation of DreaMoving ☆1,801 · Updated last year
- Official implementation of AnimateDiff ☆11,961 · Updated last year
- Official implementations for paper: AnyDoor: Zero-shot Object-level Image Customization ☆4,204 · Updated last year
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆5,022 · Updated last year
- Convert your videos to DensePose and use it on MagicAnimate ☆1,102 · Updated 2 years ago
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild ☆7,200 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models ☆3,148 · Updated 11 months ago
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆5,013 · Updated last year
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆13,483 · Updated last year
- Let us democratise high-resolution generation! (CVPR 2024) ☆2,036 · Updated 2 months ago
- Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions ☆7,650 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,638 · Updated 9 months ago
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,808 · Updated 2 years ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with image prompt ☆6,387 · Updated last year
- Unofficial Implementation of Animate Anyone ☆2,935 · Updated last year
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,978 · Updated last year
- Code and dataset for photorealistic Codec Avatars driven from audio ☆2,849 · Updated last year
- StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation ☆10,563 · Updated last year
- [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation ☆3,004 · Updated last year
- Official implementation code of the paper "AnyText: Multilingual Visual Text Generation And Editing" ☆4,828 · Updated 9 months ago
- FaceChain is a deep-learning toolchain for generating your Digital-Twin ☆9,500 · Updated 6 months ago
- Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference ☆4,598 · Updated last year
- An intuitive GUI for GLIGEN that uses ComfyUI in the backend ☆2,049 · Updated last year
- [ECCV 2024] IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild ☆4,811 · Updated 9 months ago
- InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥 ☆11,891 · Updated last year
- [AAAI 2025] Official implementation of "OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on" ☆6,497 · Updated last year