magic-research / magic-animate
[CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
☆10,679 · Updated 7 months ago
Alternatives and similar repositories for magic-animate:
Users interested in magic-animate are comparing it to the libraries listed below:
- Animate Anyone: Consistent and Controllable Image-to-Video Synthesis for Character Animation ☆14,598 · Updated this week
- Official implementation of AnimateDiff. ☆10,975 · Updated 6 months ago
- Outfit Anyone: Ultra-high quality virtual try-on for Any Clothing and Any Person ☆5,751 · Updated 6 months ago
- Official implementation of DreaMoving ☆1,801 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,300 · Updated 8 months ago
- [SIGGRAPH Asia 2023] Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation ☆2,983 · Updated 11 months ago
- InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥 ☆11,381 · Updated 6 months ago
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ☆4,084 · Updated 10 months ago
- Accepted as a [NeurIPS 2024] spotlight presentation paper ☆6,160 · Updated 4 months ago
- Let us democratise high-resolution generation! (CVPR 2024) ☆1,993 · Updated 9 months ago
- Unofficial implementation of Animate Anyone ☆2,898 · Updated 7 months ago
- Official implementation of OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on ☆6,032 · Updated 9 months ago
- GUI-focused roop ☆4,829 · Updated 8 months ago
- [SIGGRAPH Asia 2022] VideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing in the Wild ☆6,857 · Updated 6 months ago
- StreamDiffusion: A Pipeline-Level Solution for Real-Time Interactive Generation ☆9,988 · Updated 2 months ago
- FaceChain is a deep-learning toolchain for generating your digital twin. ☆9,266 · Updated 2 months ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. ☆5,605 · Updated 7 months ago
- [CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation ☆12,299 · Updated 7 months ago
- MagicEdit: High-Fidelity Temporally Coherent Video Editing ☆1,794 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,675 · Updated 7 months ago
- Emote Portrait Alive: Generating Expressive Portrait Videos with Audio2Video Diffusion Model under Weak Conditions ☆7,577 · Updated 5 months ago
- Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference ☆4,442 · Updated 7 months ago
- Official repo for VGen: a holistic video generation ecosystem for video generation building on diffusion models ☆3,053 · Updated last month
- AnimateDiff for AUTOMATIC1111 Stable Diffusion WebUI ☆3,201 · Updated 4 months ago
- Convert your videos to densepose and use it on MagicAnimate ☆1,088 · Updated last year
- Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor… ☆21,463 · Updated 3 weeks ago
- [CVPR 2022] Thin-Plate Spline Motion Model for Image Animation. ☆3,523 · Updated last year
- Foundational Models for State-of-the-Art Speech and Text Translation ☆11,303 · Updated 2 months ago
- PhotoMaker [CVPR 2024] ☆9,774 · Updated 3 months ago