guoqincode / Open-AnimateAnyone
Unofficial Implementation of Animate Anyone
⭐2,936 · Updated last year
Alternatives and similar repositories for Open-AnimateAnyone
Users interested in Open-AnimateAnyone are comparing it to the libraries listed below.
- [ECCV 2024] Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance · ⭐4,231 · updated last year
- [TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators · ⭐1,333 · updated 3 weeks ago
- Character Animation (AnimateAnyone, Face Reenactment) · ⭐3,430 · updated last year
- [ECCV 2024] The official implementation of the paper "BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion" · ⭐1,647 · updated 7 months ago
- Official implementation of DreaMoving · ⭐1,802 · updated last year
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models · ⭐3,128 · updated 7 months ago
- Customized ID-consistent generation for humans · ⭐972 · updated 5 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors · ⭐2,921 · updated 11 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation · ⭐2,583 · updated 5 months ago
- Convert your videos to DensePose and use them with MagicAnimate · ⭐1,099 · updated last year
- MuseV: Infinite-length and High-Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising · ⭐2,765 · updated last year
- Official implementation of the paper "Anydoor: Zero-shot Object-level Image Customization" · ⭐4,174 · updated last year
- Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models" · ⭐1,757 · updated last year
- Let us democratise high-resolution generation! (CVPR 2024) · ⭐2,023 · updated last year
- [ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models · ⭐1,023 · updated 11 months ago
- Code for the SCIS 2025 paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation" · ⭐1,159 · updated 4 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 · ⭐1,960 · updated 10 months ago
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) · ⭐1,823 · updated 6 months ago
- [ICLR 2025] Hallo2: Long-Duration and High-Resolution Audio-driven Portrait Image Animation · ⭐3,604 · updated 5 months ago
- Official PyTorch implementation of "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" (ICLR …) · ⭐1,670 · updated 6 months ago
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text · ⭐1,589 · updated 4 months ago
- [ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing · ⭐1,438 · updated last year
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance · ⭐2,430 · updated 2 weeks ago
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation · ⭐4,999 · updated last year
- Transparent Image Layer Diffusion using Latent Transparency · ⭐2,150 · updated last year
- [ACM MM 2024] Official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion …" · ⭐1,588 · updated last year
- Lumina-T2X: a unified framework for Text-to-Any-Modality Generation · ⭐2,214 · updated 6 months ago
- MagicEdit: High-Fidelity Temporally Coherent Video Editing · ⭐1,800 · updated last year
- V-Express aims to generate a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images · ⭐2,349 · updated 6 months ago