aigc-apps / EasyAnimate
📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion
★ 1,764 · Updated this week
Alternatives and similar repositories for EasyAnimate:
Users interested in EasyAnimate are comparing it to the repositories listed below:
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ★ 1,493 · Updated 4 months ago
- StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ★ 1,486 · Updated last month
- 📹 A more flexible CogVideoX that can generate videos at any resolution and create videos from images ★ 623 · Updated last month
- Official repository of In-Context LoRA for Diffusion Transformers ★ 1,512 · Updated last month
- A minimal and universal controller for FLUX.1 ★ 1,129 · Updated this week
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ★ 2,137 · Updated 4 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ★ 2,132 · Updated 5 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ★ 2,733 · Updated 4 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ★ 2,410 · Updated 5 months ago
- Latte: Latent Diffusion Transformer for Video Generation ★ 1,762 · Updated this week
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ★ 702 · Updated last month
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ★ 1,746 · Updated 2 months ago
- [AAAI 2025] Follow-Your-Click: This repo is the official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via S… ★ 881 · Updated 9 months ago
- [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters totally), 2)… ★ 1,103 · Updated this week
- Memory-Guided Diffusion for Expressive Talking Video Generation ★ 688 · Updated this week
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ★ 1,740 · Updated 4 months ago
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with Dreambooth, achieving stunning videos… ★ 936 · Updated 5 months ago
- ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment ★ 1,131 · Updated 6 months ago
- StoryMaker: Towards consistent characters in text-to-image generation ★ 633 · Updated last month
- Fine-Grained Open Domain Image Animation with Motion Guidance ★ 855 · Updated 3 months ago
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ★ 1,741 · Updated last month
- Memory-optimized training scripts for video models based on Diffusers ★ 778 · Updated this week
- [ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by only adding a single line of code! ★ 794 · Updated last month
- [NeurIPS 2024] Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment ★ 3,028 · Updated 2 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ★ 2,575 · Updated 7 months ago
- Taming Stable Diffusion for Lip Sync! ★ 2,154 · Updated last week