aigc-apps / EasyAnimate
📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion
☆2,190 · Updated 5 months ago
Alternatives and similar repositories for EasyAnimate
Users that are interested in EasyAnimate are comparing it to the libraries listed below
- [CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,586 · Updated 4 months ago
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images. ☆1,248 · Updated this week
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer ☆1,722 · Updated last month
- Official repository of In-Context LoRA for Diffusion Transformers ☆1,980 · Updated 7 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ☆1,955 · Updated 10 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ☆1,600 · Updated 10 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ☆2,426 · Updated last week
- [ICCV 2025] 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning ☆1,184 · Updated 3 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo ☆1,603 · Updated 2 months ago
- A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gem… ☆1,565 · Updated last week
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,916 · Updated 10 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ☆2,210 · Updated 5 months ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment ☆1,337 · Updated last month
- [ICLR 2025] CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters totally), 2)… ☆1,462 · Updated 5 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,578 · Updated 5 months ago
- ☆1,015 · Updated 2 months ago
- A pipeline parallel training script for diffusion models. ☆1,347 · Updated this week
- [NeurIPS 2024] Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment ☆3,438 · Updated last week
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ☆1,824 · Updated 9 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆751 · Updated 8 months ago
- SkyReels V1: The first and most advanced open-source human-centric video foundation model ☆2,242 · Updated 4 months ago
- Image editing is worth a single LoRA! 0.1% training data for fantastic image editing! Training released! Surpasses GPT-4o in ID persisten… ☆1,865 · Updated 2 months ago
- CogView4, CogView3-Plus and CogView3 (ECCV 2024) ☆1,080 · Updated 4 months ago
- Official implementations for paper: VACE: All-in-One Video Creation and Editing ☆3,052 · Updated 2 months ago
- ☆2,157 · Updated 9 months ago
- Fine-Grained Open Domain Image Animation with Motion Guidance ☆937 · Updated 9 months ago
- ☆1,254 · Updated 3 months ago
- [AAAI 2025] 👔IMAGDressing👔: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation … ☆1,271 · Updated 2 months ago
- [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via Coherent Motion Synthesis ☆1,472 · Updated 2 weeks ago
- Scalable and memory-optimized training of diffusion models ☆1,243 · Updated 2 months ago