Picsart-AI-Research / StreamingT2V
[CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
⭐ 1,604 · Updated 7 months ago
Alternatives and similar repositories for StreamingT2V
Users interested in StreamingT2V are comparing it to the repositories listed below.
- 📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion · ⭐ 2,227 · Updated 7 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors · ⭐ 2,960 · Updated last year
- Lumina-T2X is a unified framework for Text to Any Modality Generation · ⭐ 2,237 · Updated 8 months ago
- [AAAI 2025] Follow-Your-Click: This repo is the official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via S…" · ⭐ 905 · Updated last month
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation · ⭐ 1,879 · Updated 6 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA · ⭐ 1,618 · Updated last year
- Official code for MotionCtrl [SIGGRAPH 2024] · ⭐ 1,463 · Updated 8 months ago
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance · ⭐ 2,464 · Updated 3 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model · ⭐ 754 · Updated 10 months ago
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with Dreambooth, achieving stunning videos… · ⭐ 973 · Updated last year
- Fine-Grained Open Domain Image Animation with Motion Guidance · ⭐ 949 · Updated last year
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation · ⭐ 1,857 · Updated last year
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models · ⭐ 939 · Updated 11 months ago
- CogView4, CogView3-Plus, and CogView3 (ECCV 2024) · ⭐ 1,090 · Updated 7 months ago
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction · ⭐ 944 · Updated 11 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation · ⭐ 2,614 · Updated 7 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising · ⭐ 2,789 · Updated last year
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 · ⭐ 1,973 · Updated last year
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images · ⭐ 1,502 · Updated this week
- Official implementation of "MIMO: Controllable Character Video Synthesis with Spatial Decomposed Modeling" · ⭐ 1,554 · Updated 4 months ago
- SEED-Story: Multimodal Long Story Generation with Large Language Model · ⭐ 872 · Updated last year
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models · ⭐ 3,144 · Updated 9 months ago
- Official repository of In-Context LoRA for Diffusion Transformers · ⭐ 2,026 · Updated 10 months ago
- VideoSys: An easy and efficient system for video generation · ⭐ 2,003 · Updated 2 months ago
- Official implementation of the paper "Zero-shot Image Editing with Reference Imitation" · ⭐ 1,300 · Updated last year
- [ACM MM 2024] Official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion…" · ⭐ 1,593 · Updated last year
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation · ⭐ 778 · Updated last year
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion · ⭐ 769 · Updated last year
- Character Animation (AnimateAnyone, Face Reenactment) · ⭐ 3,444 · Updated last year
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer · ⭐ 1,810 · Updated 4 months ago