Picsart-AI-Research / StreamingT2V
[CVPR 2025] StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text
⭐1,595 · Updated 5 months ago
Alternatives and similar repositories for StreamingT2V
Users interested in StreamingT2V are comparing it to the libraries listed below.
- 📺 An End-to-End Solution for High-Resolution and Long Video Generation Based on Transformer Diffusion ⭐2,196 · Updated 5 months ago
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation ⭐1,864 · Updated 4 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ⭐2,930 · Updated 11 months ago
- [AAAI 2025] Official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via S… ⭐901 · Updated 2 weeks ago
- Fine-Grained Open Domain Image Animation with Motion Guidance ⭐943 · Updated 10 months ago
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA ⭐1,605 · Updated 11 months ago
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model ⭐751 · Updated 9 months ago
- Lumina-T2X is a unified framework for Text to Any Modality Generation ⭐2,219 · Updated 6 months ago
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with Dreambooth, achieving stunning videos… ⭐970 · Updated last year
- Official Code for MotionCtrl [SIGGRAPH 2024] ⭐1,458 · Updated 6 months ago
- [IJCV 2024] LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models ⭐937 · Updated 9 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ⭐2,771 · Updated last year
- High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance ⭐2,444 · Updated last month
- [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction ⭐941 · Updated 9 months ago
- VideoSys: An easy and efficient system for video generation ⭐1,997 · Updated last week
- Official repo for VGen: a holistic video generation ecosystem building on diffusion models ⭐3,131 · Updated 7 months ago
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images ⭐1,350 · Updated last week
- Official implementations for paper: Zero-shot Image Editing with Reference Imitation ⭐1,293 · Updated last year
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ⭐2,592 · Updated 6 months ago
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG) ⭐1,821 · Updated 7 months ago
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ⭐693 · Updated last year
- PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation ⭐1,834 · Updated 10 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ⭐773 · Updated last year
- SEED-Story: Multimodal Long Story Generation with Large Language Model ⭐867 · Updated 10 months ago
- InstantStyle: Free Lunch towards Style-Preserving in Text-to-Image Generation 🔥 ⭐1,960 · Updated 11 months ago
- A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gem… ⭐1,603 · Updated last month
- [ICCV 2025 Highlight] OminiControl: Minimal and Universal Control for Diffusion Transformer ⭐1,755 · Updated 2 months ago
- Mora: More like Sora for Generalist Video Generation ⭐1,567 · Updated 10 months ago
- CogView4, CogView3-Plus and CogView3 (ECCV 2024) ⭐1,087 · Updated 5 months ago
- [ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion… ⭐1,586 · Updated last year