Vchitect / SEINE
[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
Related projects:
- LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models
- MotionCtrl [SIGGRAPH 2024] (official code)
- [CVPR 2024] PIA, your Personalized Image Animator. Animate your images by text prompt, combining with DreamBooth, achieving stunning videos…
- Fine-Grained Open Domain Image Animation with Motion Guidance
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models
- Concept Sliders for Precise Control of Diffusion Models
- [ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video Diffusion Models
- ELLA: Equip Diffusion Models with LLM for Enhanced Semantic Alignment
- AnyV2V: A Tuning-Free Framework for Any Video-to-Video Editing Tasks (code and data)
- AnimateLCM: Let's Accelerate the Video Generation within 4 Steps!
- [ICML 2024] MagicPose (also known as MagicDance): Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion
- [ECCV 2024] FreeInit: Bridging Initialization Gap in Video Diffusion Models
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
- VideoComposer: Compositional Video Synthesis with Motion Controllability (official repo)
- StreamMultiDiffusion: Real-Time Interactive Generation with Region-Based Semantic Control (official code)
- ✨ Hotshot-XL: State-of-the-art AI text-to-GIF model trained to work alongside Stable Diffusion XL
- Controllable video and image generation: SVD, Animate Anyone, ControlNet, ControlNeXt, LoRA
- [ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by adding only a single line of code!
- [ICLR 2024] ControlVideo: Training-free Controllable Text-to-Video Generation (official PyTorch implementation)
- Finetune ModelScope's Text-to-Video model using Diffusers 🧨
- AnimateZero: Video Diffusion Models are Zero-Shot Image Animators (official PyTorch implementation)
- DreamPose: Fashion Image-to-Video Synthesis via Stable Diffusion (official implementation)
- MagicAvatar: Multimodal Avatar Generation and Animation
- [CVPR 2024 Oral] FreeU: Free Lunch in Diffusion U-Net
- Style Aligned Image Generation via Shared Attention (official code)
- [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG)