HVision-NKU / StoryDiffusion
Accepted as a [NeurIPS 2024] Spotlight presentation paper
☆5,977 · Updated last month
Related projects
Alternatives and complementary repositories for StoryDiffusion
- AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation ☆4,659 · Updated 4 months ago
- [ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors ☆2,596 · Updated 2 months ago
- IC-Light: More relighting! ☆5,582 · Updated 3 weeks ago
- V-Express: generates a talking-head video under the control of a reference image, an audio clip, and a sequence of V-Kps images ☆2,258 · Updated last week
- DiffSynth-Studio: Enjoy the magic of Diffusion models! ☆6,598 · Updated this week
- Official repo for VGen: a holistic video generation ecosystem built on diffusion models ☆2,974 · Updated last month
- Omost: Your image is almost there! ☆7,337 · Updated 3 months ago
- MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising ☆2,465 · Updated 4 months ago
- [NeurIPS 2024] Official code for PuLID: Pure and Lightning ID Customization via Contrastive Alignment ☆2,654 · Updated 3 weeks ago
- Character Animation (AnimateAnyone, Face Reenactment) ☆3,190 · Updated 5 months ago
- Official implementation of the paper "AnyDoor: Zero-shot Object-level Image Customization" ☆4,007 · Updated 7 months ago
- MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation ☆2,283 · Updated 3 months ago
- [CVPR 2024] MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model ☆10,486 · Updated 5 months ago
- Kolors: photorealistic text-to-image diffusion model from the Kwai Kolors team ☆3,885 · Updated last week
- Code for Pyramidal Flow Matching for Efficient Video Generative Modeling ☆2,372 · Updated this week
- DemoFusion: Let us democratise high-resolution generation! (CVPR 2024) ☆1,982 · Updated 7 months ago
- Lumina-T2X: a unified framework for Text to Any Modality Generation ☆2,086 · Updated 3 months ago
- [ECCV 2024] IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild ☆3,950 · Updated 2 weeks ago
- IP-Adapter: an image prompt adapter that enables a pretrained text-to-image diffusion model to generate images from an image prompt ☆5,292 · Updated 4 months ago
- Understand Human Behavior to Align True Needs ☆3,490 · Updated 4 months ago
- Official implementation of AnimateDiff ☆10,612 · Updated 3 months ago
- StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text ☆1,427 · Updated this week
- Official implementation of DreaMoving ☆1,795 · Updated 10 months ago
- [SIGGRAPH Asia 2024, Journal Track] ToonCrafter: Generative Cartoon Interpolation ☆5,370 · Updated 2 months ago
- [ACM MM 2024] Official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding" ☆1,453 · Updated 3 months ago
- Open-Sora: Democratizing Efficient Video Production for All ☆22,312 · Updated this week
- moondream: a tiny vision language model ☆5,798 · Updated this week
- Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding ☆3,465 · Updated last month