RedAIGC / StoryMaker
StoryMaker: Towards consistent characters in text-to-image generation
☆703 · Updated 8 months ago
Alternatives and similar repositories for StoryMaker
Users interested in StoryMaker are comparing it to the libraries listed below.
- [ECCV 2024] Official inference code for the paper "Glyph-ByT5: A Customized Text Encoder for Accurate Visual Text Rendering" and… ☆599 · Updated 2 months ago
- AutoStudio: Crafting Consistent Subjects in Multi-turn Interactive Image Generation ☆445 · Updated 3 months ago
- ☆591 · Updated last week
- [ECCV 2024] OMG: Occlusion-friendly Personalized Multi-concept Generation In Diffusion Models ☆692 · Updated last year
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. ☆751 · Updated 7 months ago
- Official implementation of "FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on" ☆575 · Updated 5 months ago
- PyTorch implementation of "Stable-Hair: Real-World Hair Transfer via Diffusion Model" (AAAI 2025) ☆496 · Updated 4 months ago
- [TPAMI under review] Official implementation of the paper "BrushEdit: All-In-One Image Inpainting and Editing" ☆572 · Updated 7 months ago
- [ICLR 2025] DisPose: Disentangling Pose Guidance for Controllable Human Image Animation ☆369 · Updated 6 months ago
- [ICCV 2025] Code implementation of "ArtEditor: Learning Customized Instructional Image Editor from Few-Shot Examples" ☆413 · Updated 3 months ago
- [CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation ☆771 · Updated last year
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers ☆554 · Updated last month
- ☆747 · Updated 8 months ago
- ☆428 · Updated 10 months ago
- [CVPR 2025] Official implementation of "AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models" ☆324 · Updated 3 months ago
- SEED-Story: Multimodal Long Story Generation with Large Language Model ☆857 · Updated 9 months ago
- ☆520 · Updated 6 months ago
- MagicTryOn is a video virtual try-on framework based on a large-scale video diffusion Transformer. ☆399 · Updated 2 weeks ago
- All-round Creator and Editor ☆229 · Updated 6 months ago
- 🔥 ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt ☆278 · Updated 2 months ago
- Official PyTorch implementation of StreamV2V. ☆505 · Updated 5 months ago
- Code and data for "AnyV2V: A Tuning-Free Framework For Any Video-to-Video Editing Tasks" [TMLR 2024] ☆602 · Updated 9 months ago
- DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework ☆348 · Updated 7 months ago
- [SIGGRAPH Asia 2024] Official implementation of "Follow-Your-Emoji: Fine-Controllable and Expressive … ☆413 · Updated 3 months ago
- ViViD: Video Virtual Try-on using Diffusion Models ☆542 · Updated last year
- Diffusion-Tryon-Trainer ☆146 · Updated last year
- [ICCV 2025] MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion ☆228 · Updated last month
- [ICLR 2025] Animate-X - PyTorch Implementation ☆304 · Updated 6 months ago
- ☆381 · Updated last year
- [ECCV 2024] HiDiffusion: Increases the resolution and speed of your diffusion model by adding only a single line of code! ☆825 · Updated 8 months ago