jianzongwu / DiffSensei
Implementation of [CVPR 2025] "DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation"
⭐769 · Updated 2 months ago
Alternatives and similar repositories for DiffSensei:
Users interested in DiffSensei are comparing it to the libraries listed below.
- [CVPR'25] Official Implementations for Paper - AniDoc: Animation Creation Made Easier · ⭐507 · Updated last week
- 🔥🔥 UNO: A Universal Customization Method for Both Single and Multi-Subject Conditioning · ⭐775 · Updated this week
- ⭐727 · Updated 2 months ago
- [CVPR 2025 Highlight] Official implementation of "MangaNinja: Line Art Colorization with Precise Reference Following" · ⭐586 · Updated last month
- StoryMaker: Towards consistent characters in text-to-image generation · ⭐688 · Updated 4 months ago
- 📹 A more flexible framework that can generate videos at any resolution and creates videos from images. · ⭐914 · Updated this week
- Lumina-Image 2.0: A Unified and Efficient Image Generative Framework · ⭐642 · Updated 3 weeks ago
- [TPAMI under review] The official implementation of paper "BrushEdit: All-In-One Image Inpainting and Editing" · ⭐549 · Updated 3 months ago
- Official repository of In-Context LoRA for Diffusion Transformers · ⭐1,809 · Updated 4 months ago
- Official implementation of OneDiffusion paper (CVPR 2025) · ⭐623 · Updated 4 months ago
- ⭐306 · Updated this week
- AnimeGamer: Infinite Anime Life Simulation with Next Game State Prediction · ⭐284 · Updated last week
- A minimal and universal controller for FLUX.1. · ⭐1,485 · Updated last week
- [768 Resolution] [Any "SDXL" Model] [Various Conditions] [Arbitrary Views] Official impl. of "MV-Adapter: Multi-view Consistent Image Gen… · ⭐860 · Updated 3 weeks ago
- 🔥 ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation Using a Single Prompt · ⭐246 · Updated 2 weeks ago
- Phantom: Subject-Consistent Video Generation via Cross-Modal Alignment · ⭐547 · Updated last month
- Code Implementation of "PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data" · ⭐377 · Updated last month
- [ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model. · ⭐736 · Updated 4 months ago
- ⭐516 · Updated 3 months ago
- ⭐279 · Updated last month
- Memory-Guided Diffusion for Expressive Talking Video Generation · ⭐793 · Updated 2 months ago
- HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo · ⭐1,325 · Updated this week
- FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation · ⭐421 · Updated last month
- SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers · ⭐475 · Updated this week
- Official implementations for paper: VACE: All-in-One Video Creation and Editing · ⭐1,338 · Updated 2 weeks ago
- Light-A-Video: Training-free Video Relighting via Progressive Light Fusion · ⭐407 · Updated last week
- CVPR 2025 · ⭐842 · Updated last month
- Illumination Drawing Tools for Text-to-Image Diffusion Models · ⭐719 · Updated 4 months ago
- Any-length Video Inpainting and Editing with Plug-and-Play Context Control · ⭐334 · Updated 2 weeks ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation". · ⭐464 · Updated 3 months ago