Yaofang-Liu / Mochi-Full-Finetuner
Code for full fine-tuning of the Mochi model with FSDP (and CP)
☆30 · Updated last month
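The description above mentions full-parameter fine-tuning with FSDP (Fully Sharded Data Parallel). As a rough illustration of what that setup looks like in PyTorch, here is a minimal sketch; it is not taken from this repository, and `build_mochi_transformer`, `train_dataloader`, and the hyperparameters are placeholders.

```python
# Minimal FSDP fine-tuning sketch (assumed structure, not the repository's actual script).
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy


def main():
    # One process per GPU, launched e.g. via torchrun.
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = build_mochi_transformer()  # placeholder: load the video DiT backbone
    model = FSDP(
        model,
        sharding_strategy=ShardingStrategy.FULL_SHARD,  # shard params, grads, optimizer state
        device_id=torch.cuda.current_device(),
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    for batch in train_dataloader():  # placeholder dataloader yielding model inputs
        loss = model(**batch).loss    # placeholder: assumes the model returns a loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()


if __name__ == "__main__":
    main()
```

Context parallelism (the "CP" in the description) would additionally split each sequence across ranks; that part is omitted here.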
Alternatives and similar repositories for Mochi-Full-Finetuner
Users interested in Mochi-Full-Finetuner are comparing it to the libraries listed below
- [WACV 2025] MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning ☆95 · Updated 4 months ago
- ☆63 · Updated last year
- Unofficial extension implementation of CausVid ☆50 · Updated 4 months ago
- [ICLR 2024] Code for FreeNoise based on AnimateDiff ☆107 · Updated last year
- ☆30 · Updated 5 months ago
- Official code of "LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer" ☆66 · Updated 5 months ago
- ☆46 · Updated last month
- [AAAI 2025] Anywhere: A Multi-Agent Framework for User-Guided, Reliable, and Diverse Foreground-Conditioned Image Generation ☆43 · Updated last year
- Collection of scripts to build small-scale datasets for fine-tuning video generation models ☆65 · Updated 5 months ago
- Code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model" ☆98 · Updated 9 months ago
- Blending Custom Photos with Video Diffusion Transformers ☆47 · Updated 7 months ago
- ☆66 · Updated last year
- Code for the paper "Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach" ☆34 · Updated 10 months ago
- [ECCV 2024] HumanRefiner: Benchmarking Abnormal Human Generation and Refining with Coarse-to-fine Pose-Reversible Guidance ☆47 · Updated 10 months ago
- ☆86 · Updated last year
- [ICCV 2025] Omegance: A Single Parameter for Various Granularities in Diffusion-Based Synthesis ☆52 · Updated 2 months ago
- [ICML 2025] Official PyTorch implementation of the paper "Ultra-Resolution Adaptation with Ease" ☆104 · Updated 4 months ago
- Official implementation of EvoSearch: Scaling Image and Video Generation via Test-Time Evolutionary Search ☆88 · Updated 2 weeks ago
- ☆29 · Updated 5 months ago
- Fine-Grained Subject-Specific Attribute Expression Control in T2I Models ☆128 · Updated 6 months ago
- [ACM MM 2024] Official implementation of "ZePo: Zero-Shot Portrait Stylization with Faster Sampling" ☆41 · Updated last year
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆122 · Updated 2 months ago
- Official repository for "LatentMan: Generating Consistent Animated Characters using Image Diffusion Models" [CVPRW 2024] ☆22 · Updated last year
- Lightning-fast (~1 s) and accurate drag-based image editing ☆80 · Updated 10 months ago
- Official implementation of the ICCV 2025 paper "CharaConsist: Fine-Grained Consistent Character Generation" ☆121 · Updated last month
- Piece it Together: Part-Based Concepting with IP-Priors ☆92 · Updated 4 months ago
- InstantUnify: Integrates Multimodal LLM into Diffusion Models 🔥 ☆40 · Updated last year
- [NeurIPS 2024] Invertible Consistency Distillation for Text-Guided Image Editing in Around 7 Steps ☆99 · Updated last year
- [arXiv 2024] Edicho: Consistent Image Editing in the Wild ☆118 · Updated 7 months ago
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆58 · Updated 3 months ago