JiePKU / MoLE
An official PyTorch implementation of "MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts"
☆32 · Updated 7 months ago
Alternatives and similar repositories for MoLE
Users interested in MoLE are comparing it to the repositories listed below.
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆117 · Updated 5 months ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- [CVPR 2024] The official implementation of the paper "Relation Rectification in Diffusion Model" ☆48 · Updated 9 months ago
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆70 · Updated 5 months ago
- Official code for "CustAny: Customizing Anything from A Single Example", accepted by CVPR 2025 (Oral) ☆46 · Updated 2 months ago
- Official implementation of "MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis" ☆85 · Updated 11 months ago
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆52 · Updated last month
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆40 · Updated last year
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆64 · Updated 7 months ago
- FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax ☆18 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆103 · Updated last year
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆76 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆107 · Updated last year
- ☆64 · Updated 10 months ago
- TIP-I2V: A Million-Scale Real Text and Image Prompt Dataset for Image-to-Video Generation ☆30 · Updated 6 months ago
- Implementation of the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆64 · Updated 2 weeks ago
- ☆50 · Updated 6 months ago
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆27 · Updated last year
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆49 · Updated 8 months ago
- Benchmark dataset and code for MSRVTT-Personalization ☆38 · Updated 3 months ago
- Code for the paper "Redefining Temporal Modeling in Video Diffusion: The Vectorized Timestep Approach" ☆21 · Updated 8 months ago
- ☆42 · Updated 3 months ago
- [ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation ☆67 · Updated last year
- Code release for the CVPR 2023 paper "Make-A-Story: Visual Memory Conditioned Consistent Story Generation" ☆39 · Updated last year
- EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM ☆59 · Updated 2 months ago
- [ICLR 2025] Official implementation of "ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance" ☆41 · Updated 3 months ago
- [NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models ☆117 · Updated 7 months ago
- [ICLR 2024] Contextualized Diffusion Models for Text-Guided Image and Video Generation ☆68 · Updated last year
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing ☆102 · Updated last year