JiePKU / MoLE
An official PyTorch implementation of "MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts"
☆33 · Updated 9 months ago
Alternatives and similar repositories for MoLE
Users interested in MoLE are comparing it to the repositories listed below.
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆122 · Updated 2 months ago
- Benchmark dataset and code of MSRVTT-Personalization ☆44 · Updated 2 months ago
- FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax ☆18 · Updated last year
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆137 · Updated 10 months ago
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆67 · Updated 10 months ago
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆95 · Updated last year
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing ☆65 · Updated last month
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated last year
- Official code for CustAny: Customizing Anything from A Single Example. Accepted by CVPR 2025 (Oral) ☆47 · Updated 4 months ago
- ☆50 · Updated 8 months ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆87 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆57 · Updated 3 months ago
- [ICML 2025] EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM ☆64 · Updated last month
- Text-conditioned image-to-video generation based on diffusion models ☆53 · Updated last year
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆85 · Updated last year
- Official implementation for "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆42 · Updated last year
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆132 · Updated last month
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆76 · Updated last year
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆70 · Updated 8 months ago
- Interactive Video Generation via Masked-Diffusion ☆84 · Updated last year
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆104 · Updated last year
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆28 · Updated last year
- Subjects200K dataset ☆117 · Updated 7 months ago
- [CVPR 2024] The official implementation of the paper Relation Rectification in Diffusion Model ☆48 · Updated 11 months ago
- The HD-VG-130M Dataset ☆119 · Updated last year
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- This repo contains the code for the PreciseControl project [ECCV'24] ☆64 · Updated 10 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆104 · Updated last year
- ☆66 · Updated last year