JiePKU / MoLE
An official PyTorch implementation of "MoLE: Enhancing Human-centric Text-to-image Diffusion via Mixture of Low-rank Experts"
☆33 · Updated 8 months ago
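The title points to a mixture of low-rank (LoRA-style) experts layered on top of a pretrained diffusion backbone. As a rough, hypothetical sketch of that general idea (not the code in this repository), a mixture-of-LoRA linear layer in PyTorch might look like the following; the names `LoRAExpert`, `MoLELinear`, `rank`, and `num_experts` are illustrative assumptions.

```python
# Hypothetical sketch of a "mixture of low-rank experts" linear layer in PyTorch.
# It illustrates the general idea only and is not the official MoLE code;
# all class and parameter names here are invented for this example.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> up(down(x)), with rank << in/out features."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)  # project to low rank
        self.up = nn.Linear(rank, out_features, bias=False)   # project back up
        nn.init.zeros_(self.up.weight)                         # adapter starts as a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class MoLELinear(nn.Module):
    """Frozen base linear layer plus a softly gated mixture of LoRA experts."""

    def __init__(self, in_features: int, out_features: int,
                 num_experts: int = 4, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():                       # keep pretrained weights frozen
            p.requires_grad_(False)
        self.experts = nn.ModuleList(
            LoRAExpert(in_features, out_features, rank) for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_features, num_experts)        # per-token routing logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.gate(x), dim=-1)                        # (..., E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)   # (..., out, E)
        mixed = (expert_out * weights.unsqueeze(-2)).sum(dim=-1)         # (..., out)
        return self.base(x) + mixed


if __name__ == "__main__":
    layer = MoLELinear(320, 320, num_experts=4, rank=8)
    tokens = torch.randn(2, 77, 320)                           # (batch, seq, dim)
    print(layer(tokens).shape)                                 # torch.Size([2, 77, 320])
```

In a setup like this, only the experts and the gate would be trained while the pretrained base weights stay frozen; how the actual MoLE model routes and places its experts is defined by the paper and repository, not by this sketch.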
Alternatives and similar repositories for MoLE
Users that are interested in MoLE are comparing it to the libraries listed below
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆119 · Updated last month
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆66 · Updated 9 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated last year
- Official code for CustAny: Customizing Anything from A Single Example. Accepted by CVPR2025 (Oral) ☆46 · Updated 3 months ago
- Benchmark dataset and code of MSRVTT-Personalization ☆44 · Updated last month
- Code for ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆103 · Updated last year
- [CVPR2024] CapHuman: Capture Your Moments in Parallel Universes ☆97 · Updated 8 months ago
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆28 · Updated last year
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆136 · Updated 9 months ago
- [ICML 2025] EasyRef: Omni-Generalized Group Image Reference for Diffusion Models via Multimodal LLM ☆62 · Updated 3 weeks ago
- Official implementation for "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆41 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆56 · Updated 3 months ago
- ☆66 · Updated 11 months ago
- ☆29 · Updated last year
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆93 · Updated last year
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- Subjects200K dataset ☆114 · Updated 6 months ago
- [CVPR2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆87 · Updated last year
- Interactive Video Generation via Masked-Diffusion ☆84 · Updated last year
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing ☆65 · Updated 3 weeks ago
- FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax ☆18 · Updated last year
- EditWorld: Simulating World Dynamics for Instruction-Following Image Editing ☆131 · Updated last year
- Official implementation of MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-image Synthesis ☆85 · Updated last year
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of LOVEU Workshop @ CVPR'23 ☆76 · Updated last year
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆129 · Updated 2 weeks ago
- Text-conditioned image-to-video generation based on diffusion models ☆53 · Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆100 · Updated last year
- [arXiv'25] AnyCharV: Bootstrap Controllable Character Video Generation with Fine-to-Coarse Guidance ☆39 · Updated 5 months ago
- [NeurIPS 2024 D&B Track] Implementation for "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆70 · Updated 7 months ago