yhZhai / mcm
[NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation
☆64 · Updated 7 months ago
Alternatives and similar repositories for mcm
Users interested in mcm are comparing it to the libraries listed below.
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆103 · Updated last year
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated 11 months ago
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆40 · Updated last year
- Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" [EMNLP 2024] ☆91 · Updated 4 months ago
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆76 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of the paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆133 · Updated 8 months ago
- Official source code for "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆49 · Updated 5 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆49 · Updated 8 months ago
- T2VScore: Towards A Better Metric for Text-to-Video Generation ☆80 · Updated last year
- Official PyTorch implementation of "Video Motion Transfer with Diffusion Transformers" ☆60 · Updated 2 months ago
- [CVPR 2024] InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization ☆63 · Updated last year
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆157 · Updated 4 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆117 · Updated 5 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆103 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆94 · Updated last year
- ICCV2023-Diffusion-Papers ☆108 · Updated last year
- Official implementation of "MARS: Mixture of Auto-Regressive Models for Fine-grained Text-to-Image Synthesis" ☆85 · Updated 11 months ago
- Training-Free Condition-Guided Text-to-Video Generation ☆61 · Updated 2 months ago
- Official repository of "Spectral Motion Alignment for Video Motion Transfer using Diffusion Models" ☆27 · Updated 6 months ago
- Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation ☆38 · Updated last year
- LoRA-Composer: Leveraging Low-Rank Adaptation for Multi-Concept Customization in Training-Free Diffusion Models ☆63 · Updated 10 months ago
- [NeurIPS 2024 D&B Track] Implementation of "FiVA: Fine-grained Visual Attribute Dataset for Text-to-Image Diffusion Models" ☆70 · Updated 5 months ago
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆41 · Updated 2 months ago
- ☆78 · Updated last year
- Reflect-DiT: Inference-Time Scaling for Text-to-Image Diffusion Transformers via In-Context Reflection ☆32 · Updated 2 months ago
- [CVPRW 2025] Progressive Autoregressive Video Diffusion Models (https://arxiv.org/abs/2410.08151) ☆72 · Updated last month
- [CVPR 2024] On the Content Bias in Fréchet Video Distance ☆117 · Updated 8 months ago
- Implementation of the paper "MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing" ☆64 · Updated 2 weeks ago
- Video-GPT via Next Clip Diffusion ☆36 · Updated 3 weeks ago
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆107 · Updated last year