xizaoqu / MOFT
[NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller
☆49 · Updated 4 months ago
Alternatives and similar repositories for MOFT
Users interested in MOFT are comparing it to the repositories listed below.
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆76 · Updated 4 months ago
- [CVPR'25 - Rating 555] Official PyTorch implementation of Lumos: Learning Visual Generative Priors without Text ☆53 · Updated 9 months ago
- Benchmark dataset and code of MSRVTT-Personalization ☆53 · Updated last month
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆108 · Updated 3 months ago
- Official PyTorch implementation of DiffMoE, TC-DiT, EC-DiT and Dense DiT ☆155 · Updated 2 months ago
- Official code repo of the CVPR 2025 paper PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation ☆56 · Updated 4 months ago
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control ☆95 · Updated 7 months ago
- Diffusion Powers Video Tokenizer for Comprehension and Generation (CVPR 2025) ☆85 · Updated 9 months ago
- The official implementation of the paper "VChain: Chain-of-Visual-Thought for Reasoning in Video Generation" ☆109 · Updated 2 months ago
- Official implementation of VideoDPO ☆151 · Updated 6 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated last year
- ☆51 · Updated last year
- Official implementation of LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment ☆84 · Updated 7 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆24 · Updated 8 months ago
- Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" (ICML 2025) ☆51 · Updated 7 months ago
- Official PyTorch Implementation of "Latent Denoising Makes Good Visual Tokenizers" ☆164 · Updated 2 months ago
- [NeurIPS 2025] VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models ☆135 · Updated last month
- ☆90 · Updated last year
- Video-GPT via Next Clip Diffusion ☆43 · Updated 6 months ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆70 · Updated last year
- ☆34 · Updated last year
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation ☆24 · Updated last year
- CVPRW 2025 paper Progressive Autoregressive Video Diffusion Models: https://arxiv.org/abs/2410.08151 ☆87 · Updated 7 months ago
- [ICCV 2025] VEGGIE: Instructional Editing and Reasoning of Video Concepts with Grounded Generation ☆29 · Updated 4 months ago
- VideoAuteur: Towards Long Narrative Video Generation ☆43 · Updated 2 months ago
- ☆47 · Updated 8 months ago
- ☆67 · Updated 4 months ago
- Training-Free Condition-Guided Text-to-Video Generation ☆61 · Updated 2 months ago
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆59 · Updated 11 months ago
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆99 · Updated last month