mayuelala / FollowYourMotion
☆27 · Updated 2 months ago
Alternatives and similar repositories for FollowYourMotion
Users interested in FollowYourMotion are comparing it to the repositories listed below.
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆45 · Updated 3 weeks ago
- Official code repo of the CVPR 2025 paper "PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation" ☆43 · Updated last month
- Official implementation of VideoGen-of-Thought: Step-by-step generation of multi-shot videos with minimal manual intervention ☆39 · Updated 4 months ago
- PyTorch implementation of DiffMoE, TC-DiT, EC-DiT and Dense DiT ☆123 · Updated 4 months ago
- Official implementation of LiFT: Leveraging Human Feedback for Text-to-Video Model Alignment ☆80 · Updated 3 months ago
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆71 · Updated last month
- Benchmark dataset and code for MSRVTT-Personalization ☆46 · Updated 2 months ago
- Video Generation, Physical Commonsense, Semantic Adherence, VideoCon-Physics ☆145 · Updated 3 months ago
- [EMNLP 2024] Official repo for "VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation" ☆95 · Updated 6 months ago
- ☆45 · Updated 4 months ago
- [NeurIPS 2024] Official implementation of the research paper "FreeLong: Training-Free Long Video Generation with SpectralBlend Temporal Atten…" ☆56 · Updated last month
- Lumos Project: Frontier generative model research by Alibaba DAMO Academy, including Lumos-1, etc. ☆127 · Updated last month
- Official implementation of VideoDPO ☆135 · Updated 2 months ago
- Training-free Guidance in Text-to-Video Generation via Multimodal Planning and Structured Noise Initialization ☆23 · Updated 4 months ago
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation of the LVD paper ☆156 · Updated last year
- [NeurIPS 2024] COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing ☆24 · Updated 8 months ago
- [ICML 2025] Code release for "PISA Experiments: Exploring Physics Post-Training for Video Diffusion Models by Watching Stuff Drop" ☆39 · Updated 3 months ago
- A list of works on video generation towards world models ☆164 · Updated 2 weeks ago
- GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning ☆94 · Updated 3 months ago
- [CVPR 2025] T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-Video Generation ☆91 · Updated 2 months ago
- Implementation of "S^2-Guidance: Stochastic Self Guidance for Training-Free Enhancement of Diffusion Models" ☆98 · Updated last week
- VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models ☆56 · Updated 2 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated 11 months ago
- [CVPR 2025 Highlight] PAR: Parallelized Autoregressive Visual Generation. https://yuqingwang1029.github.io/PAR-project ☆172 · Updated 5 months ago
- [ICLR 2025] Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" ☆57 · Updated 7 months ago
- ☆50 · Updated 8 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated last year
- [CVPR 2025] A framework named B^2-DiffuRL for RL-based diffusion model fine-tuning ☆35 · Updated 5 months ago
- [CVPRW 2025] Progressive Autoregressive Video Diffusion Models: https://arxiv.org/abs/2410.08151 ☆82 · Updated 3 months ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆67 · Updated 10 months ago