wangqiang9 / Awesome-Controllable-Video-Diffusion
Awesome Controllable Video Generation with Diffusion Models
☆40 · Updated last week
Alternatives and similar repositories for Awesome-Controllable-Video-Diffusion:
Users interested in Awesome-Controllable-Video-Diffusion are comparing it to the repositories listed below.
- [ACM MM 2024] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆90 · Updated 6 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆104 · Updated 9 months ago
- [AAAI 2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆91 · Updated 9 months ago
- [NeurIPS 2024 Spotlight] Official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆130 · Updated 6 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆100 · Updated last year
- Concat-ID: Towards Universal Identity-Preserving Video Synthesis ☆36 · Updated last month
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆45 · Updated 3 weeks ago
- Project repository for 'Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation' ☆31 · Updated 3 weeks ago
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" ☆54 · Updated 3 weeks ago
- [arXiv'24] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆103 · Updated 3 weeks ago
- [CVPR'25 Highlight] Official implementation of LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis ☆139 · Updated last week
- [AAAI 2025] Official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with…" ☆123 · Updated 6 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆115 · Updated 3 months ago
- Interactive Video Generation via Masked-Diffusion ☆80 · Updated last year
- MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance ☆109 · Updated last week
- Official repo of the paper "CamI2V: Camera-Controlled Image-to-Video Diffusion Model" ☆125 · Updated last month
- PyTorch implementation of DiffMoE, TC-DiT, EC-DiT and Dense DiT ☆73 · Updated last week
- [arXiv'25] BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing ☆84 · Updated last month
- [ICLR 2025] Trajectory Attention For Fine-grained Video Motion Control ☆69 · Updated last week
- CCEdit: Creative and Controllable Video Editing via Diffusion Models ☆109 · Updated 10 months ago
- Subjects200K dataset ☆107 · Updated 3 months ago
- ObjCtrl-2.5D ☆43 · Updated 3 weeks ago
- Official implementation of 'Motion Inversion For Video Customization' ☆145 · Updated 6 months ago
- Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion ☆39 · Updated 2 months ago
- UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer ☆76 · Updated last month
- EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing ☆28 · Updated last year
- [CVPR 2025 Oral] Alias-free Latent Diffusion Models (official implementation) ☆74 · Updated last month
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆162 · Updated 6 months ago