KyujinHan / Awesome-Training-Free-WAN2.1-Editing
Training-Free (Inversion-Free) methods meet WAN2.1-T2V
☆60 · Updated last month
Alternatives and similar repositories for Awesome-Training-Free-WAN2.1-Editing
Users interested in Awesome-Training-Free-WAN2.1-Editing are comparing it to the libraries listed below.
- Official implementation of "Perception-as-Control: Fine-grained Controllable Image Animation with 3D-aware Motion Representation" (ICCV 2… ☆80 · Updated 5 months ago
- [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via Image-to-Video Diffusion Models ☆80 · Updated 7 months ago
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆167 · Updated 2 months ago
- Official implementation of DragVideo ☆55 · Updated last year
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆101 · Updated last year
- Official PyTorch implementation of Video Motion Transfer with Diffusion Transformers ☆77 · Updated 6 months ago
- ☆91 · Updated last year
- [CVPR 2025 Oral] Alias-free Latent Diffusion Models (official implementation) ☆107 · Updated last month
- Video-GPT via Next Clip Diffusion ☆44 · Updated 8 months ago
- ☆52 · Updated last month
- Official repository for HOComp: Interaction-Aware Human-Object Composition ☆29 · Updated last month
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆128 · Updated 7 months ago
- [ICLR 2025] Trajectory Attention for Fine-grained Video Motion Control ☆96 · Updated 8 months ago
- HyperMotion, a pose-guided human image animation framework based on a large-scale video diffusion Transformer ☆134 · Updated 6 months ago
- Project repository for Any2Caption: Interpreting Any Condition to Caption for Controllable Video Generation ☆49 · Updated 9 months ago
- [CVPR'25 Highlight] Official implementation of LeviTor: 3D Trajectory Oriented Image-to-Video Synthesis ☆157 · Updated 9 months ago
- Generative Omnimatte (CVPR 2025) ☆161 · Updated 7 months ago
- Phantom-Data: Towards a General Subject-Consistent Video Generation Dataset ☆104 · Updated 2 months ago
- [ECCV 2024] RegionDrag: Fast Region-Based Image Editing with Diffusion Models ☆62 · Updated last year
- [NeurIPS 2024] COVE: Unleashing the Diffusion Feature Correspondence for Consistent Video Editing ☆25 · Updated last year
- A collection of diffusion models based on FLUX/DiT for image/video generation, editing, reconstruction, inpainting, etc. ☆85 · Updated 7 months ago
- Training-Free Text-Guided Image Editing Using Visual Autoregressive Model ☆71 · Updated 9 months ago
- Implementation code for Omni-Effects ☆173 · Updated last month
- Awesome Controllable Video Generation with Diffusion Models ☆59 · Updated 6 months ago
- ☆53 · Updated last month
- Official implementation of "Towards One-Step Causal Video Generation via Adversarial Self-Distillation" (arXiv 2025). A novel framework f… ☆23 · Updated 2 months ago
- Official repository of DreamMover ☆34 · Updated last year
- Omni Controllable Video Diffusion ☆37 · Updated last month
- Official code for the paper "Text-to-Image Rectified Flow as Plug-and-Play Priors" [ICLR 2025] ☆138 · Updated 9 months ago
- CCEdit: Creative and Controllable Video Editing via Diffusion Models ☆114 · Updated last year