alimama-creative / M3DDM-Video-Outpainting
Official repo for Hierarchical Masked 3D Diffusion Model for Video Outpainting
☆92 · Updated 11 months ago
Alternatives and similar repositories for M3DDM-Video-Outpainting:
Users interested in M3DDM-Video-Outpainting are comparing it to the repositories listed below
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆96 · Updated 4 months ago
- [AAAI 2025] Follow-Your-Canvas: This repo is the official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with…" ☆121 · Updated 6 months ago
- ☆95 · Updated 7 months ago
- [Arxiv 2024] Edicho: Consistent Image Editing in the Wild ☆114 · Updated 3 months ago
- [ARXIV'24] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆101 · Updated 3 weeks ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated 7 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆81 · Updated 9 months ago
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆113 · Updated 3 months ago
- [AAAI-2025] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆91 · Updated 9 months ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆86 · Updated last year
- [Arxiv'25] BlobCtrl: A Unified and Flexible Framework for Element-level Image Generation and Editing ☆80 · Updated 3 weeks ago
- Subjects200K dataset ☆107 · Updated 3 months ago
- The official implementation of the paper titled "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing". ☆151 · Updated 4 months ago
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model. ☆163 · Updated last year
- ☆48 · Updated 3 months ago
- [SIGGRAPH 2024] Motion I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling ☆158 · Updated 6 months ago
- [CVPR 2025] Official implementation of the paper "Generative Inbetweening through Frame-wise Conditions-Driven Video Generation" ☆88 · Updated last month
- Official code of "Edit Transfer: Learning Image Editing via Vision In-Context Relations" ☆70 · Updated this week
- UniCombine: Unified Multi-Conditional Combination with Diffusion Transformer ☆62 · Updated 3 weeks ago
- PyTorch implementation of "SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation" (CVPR 2024) ☆113 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] The official implementation of the paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆130 · Updated 6 months ago
- Official implementation of "Motion Inversion For Video Customization" ☆145 · Updated 5 months ago
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer For Video Generation ☆90 · Updated 6 months ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆71 · Updated 7 months ago
- Official implementation of "Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices" (ICML 202… ☆56 · Updated 4 months ago
- Maximize the Resolution Potential of Pre-trained Rectified Flow Transformers ☆50 · Updated 6 months ago
- MasterWeaver: Taming Editability and Face Identity for Personalized Text-to-Image Generation (ECCV 2024) ☆133 · Updated 8 months ago
- Code for FreeScale, a tuning-free method for higher-resolution visual generation ☆121 · Updated last month
- [ECCV 2024] AnyControl, a multi-control image synthesis model that supports any combination of user-provided control signals. ☆123 · Updated 9 months ago
- InstantStyle-Plus: Style Transfer with Content-Preserving in Text-to-Image Generation 🔥 ☆114 · Updated 9 months ago