sfanxiang / videoshop
Videoshop: Localized Semantic Video Editing with Noise-Extrapolated Diffusion Inversion
☆43 · Updated last year
Alternatives and similar repositories for videoshop
Users interested in videoshop are comparing it to the repositories listed below.
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆106 · Updated this week
- [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper ☆156 · Updated last year
- [NeurIPS 2024 Spotlight] Official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆137 · Updated 11 months ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆67 · Updated 10 months ago
- CVPRW 2025 paper "Progressive Autoregressive Video Diffusion Models": https://arxiv.org/abs/2410.08151 ☆83 · Updated 4 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆105 · Updated last year
- ☆79 · Updated 2 years ago
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated last year
- Collaborative Score Distillation for Consistent Visual Synthesis (NeurIPS 2023) ☆119 · Updated last year
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆109 · Updated last year
- Official source code of "TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation" (ICLR 2025) ☆57 · Updated 7 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆106 · Updated last year
- ☆85 · Updated last year
- Interactive Video Generation via Masked-Diffusion ☆83 · Updated last year
- ☆64 · Updated last year
- [ACM Multimedia 2025 Datasets Track] EditWorld: Simulating World Dynamics for Instruction-Following Image Editing ☆135 · Updated last month
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆95 · Updated last year
- Official PyTorch implementation of "Video Motion Transfer with Diffusion Transformers" ☆71 · Updated last month
- [NeurIPS 2024] Video Diffusion Models are Training-free Motion Interpreter and Controller ☆46 · Updated last month
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) ☆80 · Updated last year
- Official implementation of "LOVECon: Text-driven Training-free Long Video Editing with ControlNet" ☆43 · Updated last year
- [ECCV 2024 Oral] ConceptExpress: Harnessing Diffusion Models for Single-image Unsupervised Concept Extraction ☆72 · Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆100 · Updated last year
- [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation ☆24 · Updated 11 months ago
- [ICCV 2023 Oral, Best Paper Finalist] ITI-GEN: Inclusive Text-to-Image Generation ☆68 · Updated last year
- Official project of the paper "MagDiff: Multi-Alignment Diffusion for High-Fidelity Video Generation and Editing" ☆29 · Updated 8 months ago
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆28 · Updated last year
- [ICLR 2024] Official PyTorch/Diffusers implementation of "Object-aware Inversion and Reassembly for Image Editing" ☆88 · Updated last year
- [AAAI'25] Official implementation of "Image Conductor: Precision Control for Interactive Video Synthesis" ☆96 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆139 · Updated last year