baaivision / vid2vid-zero
Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models
☆356 · Updated 2 years ago
Alternatives and similar repositories for vid2vid-zero
Users interested in vid2vid-zero are comparing it to the repositories listed below
- Video-P2P: Video Editing with Cross-attention Control ☆424 · Updated 6 months ago
- Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning ☆314 · Updated last year
- Official implementation for "ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing" ☆231 · Updated 2 years ago
- The official implementation for "Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising". ☆305 · Updated 2 months ago
- Official Implementation of "Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models" ☆401 · Updated 2 years ago
- [CVPR 2024] PAIR Diffusion: A Comprehensive Multimodal Object-Level Image Editor ☆522 · Updated last year
- LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation ☆500 · Updated last year
- [ICLR 2024] Code for FreeNoise based on VideoCrafter ☆426 · Updated 4 months ago
- [SIGGRAPH Asia 2024] ReVersion: Diffusion-Based Relation Inversion from Images ☆506 · Updated 3 months ago
- Make-A-Protagonist: Generic Video Editing with An Ensemble of Experts ☆323 · Updated 2 years ago
- ELITE: Encoding Visual Concepts into Textual Embeddings for Customized Text-to-Image Generation (ICCV 2023, Oral) ☆542 · Updated 2 years ago
- NeurIPS 2023, Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models ☆428 · Updated last year
- [WACV 2024] Training-Free Layout Control with Cross-Attention Guidance ☆266 · Updated last year
- ☆475 · Updated 6 months ago
- [NeurIPS'23] "MagicBrush: A Manually Annotated Dataset for Instruction-Guided Image Editing". ☆398 · Updated 10 months ago
- [ICCV 2023] BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion ☆274 · Updated last year
- PyTorch implementation of InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. ☆441 · Updated last year
- ☆238 · Updated 2 years ago
- Implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing ☆226 · Updated 2 years ago
- [NeurIPS 2023] Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models ☆665 · Updated last year
- Retrieval-Augmented Video Generation for Telling a Story ☆259 · Updated last year
- [ICLR 2024] Official PyTorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" ☆854 · Updated 2 years ago
- Official PyTorch Implementation for "VidToMe: Video Token Merging for Zero-Shot Video Editing" (CVPR 2024) ☆229 · Updated 11 months ago
- [IEEE TVCG 2024] Customized Video Generation Using Textual and Structural Guidance ☆195 · Updated last year
- This is an unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style. ☆225 · Updated 2 years ago
- Official PyTorch implementation of the paper "In-Context Learning Unlocked for Diffusion Models" ☆413 · Updated last year
- ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] ☆256 · Updated last year
- Official implementation for the paper "LivePhoto: Real Image Animation with Text-guided Motion Control" ☆196 · Updated last month
- This repository contains the code for the CVPR 2023 paper SINE: SINgle Image Editing with Text-to-Image Diffusion Models. ☆189 · Updated 2 years ago
- Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis ☆320 · Updated 2 years ago