jianzongwu / Language-Driven-Video-Inpainting
(CVPR 2024) Official code for the paper "Towards Language-Driven Video Inpainting via Multimodal Large Language Models"
☆99 · Updated last year
Alternatives and similar repositories for Language-Driven-Video-Inpainting
Users interested in Language-Driven-Video-Inpainting are comparing it to the libraries listed below.
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated last year
- [CVPR 2024] CapHuman: Capture Your Moments in Parallel Universes ☆98 · Updated 11 months ago
- This repo contains the code for the PreciseControl project [ECCV'24] ☆68 · Updated last year
- Implementation code of the paper MIGE: A Unified Framework for Multimodal Instruction-Based Image Generation and Editing ☆69 · Updated 3 months ago
- CCEdit: Creative and Controllable Video Editing via Diffusion Models ☆114 · Updated last year
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model ☆172 · Updated last year
- Code for the paper "Pix2Video: Video Editing using Image Diffusion" ☆74 · Updated 2 years ago
- 🏞️ Official implementation of "Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition" ☆108 · Updated last year
- [ICLR 2024] Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach. Link: https://arxiv.o… ☆83 · Updated last year
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing ☆111 · Updated 6 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆108 · Updated last month
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆105 · Updated last year
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆143 · Updated 3 months ago
- ☆33 · Updated 11 months ago
- Code for ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆105 · Updated last year
- [CVPR 2024] MotionEditor is the first diffusion-based model capable of video motion editing ☆180 · Updated last month
- [NeurIPS 2024] Official Implementation of CLIPAway ☆101 · Updated 4 months ago
- Interactive Video Generation via Masked-Diffusion ☆82 · Updated last year
- [ICLR 2025] HQ-Edit: A High-Quality and High-Coverage Dataset for General Image Editing ☆111 · Updated last year
- [CVPR 2024] Official implementation of High-fidelity Person-centric Subject-to-Image Synthesis ☆54 · Updated 8 months ago
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆87 · Updated last year
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆86 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆96 · Updated last year
- [CVPR 2024 Highlight] Official repo: SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing ☆51 · Updated last year
- [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆126 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] Official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆138 · Updated last year
- ☆104 · Updated last year
- Text-conditioned image-to-video generation based on diffusion models ☆55 · Updated last year
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆169 · Updated 2 months ago
- CVPR-24 | Official codebase for ZONE: Zero-shot InstructiON-guided Local Editing ☆80 · Updated 11 months ago