wentianli / awesome-video-editing
A paper list on video editing (in a cinematographic sense) and its related computer vision tasks.
☆56Updated 5 months ago
Alternatives and similar repositories for awesome-video-editing
Users interested in awesome-video-editing are comparing it to the repositories listed below
- Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translati…☆259Updated 5 months ago
 - (CVPR 2024) Official code for the paper "Towards Language-Driven Video Inpainting via Multimodal Large Language Models"☆99Updated last year
 - A new multi-shot video understanding benchmark Shot2Story with comprehensive video summaries and detailed shot-level captions.☆157Updated 9 months ago
 - [CVPR2024] MotionEditor is the first diffusion-based model capable of video motion editing.☆180Updated last month
 - [WACV 2025] Follow-Your-Handle: This repo is the official implementation of "MagicStick: Controllable Video Editing via Control Handle Tr…☆95Updated last year
 - [AAAI 2025] Official pytorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion …☆160Updated last year
 - The official implementation of the paper titled "StableV2V: Stablizing Shape Consistency in Video-to-Video Editing".☆162Updated 10 months ago
 - [AAAI 2025] Follow-Your-Canvas: This repo is the official implementation of "Follow-Your-Canvas: Higher-Resolution Video Outpainting with…☆150Updated 2 months ago
 - [IJCAI 2025 (Oral)] Official implementation of the paper "MagicTailor: Component-Controllable Personalization in Text-to-Image Diffusion …☆100Updated 5 months ago
 - [NeurIPS 2025] UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions☆67Updated 3 months ago
 - Finetuning and inference tools for the CogView4 and CogVideoX model series.☆100Updated 5 months ago
 - ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024]☆254Updated last year
 - The official implementation of the paper: DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting☆120Updated 10 months ago
 - code for "MVOC:atraining-free multiple video object composition method with diffusion models"☆23Updated last year
 - [ICCV 2025] CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative Layout-to-Image Generation☆116Updated 2 months ago
 - [NeurIPS 2024] VidProM: A Million-scale Real Prompt-Gallery Dataset for Text-to-Video Diffusion Models☆163Updated last year
 - [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation☆143Updated 3 months ago
 - Official Repository for Sakuga-42M Dataset☆63Updated last year
 - TextCrafter: Accurately Rendering Multiple Texts in Complex Visual Scenes☆81Updated 2 months ago
 - ☆92Updated 8 months ago
 - [ICCV2025] DCM: Dual-Expert Consistency Model for Efficient and High-Quality Video Generation☆195Updated 4 months ago
 - [ICCV 2025] Code & Data for: SuperEdit - Rectifying and Facilitating Supervision for Instruction-Based Image Editing☆160Updated 4 months ago
 - [ICCV 2025] MagicMirror: ID-Preserved Video Generation in Video Diffusion Transformers☆126Updated 4 months ago
 - 【CVPR 2025 Oral】Official Repo for Paper "AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea"☆194Updated 6 months ago
 - ☆124Updated last year
 - [ICLR 2024] LLM-grounded Video Diffusion Models (LVD): official implementation for the LVD paper☆158Updated last year
 - [NeurIPS 2024] Official Implementation of CLIPAway☆101Updated 5 months ago
 - Implementation of InstructEdit☆74Updated 2 years ago
 - Nano-consistent-150k☆221Updated 2 weeks ago
 - Official Repo for Paper "OmniEdit: Building Image Editing Generalist Models Through Specialist Supervision" [ICLR2025]☆129Updated 9 months ago