Francis-Rings / MotionEditor
[CVPR 2024] MotionEditor is the first diffusion-based model capable of video motion editing.
☆138 · Updated 4 months ago
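For orientation, the sketch below shows a generic pose-conditioned diffusion pipeline using HuggingFace diffusers. It is purely illustrative of the kind of diffusion backbone and control signal that motion-editing work in this list builds on; it is not MotionEditor's API, and the checkpoints, prompt, and pose-image path are assumptions.

```python
# Hypothetical illustration only: a generic pose-conditioned diffusion pipeline
# built with HuggingFace diffusers. This is NOT MotionEditor's interface; see
# the repository itself for its actual setup and inference scripts.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumed checkpoints: a pose ControlNet and a Stable Diffusion 1.5 base model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder pose map; in video motion editing, a control signal like this
# would be derived from the target motion sequence, one frame at a time.
pose = Image.open("pose_frame_0000.png")

frame = pipe(
    "a person dancing in a park, photorealistic",
    image=pose,
    num_inference_steps=30,
).images[0]
frame.save("edited_frame_0000.png")
```

Per-frame control like this is only a rough analogue; the video-editing methods listed below generally add temporal consistency and source-content preservation on top of such image-level backbones, so refer to each repository for its own usage.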
Related projects
Alternatives and complementary repositories for MotionEditor
- [CVPR 2024] EvalCrafter: Benchmarking and Evaluating Large Video Generation Models ☆141 · Updated last month
- [CVPR 2024 Highlight] Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis ☆189 · Updated 7 months ago
- Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆78 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆110 · Updated last month
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing ☆91 · Updated 2 weeks ago
- (CVPR 2024) Official code for the paper "Towards Language-Driven Video Inpainting via Multimodal Large Language Models" ☆68 · Updated 7 months ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆81 · Updated 7 months ago
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆110 · Updated 2 months ago
- Text-conditioned image-to-video generation based on diffusion models. ☆35 · Updated 5 months ago
- Training-Free Condition-Guided Text-to-Video Generation ☆57 · Updated 10 months ago
- The official repository for the ECCV 2024 paper "RegionDrag: Fast Region-Based Image Editing with Diffusion Models" ☆34 · Updated last month
- T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-Video Generation ☆47 · Updated 2 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆76 · Updated 4 months ago
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆84 · Updated 7 months ago
- [CVPR 2024] VideoBooth: Diffusion-based Video Generation with Image Prompts ☆270 · Updated 5 months ago
- This repository contains the code for the CVPR 2024 paper AVID: Any-Length Video Inpainting with Diffusion Model. ☆145 · Updated 8 months ago
- [CVPR 2024] LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation ☆267 · Updated 6 months ago
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆73 · Updated last year
- The HD-VG-130M Dataset ☆108 · Updated 7 months ago
- [NeurIPS 2024] CV-VAE: A Compatible Video VAE for Latent Generative Video Models ☆243 · Updated 2 weeks ago
- Official repo for Tuning-Free Noise Rectification for High Fidelity Image-to-Video Generation ☆26 · Updated 7 months ago
- [arXiv 2024] Official PyTorch implementation of "VideoElevator: Elevating Video Generation Quality with Versatile Text-to-Image Diffusion…" ☆147 · Updated 7 months ago
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆91 · Updated 8 months ago
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆130 · Updated 6 months ago
- [CVPR 2024] When StyleGAN Meets Stable Diffusion: a W+ Adapter for Personalized Image Generation ☆119 · Updated 3 months ago