jianzongwu / Language-Driven-Video-Inpainting
(CVPR 2024) Official code for the paper "Towards Language-Driven Video Inpainting via Multimodal Large Language Models"
☆64 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for Language-Driven-Video-Inpainting
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆61 · Updated last month
- [NeurIPS 2024 Spotlight] The official implementation of the research paper "MotionBooth: Motion-Aware Customized Text-to-Video Generation" ☆108 · Updated last month
- [CVPR 2024, Oral] Attention Calibration for Disentangled Text-to-Image Personalization ☆84 · Updated 6 months ago
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆87 · Updated 3 months ago
- [NeurIPS 2024] Motion Consistency Model: Accelerating Video Diffusion with Disentangled Motion-Appearance Distillation ☆48 · Updated last week
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆80 · Updated 2 months ago
- Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆77 · Updated 3 months ago
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆91 · Updated 8 months ago
- Official GitHub repository for the Text-Guided Video Editing (TGVE) competition of the LOVEU Workshop @ CVPR'23 ☆71 · Updated last year
- [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator ☆90 · Updated 7 months ago
- CAR: Controllable AutoRegressive Modeling for Visual Generation ☆48 · Updated last month
- [CVPR 2024] Official PyTorch implementation of FreeCustom: Tuning-Free Customized Image Generation for Multi-Concept Composition ☆109 · Updated 2 months ago
- [NeurIPS 2024] EvolveDirector: Approaching Advanced Text-to-Image Generation with Large Vision-Language Models ☆40 · Updated 3 weeks ago
- [CVPR 2024] CapHuman: Capture Your Moments in Parallel Universes ☆91 · Updated 3 months ago
- Text-conditioned image-to-video generation based on diffusion models ☆34 · Updated 4 months ago
- Official implementation of the paper "ClassDiffusion: More Aligned Personalization Tuning with Explicit Class Guidance" ☆33 · Updated 4 months ago
- [ICLR 2024] MaGIC: Multi-modality Guided Image Completion ☆46 · Updated 6 months ago
- Code for the paper "Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models" ☆39 · Updated last year
- Official implementation of the paper "MotionCrafter: One-Shot Motion Customization of Diffusion Models" ☆25 · Updated 10 months ago
- Training-Free Condition-Guided Text-to-Video Generation ☆57 · Updated 10 months ago
- EVA: Zero-shot Accurate Attributes and Multi-Object Video Editing ☆27 · Updated 7 months ago
- Directed Diffusion: Direct Control of Object Placement through Attention Guidance (AAAI 2024) ☆76 · Updated 8 months ago
- [CVPR 2024] Official code for Drag Your Noise: Interactive Point-based Editing via Diffusion Semantic Propagation ☆81 · Updated 6 months ago
- T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-Video Generation ☆46 · Updated 2 months ago
- [NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models ☆106 · Updated last month