GSeanCDAT / GIMM-VFI
[NeurIPS 2024] Generalizable Implicit Motion Modeling for Video Frame Interpolation
☆362 · Updated 5 months ago
Alternatives and similar repositories for GIMM-VFI
Users interested in GIMM-VFI are comparing it to the repositories listed below.
- SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training ☆457 · Updated 4 months ago
- Repo for SeedVR2 & SeedVR (CVPR 2025 Highlight) ☆718 · Updated 4 months ago
- HunyuanVideo Keyframe Control LoRA is an adapter for the HunyuanVideo T2V model for keyframe-based video generation ☆165 · Updated 7 months ago
- Community trainer for Lightricks' LTX Video model 🎬 ⚡️ ☆355 · Updated 2 weeks ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation". ☆497 · Updated 10 months ago
- Achieves high-quality first-frame guided video editing given a reference image, while maintaining flexibility for incorporating additi… ☆319 · Updated 2 months ago
- [SIGGRAPH 2025] Official code of the paper "Cobra: Efficient Line Art COlorization with BRoAder References" ☆225 · Updated 6 months ago
- The official code of the paper "LVCD: Reference-based Lineart Video Colorization with Diffusion Models" ☆194 · Updated 10 months ago
- ☆424 · Updated last year
- ☆215 · Updated 6 months ago
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset ☆267 · Updated 5 months ago
- Mobius: Text to Seamless Looping Video Generation via Latent Shift ☆165 · Updated 6 months ago
- ComfyUI nodes to edit videos using Genmo Mochi ☆295 · Updated last year
- Official implementation of "Normalized Attention Guidance" ☆171 · Updated 4 months ago
- [ICML 2025] An 8-step inversion and 8-step editing process works effectively with the FLUX-dev model. (3x speedup with results that are co… ☆279 · Updated 6 months ago
- [NeurIPS'25] One-Step Diffusion for Detail-Rich and Temporally Consistent Video Super-Resolution ☆300 · Updated last week
- ☆531 · Updated 4 months ago
- Official implementation for "DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion". ☆261 · Updated 2 weeks ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion ☆481 · Updated 3 weeks ago
- DiffuEraser is a diffusion model for video inpainting, which achieves strong content completeness and temporal consistency while maintaini… ☆557 · Updated 6 months ago
- An inference and training framework for multi-image input in Flux Kontext dev ☆417 · Updated 2 months ago
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data." ☆409 · Updated 5 months ago
- MoCha: End-to-End Video Character Replacement without Structural Guidance ☆398 · Updated last week
- Official code of VEnhancer: Generative Space-Time Enhancement for Video Generation ☆559 · Updated last year
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers ☆480 · Updated 2 months ago
- InstantIR: Blind Image Restoration with Instant Generative Reference 🔥 ☆526 · Updated last year
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation. ☆659 · Updated 2 months ago
- High-quality, training-free inpainting for every Stable Diffusion model. Supports ComfyUI. ☆673 · Updated this week
- A novel approach to Hunyuan image-to-video sampling ☆303 · Updated 9 months ago
- Enhance-A-Video: Better Generated Video for Free ☆581 · Updated 7 months ago