GSeanCDAT / GIMM-VFI
[NeurIPS 2024] Generalizable Implicit Motion Modeling for Video Frame Interpolation
☆350Updated 4 months ago
Alternatives and similar repositories for GIMM-VFI
Users interested in GIMM-VFI are comparing it to the libraries listed below.
- Repo for SeedVR2 & SeedVR (CVPR2025 Highlight)☆623Updated 3 months ago
- SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training☆398Updated 3 months ago
- [SIGGRAPH 2025] Official code of the paper "Cobra: Efficient Line Art COlorization with BRoAder References"☆216Updated 5 months ago
- HunyuanVideo Keyframe Control LoRA is an adapter for the HunyuanVideo T2V model enabling keyframe-based video generation☆163Updated 6 months ago
- The official code of paper "LVCD: Reference-based Lineart Video Colorization with Diffusion Models"☆193Updated 8 months ago
- Achieves high-quality first-frame-guided video editing given a reference image, while maintaining flexibility for incorporating additi…☆311Updated last month
- Community trainer for Lightricks' LTX Video model 🎬 ⚡️☆324Updated 2 months ago
- [ICLR'25] Official PyTorch implementation of "Framer: Interactive Frame Interpolation".☆494Updated 8 months ago
- ☆211Updated 4 months ago
- An inference and training framework for multiple image input in Flux Kontext dev☆402Updated last month
- [NeurIPS'25] One-Step Diffusion for Detail-Rich and Temporally Consistent Video Super-Resolution☆277Updated 2 weeks ago
- Official code for AccVideo: Accelerating Video Diffusion Model with Synthetic Dataset☆262Updated 3 months ago
- ComfyUI nodes to edit videos using Genmo Mochi☆294Updated 11 months ago
- Mobius: Text to Seamless Looping Video Generation via Latent Shift☆165Updated 4 months ago
- Calligrapher: Freestyle Text Image Customization☆291Updated last month
- Official implementation of "Normalized Attention Guidance"☆166Updated 3 months ago
- ☆420Updated 11 months ago
- Stand-In is a lightweight, plug-and-play framework for identity-preserving video generation.☆643Updated last month
- [ICML2025] An 8-step inversion and 8-step editing process works effectively with the FLUX-dev model. (3x speedup with results that are co…☆277Updated 5 months ago
- High-quality, training-free inpainting for every Stable Diffusion model. Supports ComfyUI☆550Updated last week
- Qwen-Image text-to-image LoRA trainer☆436Updated last week
- ☆521Updated 3 months ago
- A novel approach to Hunyuan image-to-video sampling☆305Updated 8 months ago
- Official code of VEnhancer: Generative Space-Time Enhancement for Video Generation☆556Updated last year
- FantasyPortrait: Enhancing Multi-Character Portrait Animation with Expression-Augmented Diffusion Transformers☆467Updated last month
- The official code implementation of the paper "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data."☆404Updated 3 months ago
- [ICCV 2025] Light-A-Video: Training-free Video Relighting via Progressive Light Fusion☆468Updated 3 months ago
- [ECCV 2024] Be-Your-Outpainter https://arxiv.org/abs/2403.13745☆249Updated 5 months ago
- Any-to-Bokeh is a novel one-step video bokeh framework that converts arbitrary input videos into temporally coherent, depth-aware bokeh e…☆112Updated 2 months ago
- DiffuEraser is a diffusion model for video inpainting that delivers strong content completeness and temporal consistency while maintaini…☆525Updated 5 months ago