vidim-interpolation / vidim-interpolation.github.io
☆12 · Updated last year
Alternatives and similar repositories for vidim-interpolation.github.io
Users interested in vidim-interpolation.github.io are comparing it to the libraries listed below:
- ☆43 · Updated last year
- [ECCV 2024] Noise Calibration: Plug-and-play Content-Preserving Video Enhancement using Pre-trained Video Diffusion Models ☆88 · Updated last year
- [Unofficial Implementation] Subject-driven Video Generation via Disentangled Identity and Motion ☆57 · Updated last month
- Code for the ICLR 2024 paper "Motion Guidance: Diffusion-Based Image Editing with Differentiable Motion Estimators" ☆108 · Updated last month
- [ACM MM24] MotionMaster: Training-free Camera Motion Transfer for Video Generation ☆99 · Updated last year
- HyperMotion is a pose-guided human image animation framework based on a large-scale video diffusion Transformer. ☆134 · Updated 6 months ago
- Official code for VividPose: Advancing Stable Video Diffusion for Realistic Human Image Animation. ☆86 · Updated last year
- [CVPR 2025] High-Fidelity Relightable Monocular Portrait Animation with Lighting-Controllable Video Diffusion Model ☆59 · Updated 8 months ago
- StyleCineGAN: Landscape Cinemagraph Generation using a Pre-trained StyleGAN [CVPR 2024] ☆45 · Updated last year
- The official repository of DreamMover ☆34 · Updated last year
- Consistent Human Image and Video Generation with Spatially Conditioned Diffusion ☆15 · Updated 5 months ago
- Code for StyleCrafter on SDXL ☆20 · Updated last year
- [CVPR'25] StyleMaster: Stylize Your Video with Artistic Generation and Translation ☆168 · Updated 2 months ago
- Implementation of "Disentangled Motion Modeling for Video Frame Interpolation", AAAI 2025 ☆124 · Updated 9 months ago
- [AAAI'25] Official implementation of Image Conductor: Precision Control for Interactive Video Synthesis ☆101 · Updated last year
- [CVPR 2024] BIVDiff: A Training-free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models ☆75 · Updated last year
- This repository contains the code for the CVPR 2024 paper "AVID: Any-Length Video Inpainting with Diffusion Model". ☆175 · Updated last year
- ☆67 · Updated last year
- This repository contains the code for the NeurIPS 2024 paper "SF-V: Single Forward Video Generation Model". ☆99 · Updated last year
- ☆49 · Updated last year
- MotionShop: Zero-Shot Motion Transfer in Video Diffusion Models with Mixture of Score Guidance ☆26 · Updated last year
- ☆30 · Updated 9 months ago
- A toolkit for computing Video Fréchet Inception Distance (VFID) metrics. ☆11 · Updated last year
- [CVPR'24 - Rebuttal Score 554] GenN2N: Generative NeRF2NeRF Translation ☆100 · Updated last year
- ☆91 · Updated last year
- CustomDiffusion360: Customizing Text-to-Image Diffusion with Camera Viewpoint Control ☆172 · Updated last year
- Code for FreeTraj, a tuning-free method for trajectory-controllable video generation ☆111 · Updated 4 months ago
- [WACV 2024] Customizing 360-Degree Panoramas through Text-to-Image Diffusion Models ☆45 · Updated last year
- This repo contains the code for the PreciseControl project [ECCV'24] ☆69 · Updated last year
- [ACM MM 2023] Official implementation of "Hierarchical Masked 3D Diffusion Model for Video Outpainting" ☆108 · Updated last year