dajes/frame-interpolation-pytorch
PyTorch implementation of FILM: Frame Interpolation for Large Motion (ECCV 2022).
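Frame interpolation models such as FILM generally take two input frames plus a timestep in [0, 1] and return the in-between frame. The sketch below illustrates that interface shape only; `InterpolatorStub` is a hypothetical placeholder that linearly blends the frames, not the repository's actual model or API.

```python
import torch
import torch.nn as nn


class InterpolatorStub(nn.Module):
    """Hypothetical stand-in for a trained frame-interpolation model.

    A real FILM model estimates motion between the frames; this stub
    just linearly blends them at the requested timestep.
    """

    def forward(self, frame0: torch.Tensor, frame1: torch.Tensor, t: float) -> torch.Tensor:
        return (1.0 - t) * frame0 + t * frame1


model = InterpolatorStub().eval()

# Two RGB frames: batch of 1, 3 channels, 256x256 pixels.
frame0 = torch.rand(1, 3, 256, 256)
frame1 = torch.rand(1, 3, 256, 256)

with torch.no_grad():
    # t=0.5 requests the frame halfway between the two inputs.
    mid = model(frame0, frame1, t=0.5)

print(mid.shape)  # torch.Size([1, 3, 256, 256])
```

A real model would be loaded from a checkpoint rather than constructed fresh, but the call pattern (two frames and a timestep in, one frame out) is the common shape for this task.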
Related projects:
- AnimationDiff with train
- Official PyTorch implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with …"
- Official implementation of "Inserting Anybody in Diffusion Models via Celeb Basis"
- Stylizing Video by Example (Jamriska et al., 2019)
- AnimateDiff, I2V version
- Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models
- A simple extension of ControlNet for color conditioning
- StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter (SIGGRAPH Asia 2024, Journal Track)
- Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, as a ControlNet-like module on top of ModelScope text2…
- Official implementation for "A Neural Space-Time Representation for Text-to-Image Personalization" (SIGGRAPH Asia 2023)
- Proof of concept for controlling landmarks in diffusion models
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
- Official code for "AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation" (CVPR 2023)
- InstantStyle-Plus: Style Transfer with Content Preservation in Text-to-Image Generation
- ControlLoRA Version 2: A Lightweight Neural Network To Control Stable Diffusion Spatial Information
- Code for AVID: Any-Length Video Inpainting with Diffusion Model
- Official PyTorch implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer"
- SigLIP-based Aesthetic Score Predictor
- ControlNet extension of AnimateDiff
- img2img version of Stable Diffusion: anime character remix, automatic line-art coloring, style transfer
- Transfer the T2I-Adapter to any base model in diffusers
- A retrain of AnimateDiff conditioned on an init image
- Improved Diffusion-based Image Colorization via Piggybacked Models