kabachuha / InfiNet
Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of the ModelScope text2video model, for extremely long video generation.
☆85 · Updated 2 years ago
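For orientation, here is a minimal sketch of the coarse-to-fine "diffusion over diffusion" idea referenced above: a global pass samples sparse keyframes spanning the whole video, then local passes recursively infill frames between neighbouring keyframes until the target length is reached. This is an illustrative assumption based on the NUWA-XL description; the function names (`generate_long_video`, `sample_keyframes`, `infill_between`) are hypothetical placeholders, not the InfiNet or ModelScope API.

```python
# Hypothetical sketch of hierarchical (coarse-to-fine) long-video generation.
# The actual samplers are passed in as callables, so no real diffusion API is assumed.
from typing import Callable, List


def generate_long_video(
    prompt: str,
    total_frames: int,
    clip_len: int,
    sample_keyframes: Callable[[str, int], List[object]],
    infill_between: Callable[[str, object, object, int], List[object]],
) -> List[object]:
    """Generate a long video by first sampling sparse keyframes, then
    repeatedly infilling frames between neighbouring anchors."""
    # 1) "Global" pass: sparse keyframes covering the whole video.
    frames = sample_keyframes(prompt, clip_len)

    # 2) "Local" passes: infill between each pair of neighbouring frames,
    #    conditioned on the two anchors, until enough frames exist.
    while len(frames) < total_frames:
        next_level: List[object] = []
        for left, right in zip(frames[:-1], frames[1:]):
            next_level.append(left)
            next_level.extend(infill_between(prompt, left, right, clip_len - 2))
        next_level.append(frames[-1])
        frames = next_level
    return frames[:total_frames]


# Toy usage with stand-in samplers (integers instead of frame tensors):
frames = generate_long_video(
    "a hike through the alps",
    total_frames=64,
    clip_len=16,
    sample_keyframes=lambda p, n: list(range(n)),
    infill_between=lambda p, a, b, n: [a] * n,
)
print(len(frames))  # 64
```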
Alternatives and similar repositories for InfiNet
Users interested in InfiNet are comparing it to the libraries listed below.
- ControlNet extension of AnimateDiff. ☆53 · Updated last year
- AnimateDiff with training support ☆120 · Updated last year
- ☆72 · Updated last year
- [TOG 2024] StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter ☆235 · Updated last month
- [ICLR 2024] Code for FreeNoise based on AnimateDiff ☆106 · Updated last year
- Fine-Grained Subject-Specific Attribute Expression Control in T2I Models ☆121 · Updated 3 months ago
- A retrain of AnimateDiff to be conditional on an init image ☆33 · Updated last year
- Official implementation of UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified … ☆70 · Updated 6 months ago
- Create transparent images with Diffusers! ☆53 · Updated 3 months ago
- ☆78 · Updated last year
- This repository contains the code for the NeurIPS 2024 paper SF-V: Single Forward Video Generation Model. ☆96 · Updated 6 months ago
- [ECCV 2024] HumanRefiner: Benchmarking Abnormal Human Generation and Refining with Coarse-to-fine Pose-Reversible Guidance ☆47 · Updated 7 months ago
- AnimateDiff I2V version. ☆185 · Updated last year
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models ☆172 · Updated last year
- ☆87 · Updated 8 months ago
- Official implementation for "pOps: Photo-Inspired Diffusion Operators" ☆81 · Updated 10 months ago
- Official Pytorch Implementation for "VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with … ☆120 · Updated last year
- ☆91 · Updated last year
- ControlAnimate Library ☆48 · Updated last year
- Official Implementation for "A Neural Space-Time Representation for Text-to-Image Personalization" (SIGGRAPH Asia 2023) ☆177 · Updated last year
- Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image Diffusion Models (ICLR 2024) ☆138 · Updated last year
- Magic Mirror: ID-Preserved Video Generation in Video Diffusion Transformers ☆117 · Updated 4 months ago
- I2V-Adapter: A General Image-to-Video Adapter for Video Diffusion Models ☆204 · Updated last year
- [NeurIPS 2023 Spotlight] Real-World Image Variation by Aligning Diffusion Inversion Chain ☆151 · Updated last year
- Official implementation of the paper "Improving Sample Quality of Diffusion Models Using Self-Attention Guidance" (ICCV '23) ☆117 · Updated 9 months ago
- [ECCV 2024] Official PyTorch implementation of "Getting it Right: Improving Spatial Consistency in Text-to-Image Models" ☆99 · Updated 10 months ago
- Official Pytorch Implementation for "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer" ☆176 · Updated last year
- [SIGGRAPH Asia 2024] TrailBlazer: Trajectory Control for Diffusion-Based Video Generation ☆99 · Updated last year
- Implementation of "SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing" ☆85 · Updated last year
- [NeurIPS 2024] RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance ☆128 · Updated 7 months ago