jeremyssocial / EzEb
EbSynth is hard to use... Lots of turning videos into image sequences, resizing style images to fit the original frames, renaming the style images to match the original frames' names, and lots more. I didn't want to do that every single time, so I just automated it. Kind of. It's still a work in progress, but it does what it's supposed to do…
☆40 · Updated last year
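The prep work described above can be sketched roughly like this. This is a minimal illustration of the idea, not EzEb's actual code: the frame-naming pattern, paths, and helper names are assumptions, and `ffmpeg` is assumed to be on the PATH.

```python
# Sketch of EbSynth prep steps: extract a video into an image sequence,
# and compute the frame-matching name a style image must be renamed to.
# Illustrative only -- pattern and paths are assumptions, not EzEb's code.
import os
import subprocess

def extract_frames(video_path, out_dir, pattern="frame_%05d.png"):
    """Turn a video into an image sequence using ffmpeg."""
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-i", video_path, os.path.join(out_dir, pattern)],
        check=True,
    )

def style_name_for_frame(frame_index, pattern="frame_%05d.png"):
    """EbSynth expects each style image to share its source frame's filename."""
    return pattern % frame_index
```

With a pattern like `frame_%05d.png`, the style image keyed to frame 1 would be renamed to `frame_00001.png`; resizing each style image to the source frame's dimensions would be the remaining manual step.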
Alternatives and similar repositories for EzEb
Users interested in EzEb are comparing it to the repositories listed below.
- Converts Hugging Face Diffusers Stable Diffusion models to Stable Diffusion ckpt files usable in most open-source tools ☆53 · Updated 2 years ago
- "Interactive Video Stylization Using Few-Shot Patch-Based Training" by O. Texler et al. in PyTorch Lightning ☆69 · Updated 3 years ago
- Text to Video ☆26 · Updated 2 years ago
- A seamless / blended tiling module for PyTorch, capable of blending any 4D NCHW tensors together ☆28 · Updated 8 months ago
- AnimationKit: AI Upscaling & Interpolation using Real-ESRGAN+RIFE ☆119 · Updated 3 years ago
- AI video temporal coherence lab ☆56 · Updated 2 years ago
- An implementation of EbSynth for video stylization, and the original EbSynth for image stylization, as an importable Python library! ☆120 · Updated last year
- [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting ☆51 · Updated 3 years ago
- Use Runway's Stable Diffusion inpainting model to create an infinite loop video. Inspired by https://twitter.com/matthen2/status/15646087… ☆49 · Updated 2 years ago
- Motion Module fine-tuner for AnimateDiff ☆78 · Updated last year
- Personal GPEN scripts within the GPEN-Windows stand-alone package ☆20 · Updated 3 years ago
- Upscaling Karlo text-to-image generation using Stable Diffusion v2 ☆63 · Updated 2 years ago
- Generate morph sequences with Stable Diffusion. Interpolate between two or more prompts and create an image at each step. ☆118 · Updated last year
- ☆160 · Updated 2 years ago
- Translate a video to some AI-generated stuff; extension script for AUTOMATIC1111/stable-diffusion-webui ☆53 · Updated 2 years ago
- A fork implementation of the SIGGRAPH 2020 paper "Interactive Video Stylization Using Few-Shot Patch-Based Training" ☆106 · Updated 2 years ago
- Let us control diffusion models! ☆36 · Updated 2 years ago
- OpenAI guided diffusion tweaks ☆52 · Updated 3 years ago
- A wrapper of rem_bg for AUTOMATIC1111's Stable Diffusion GUI. It can do clothing segmentation, background removal, and background mask… ☆79 · Updated last year
- stylegan3_blending ☆39 · Updated 3 years ago
- A Gradio WebUI working with the Diffusers format of Stable Diffusion ☆81 · Updated 2 years ago
- Resources for creating infinite zoom videos using Stable Diffusion; supports multiple prompts and is easy to use ☆90 · Updated 2 years ago
- Embedding editor extension for web UI ☆74 · Updated last year
- Windows-compatible code for the paper "Jukebox: A Generative Model for Music" ☆13 · Updated 2 years ago
- Unofficial implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models ☆24 · Updated 3 months ago
- Fork of ControlNet for 2 input channels ☆59 · Updated 2 years ago
- A notebook for text-based guided image generation using StyleGAN-XL and CLIP ☆59 · Updated 2 years ago
- User-friendly infinite zoom video generation tool in Colab (based on Stable Diffusion) ☆72 · Updated 2 years ago
- FILM: Frame Interpolation for Large Motion, in arXiv 2022 ☆29 · Updated 3 years ago
- Deep learning toolkit for image, video, and audio synthesis ☆108 · Updated 2 years ago