jeremyssocial / EzEbLinks
EbSynth is hard to use... Lots of turning videos into image sequences, resizing style images to fit the original frames, renaming the style images to match the original frame names, and lots more. I didn't want to do that every single time, so I just automated it. Kind of. It's still a work in progress, but it does what it's supposed to do…
☆40, updated last year
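One of the chores the description mentions, renaming each style image so it carries the exact filename of the original frame it was painted over, can be sketched as below. This is an illustrative sketch only, not the repo's actual code; the helper names (`frame_index`, `plan_renames`) and the assumption that filenames carry a frame number as their last run of digits are hypothetical:

```python
import re

def frame_index(name):
    # Pull the last run of digits from a filename like "style_0042.png".
    # Hypothetical convention: the frame number is the final digit group.
    digits = re.findall(r"\d+", name)
    if not digits:
        raise ValueError(f"no frame number found in {name!r}")
    return int(digits[-1])

def plan_renames(style_images, original_frames):
    """Map each style image to the original frame filename sharing its index.

    Returns {style_name: target_frame_name}; raises KeyError if a style
    image's frame number has no matching original frame.
    """
    by_index = {frame_index(f): f for f in original_frames}
    return {s: by_index[frame_index(s)] for s in style_images}
```

For example, `plan_renames(["style_2.png"], ["frame_0001.png", "frame_0002.png"])` pairs the style image with `frame_0002.png`; an actual tool would then copy or rename the files accordingly.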
Alternatives and similar repositories for EzEb
Users that are interested in EzEb are comparing it to the libraries listed below
- An implementation of Ebsynth for video stylization, and the original ebsynth for image stylization, as an importable Python library! (☆117, updated 10 months ago)
- stylegan3_blending (☆39, updated 3 years ago)
- Let us control diffusion models! (☆36, updated 2 years ago)
- "Interactive Video Stylization Using Few-Shot Patch-Based Training" by O. Texler et al. in PyTorch Lightning (☆69, updated 3 years ago)
- Converts Hugging Face Diffusers Stable Diffusion models to Stable Diffusion ckpt files usable in most open-source tools (☆53, updated 2 years ago)
- AI video temporal coherence Lab (☆55, updated 2 years ago)
- Motion Module fine tuner for AnimateDiff (☆78, updated last year)
- Translate a video to some AI generated stuff, extension script for AUTOMATIC1111/stable-diffusion-webui (☆52, updated 2 years ago)
- Fork of ControlNet for 2 input channels (☆59, updated last year)
- Unofficial implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models (☆24, updated 2 weeks ago)
- A fork implementation of the SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training (☆106, updated 2 years ago)
- Qt-based Linux/Windows GUI for Stable Diffusion (☆34, updated 2 years ago)
- Personal GPEN scripts within the GPEN-Windows stand-alone package (☆20, updated 2 years ago)
- (no description) (☆61, updated last year)
- Video restoration processing pipeline (☆30, updated last year)
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch (☆52, updated 2 years ago)
- A seamless / blended tiling module for PyTorch, capable of blending any 4D NCHW tensors together (☆27, updated 5 months ago)
- Embedding editor extension for web UI (☆73, updated last year)
- Automatic1111 Stable Diffusion WebUI extension, generates img2img against frames in an animation (☆45, updated 2 years ago)
- An extension to allow managing custom depth inputs to Stable Diffusion depth2img models for the stable-diffusion-webui repo (☆71, updated 2 years ago)
- Stylizing Video by Example (Jamriska et al., 2019) (☆47, updated last year)
- Generate morph sequences with Stable Diffusion. Interpolate between two or more prompts and create an image at each step. (☆116, updated last year)
- Home of the Chunkmogrify project (☆127, updated 3 years ago)
- Official code for CVPR 2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation (☆23, updated 2 years ago)
- Resources for creating infinite zoom videos using Stable Diffusion; supports multiple prompts and is easy to use (☆89, updated last year)
- (no description) (☆13, updated 3 years ago)
- Extension for AUTOMATIC1111 which can generate infinite loop videos in minutes (☆48, updated 2 years ago)
- Frame interpolation for CLIP-guided videos (☆15, updated 2 years ago)
- Upscaling Karlo text-to-image generation using Stable Diffusion v2 (☆60, updated 2 years ago)
- A Gradio WebUI working with the Diffusers format of Stable Diffusion (☆80, updated 2 years ago)