jeremyssocial / EzEb
EbSynth is hard to use... Lots of turning videos into image sequences, resizing style images to fit the original frames, renaming the style images to match the original frame names, and lots more. I didn't want to do that every single time, so I just automated it. Kind of. And it's still a work in progress. But it does what it's supposed to do…
☆40 · Updated last year
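For context, EbSynth expects each style keyframe to match the resolution and file name of the corresponding original frame. Below is a minimal Python sketch of the kind of prep work described above; it is not the repository's actual code, and it assumes ffmpeg on PATH, Pillow installed, and hypothetical file and directory names.

```python
# Sketch of the EbSynth prep workflow (hypothetical paths and naming convention).
import subprocess
from pathlib import Path
from PIL import Image

VIDEO = Path("input.mp4")          # hypothetical source video
FRAMES_DIR = Path("video_frames")  # extracted original frames
STYLES_DIR = Path("styles")        # hand-painted style keyframes
OUT_DIR = Path("styles_prepped")   # resized/renamed styles for EbSynth

FRAMES_DIR.mkdir(exist_ok=True)
OUT_DIR.mkdir(exist_ok=True)

# 1. Turn the video into an image sequence (frame0001.png, frame0002.png, ...).
subprocess.run(
    ["ffmpeg", "-i", str(VIDEO), str(FRAMES_DIR / "frame%04d.png")],
    check=True,
)

frames = sorted(FRAMES_DIR.glob("*.png"))
width, height = Image.open(frames[0]).size

# 2. Resize every style image to the original frame size, and
# 3. rename it to match the frame it was painted over.
#    Assumed convention: styles are named by 1-based frame number, e.g. "12.png".
for style in sorted(STYLES_DIR.glob("*.png")):
    idx = int(style.stem)                     # assumed naming convention
    target = frames[idx - 1].name             # e.g. "frame0012.png"
    resized = Image.open(style).resize((width, height), Image.LANCZOS)
    resized.save(OUT_DIR / target)
```

After this, EbSynth can be pointed at the extracted frames as the video and the prepped folder as the keyframes.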
Alternatives and similar repositories for EzEb
Users interested in EzEb are comparing it to the libraries listed below
- "Interactive Video Stylization Using Few-Shot Patch-Based Training" by O. Texler et al. in PyTorch Lightning☆69Updated 3 years ago
- converts huggingface diffusers stablediffussion models to stablediffusion ckpt files usable in most opensource tools☆53Updated 2 years ago
- Upscaling Karlo text-to-image generation using Stable Diffusion v2.☆62Updated 2 years ago
- AI video temporal coherence Lab☆56Updated 2 years ago
- Text to Video☆26Updated 2 years ago
- Unofficial implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models☆24Updated 2 months ago
- a fork implementation of SIGGRAPH 2020 paper Interactive Video Stylization Using Few-Shot Patch-Based Training☆106Updated 2 years ago
- This is a Gradio WebUI working with the Diffusers format of Stable Diffusion☆81Updated 2 years ago
- ☆62Updated last year
- stylegan3_blending☆39Updated 3 years ago
- A seamless / blended tiling module for PyTorch, capable of blending any 4D NCHW tensors together☆28Updated 7 months ago
- This is a wrapper of rem_bg for auto1111's stable diffusion gui. It can do clothing segmentation, background removal, and background mask…☆80Updated last year
- ☆160Updated 2 years ago
- A latent text-to-image diffusion model☆67Updated 2 years ago
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch☆53Updated 2 years ago
- Embedding editor extension for web ui☆74Updated last year
- [CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting☆51Updated 3 years ago
- Personal GPEN scripts within the GPEN-Windows stand-alone package.☆21Updated 3 years ago
- Windows compatible code for the paper "Jukebox: A Generative Model for Music"☆13Updated 2 years ago
- Motion Module fine tuner for AnimateDiff.☆78Updated last year
- A simple program to create easy animated effects from an image, and convert them into a set amount of exported frames.☆39Updated last year
- Simple, expressive, pythonic datatypes for manipulating curves parameterized by keyframes and interpolators.☆36Updated last year
- Generate images from an initial frame and text☆37Updated 2 years ago
- Small script for AUTOMATIC1111/stable-diffusion-webui to run video through img2img.☆61Updated 2 years ago
- jupyter/colab implementation of stable-diffusion using k_lms sampler, cpu draw manual seeding, and quantize.py fix☆38Updated 2 years ago
- resources for creating Ininite zoom video using Stable Diffiusion, you can use multiple prompts and it is easy to use.☆90Updated 2 years ago
- adaptation of huggingface's dreambooth training script to support depth2img☆101Updated 2 years ago
- Extension for AUTOMATIC1111 which can generate infinite loop videos in minutes.☆49Updated 2 years ago
- openai guided diffusion tweaks☆52Updated 2 years ago
- Let us control diffusion models!☆36Updated 2 years ago