nicolai256 / Few-Shot-Patch-Based-Training
A fork of the implementation of the SIGGRAPH 2020 paper "Interactive Video Stylization Using Few-Shot Patch-Based Training".
☆106 · Updated 2 years ago
Alternatives and similar repositories for Few-Shot-Patch-Based-Training
Users interested in Few-Shot-Patch-Based-Training are comparing it to the repositories listed below.
- ☆160 · Updated 2 years ago
- Temporal coherence tools. Automatic1111 extension. ☆148 · Updated 2 years ago
- Automatic1111 Stable Diffusion WebUI extension; increases consistency between images by generating in the same latent space. ☆80 · Updated 2 years ago
- Converts Hugging Face Diffusers Stable Diffusion models to Stable Diffusion .ckpt files usable in most open-source tools. ☆53 · Updated 2 years ago
- AI video temporal coherence lab. ☆56 · Updated 2 years ago
- Applies mirroring and flips to the latent images to produce anything from subtle balanced compositions to perfect reflections. ☆113 · Updated last year
- Extension for AUTOMATIC1111 which can generate infinite-loop videos in minutes. ☆49 · Updated 2 years ago
- Implementation of DreamBooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion (tweaks focused on training faces). ☆141 · Updated 2 years ago
- An extension for the stable-diffusion-webui repo that allows managing custom depth inputs to Stable Diffusion depth2img models. ☆72 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Backend for my Stable Diffusion project(s). ☆58 · Updated 2 years ago
- ☆61 · Updated last year
- An unofficial implementation of Custom Diffusion for Automatic1111's WebUI. ☆69 · Updated 2 years ago
- Stable Diffusion web UI. ☆45 · Updated last year
- Adaptation of the merging method described in the paper "Git Re-Basin: Merging Models modulo Permutation Symmetries" (https://arxiv.org/a…). ☆146 · Updated last year
- Simple local all-in-one install for IDEA2.ART. ☆26 · Updated 2 years ago
- Local image masking tool for stable-diffusion-webui. ☆106 · Updated 2 years ago
- Adaptation of Hugging Face's DreamBooth training script to support depth2img. ☆101 · Updated 2 years ago
- Generate morph sequences with Stable Diffusion: interpolate between two or more prompts and create an image at each step. ☆117 · Updated 2 years ago
- In Stable Diffusion, generate a sequence of images shifting attention in the prompt. ☆166 · Updated last year
- Stable Diffusion web UI. ☆86 · Updated 2 years ago
- Modify concepts from diffusion models using a DSL. ☆118 · Updated 2 years ago
- Craft your visions. ☆140 · Updated 2 years ago
- Automatic1111 Stable Diffusion WebUI extension that runs img2img against the frames of an animation. ☆46 · Updated 2 years ago
- Resources for creating infinite-zoom videos using Stable Diffusion; you can use multiple prompts and it is easy to use. ☆88 · Updated 2 years ago
- A wrapper of rembg for Auto1111's Stable Diffusion GUI. It can do clothing segmentation, background removal, and background mask… ☆79 · Updated 2 years ago
- Quickly sends rendered SD Auto1111 images to this panorama (HDRI, equirectangular) viewer. ☆178 · Updated last year
- 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch. ☆25 · Updated 2 years ago
- ☆88 · Updated last year
- ☆20 · Updated 3 years ago