francislabountyjr / Dreambooth-Anything
A repository that consolidates Stable Diffusion fine-tuning scripts. Train inpainting, depth, v1+, v2+, image-variation, image-colorization, and other models. Train with optimizations such as 8-bit Adam and xFormers for faster, more memory-efficient training.
☆69 · Updated 2 years ago
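The description mentions training with 8-bit Adam and xFormers. A minimal sketch of how a fine-tuning script typically wires those in, with graceful fallbacks when the optional libraries are missing — this is an assumed pattern modeled on common diffusers training scripts, not this repository's exact code:

```python
import torch


def build_optimizer(params, lr=5e-6, use_8bit_adam=True):
    """Return bitsandbytes 8-bit AdamW when requested and available,
    otherwise fall back to standard torch AdamW."""
    if use_8bit_adam:
        try:
            import bitsandbytes as bnb
            return bnb.optim.AdamW8bit(params, lr=lr)
        except ImportError:
            print("bitsandbytes not installed; falling back to torch.optim.AdamW")
    return torch.optim.AdamW(params, lr=lr)


def maybe_enable_xformers(unet):
    """Enable memory-efficient attention on a diffusers model
    if xformers is installed; report whether it was enabled."""
    try:
        import xformers  # noqa: F401
        unet.enable_xformers_memory_efficient_attention()
        return True
    except ImportError:
        return False


# Stand-in module for the UNet, just to exercise the optimizer setup.
model = torch.nn.Linear(4, 4)
opt = build_optimizer(model.parameters(), use_8bit_adam=True)
```

In a real run, `maybe_enable_xformers` would be called on the pipeline's UNet before the training loop; the try/except fallbacks let the same script run on machines without CUDA-only dependencies.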
Alternatives and similar repositories for Dreambooth-Anything
Users interested in Dreambooth-Anything are comparing it to the repositories listed below.
- Official implementation of "Inserting Anybody in Diffusion Models via Celeb Basis"☆257 · Updated 2 years ago
- Implementation of the IPAdapter models for HF Diffusers☆180 · Updated 2 years ago
- Fork of ControlNet for 2 input channels☆59 · Updated 2 years ago
- Implementation of HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models☆175 · Updated 2 years ago
- A diffusers-based implementation of HyperDreamBooth☆137 · Updated 2 years ago
- ☆118 · Updated 3 years ago
- Official implementation of the NeurIPS 2023 paper "Photoswap: Personalized Subject Swapping in Images"☆349 · Updated last year
- Mixture of Diffusers for scene composition and high-resolution image generation☆447 · Updated 2 years ago
- Fork of AnimateDiff that attempts to add init images. For the original repo, see https://github.com/guoyww/a…☆153 · Updated 2 years ago
- ☆184 · Updated 2 years ago
- Transfer the T2I-Adapter to any base model in diffusers🔥☆136 · Updated 2 years ago
- AnimateDiff I2V version.☆185 · Updated last year
- ☆321 · Updated last year
- ☆90 · Updated last year
- An unofficial PyTorch implementation of StyleDrop: Text-to-Image Generation in Any Style.☆226 · Updated 2 years ago
- Implementation of Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models☆324 · Updated 2 years ago
- Adaptation of Hugging Face's DreamBooth training script to support depth2img☆101 · Updated 3 years ago
- Stable Diffusion-based image manipulation method with a sketch and reference image☆183 · Updated 2 years ago
- Shows how to train a ControlNet with your own control hint in the diffusers framework☆60 · Updated 2 years ago
- Official PyTorch code for the paper "ViCo: Detail-Preserving Visual Condition for Personalized Text-to-Image Generation"☆244 · Updated last year
- Apply ControlNet to video clips☆82 · Updated last year
- ☆135 · Updated 3 years ago
- An implementation of Ebsynth for video stylization, and the original Ebsynth for image stylization, as an importable Python library!☆134 · Updated last year
- ☆55 · Updated last year
- A simple extension of ControlNet for color conditioning☆90 · Updated last year
- ControlAnimate Library☆48 · Updated 2 years ago
- Resources for creating infinite-zoom videos using Stable Diffusion; supports multiple prompts and is easy to use☆88 · Updated 2 years ago
- [arXiv 2023] img2img version of Stable Diffusion: line-art automatic coloring, anime character remix, and style transfer☆150 · Updated 6 months ago
- ☆86 · Updated 2 years ago
- Proof of concept for landmark control in diffusion models!☆90 · Updated 2 years ago