timothybrooks / instruct-pix2pix
☆6,778 · Updated last year
Alternatives and similar repositories for instruct-pix2pix
Users interested in instruct-pix2pix often compare it to the repositories listed below.
- Using Low-rank adaptation to quickly fine-tune diffusion models. ☆7,427 · Updated last year
- Let us control diffusion models! ☆32,956 · Updated last year
- Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023) ☆1,964 · Updated last year
- Outpainting with Stable Diffusion on an infinite canvas ☆3,884 · Updated 2 years ago
- Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion ☆7,752 · Updated 2 years ago
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,412 · Updated 6 months ago
- T2I-Adapter ☆3,737 · Updated last year
- Image to prompt with BLIP and CLIP ☆2,886 · Updated last year
- Official repo for consistency models. ☆6,398 · Updated last year
- Open-Set Grounded Text-to-Image Generation ☆2,152 · Updated last year
- WebUI extension for ControlNet ☆17,777 · Updated last year
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" ☆1,561 · Updated last year
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,794 · Updated last year
- Inpaint anything using Segment Anything and inpainting models. ☆7,375 · Updated last year
- Nightly release of ControlNet 1.1 ☆5,077 · Updated last year
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,858 · Updated 9 months ago
- Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs ☆1,916 · Updated 7 months ago
- Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion ☆8,705 · Updated last year
- [ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation ☆4,354 · Updated last year
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts ☆4,621 · Updated 11 months ago
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt ☆6,193 · Updated last year
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,210 · Updated 2 years ago
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,446 · Updated last year
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,700 · Updated last year
- High-Resolution Image Synthesis with Latent Diffusion Models ☆13,229 · Updated last year
- Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion ☆1,340 · Updated 2 years ago