timothybrooks / instruct-pix2pix
☆6,812 · Updated last year
Alternatives and similar repositories for instruct-pix2pix
Users interested in instruct-pix2pix are comparing it to the libraries listed below.
- Using Low-rank adaptation to quickly fine-tune diffusion models. ☆7,453 · Updated last year
- ☆3,394 · Updated last year
- Official repo for consistency models. ☆6,422 · Updated last year
- Image to prompt with BLIP and CLIP ☆2,911 · Updated last year
- Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023) ☆1,970 · Updated last year
- Edit anything in images powered by segment-anything, ControlNet, StableDiffusion, etc. (ACM MM) ☆3,414 · Updated 8 months ago
- Text-to-3D & Image-to-3D & Mesh Exportation with NeRF + Diffusion. ☆8,741 · Updated last year
- T2I-Adapter ☆3,751 · Updated last year
- ☆3,036 · Updated 2 years ago
- Karras et al. (2022) diffusion models for PyTorch ☆2,512 · Updated 9 months ago
- Let us control diffusion models! ☆33,187 · Updated last year
- Implementation of GigaGAN, new SOTA GAN out of Adobe. Culmination of nearly a decade of research into GANs ☆1,922 · Updated 9 months ago
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… ☆6,843 · Updated last year
- Nightly release of ControlNet 1.1 ☆5,089 · Updated last year
- Official implementation of "Composer: Creative and Controllable Image Synthesis with Composable Conditions" ☆1,559 · Updated last year
- ☆7,839 · Updated last year
- Open-Set Grounded Text-to-Image Generation ☆2,165 · Updated last year
- Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts ☆4,620 · Updated last year
- [ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation ☆4,360 · Updated last year
- pix2pix3D: Generating 3D Objects from 2D User Inputs ☆1,712 · Updated 2 years ago
- Implementation of Make-A-Video, new SOTA text-to-video generator from Meta AI, in PyTorch ☆1,983 · Updated last year
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models ☆4,973 · Updated last year
- Kandinsky 2 — multilingual text2image latent diffusion model ☆2,808 · Updated last year
- The image prompt adapter is designed to enable a pretrained text-to-image diffusion model to generate images with an image prompt. ☆6,265 · Updated last year
- Unofficial implementation of "Prompt-to-Prompt Image Editing with Cross Attention Control" with Stable Diffusion ☆1,338 · Updated 3 years ago
- [ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators ☆4,216 · Updated 2 years ago
- ☆1,476 · Updated last year
- Inpaint anything using Segment Anything and inpainting models. ☆7,449 · Updated last year
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … ☆17,021 · Updated last year
- Zero-shot Image-to-Image Translation [SIGGRAPH 2023] ☆1,132 · Updated last year