nerdyrodent / CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
☆384 · Updated 2 years ago
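For context, "CLIP-guided" here means that, at each reverse-diffusion step, the gradient of the CLIP text-image similarity with respect to the current noisy sample is used to nudge the sample toward the prompt. The snippet below is only a minimal sketch of that guidance term, not this repository's code: it assumes PyTorch plus the OpenAI `clip` package, treats the diffusion model and sampler as external, and the prompt and `guidance_scale` are made-up placeholders. Real implementations usually score a predicted denoised image with cutouts/augmentations rather than the raw noisy sample.

```python
# Illustrative sketch of the CLIP-guidance gradient (not this repo's actual code).
# Assumes: torch, torchvision, and the OpenAI `clip` package (openai/CLIP).
import torch
import torch.nn.functional as F
import clip
from torchvision.transforms import Normalize

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_size = clip_model.visual.input_resolution  # 224 for ViT-B/32
clip_normalize = Normalize(mean=(0.48145466, 0.4578275, 0.40821073),
                           std=(0.26862954, 0.26130258, 0.27577711))

prompt = "a watercolor painting of a lighthouse"  # hypothetical prompt
with torch.no_grad():
    text_embed = F.normalize(
        clip_model.encode_text(clip.tokenize([prompt]).to(device)), dim=-1)

def clip_cond_fn(x, t, guidance_scale=1000.0):
    """Gradient of CLIP similarity w.r.t. the current noisy sample x in [-1, 1].

    A full implementation would first predict the denoised image from (x, t)
    and apply differentiable augmentations before scoring; this simplified
    version scores x directly to keep the example short.
    """
    with torch.enable_grad():
        x = x.detach().requires_grad_(True)
        img = (x + 1) / 2                                   # [-1, 1] -> [0, 1]
        img = F.interpolate(img, size=(clip_size, clip_size),
                            mode="bilinear", align_corners=False)
        img = clip_normalize(img).to(clip_model.dtype)      # CLIP preprocessing
        image_embed = F.normalize(clip_model.encode_image(img), dim=-1)
        similarity = (image_embed * text_embed).sum()       # cosine similarity (unit vectors)
        return torch.autograd.grad(similarity, x)[0] * guidance_scale
```

A function like this is typically passed to a guided-diffusion sampler's conditioning hook (for example, the `cond_fn` argument of `p_sample_loop` in openai/guided-diffusion), so that each denoising step is shifted along the CLIP gradient toward the prompt.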
Alternatives and similar repositories for CLIP-Guided-Diffusion:
Users interested in CLIP-Guided-Diffusion are comparing it to the libraries listed below.
- A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆461 · Updated 2 years ago
- ☆350 · Updated 2 years ago
- v-objective diffusion inference code for PyTorch. ☆716 · Updated 2 years ago
- VQGAN+CLIP Colab notebook with a user-friendly interface. ☆230 · Updated 2 years ago
- Pretrained DALL-E 2 from LAION. ☆501 · Updated last year
- 1.4B latent diffusion model fine-tuning. ☆263 · Updated 2 years ago
- Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3! ☆240 · Updated last month
- PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors. ☆334 · Updated 2 years ago
- Majesty Diffusion by @Dango233 (@Dango233max) and @apolinario (@multimodalart). ☆276 · Updated 2 years ago
- Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt. ☆137 · Updated last year
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆210 · Updated 2 years ago
- Dataset of prompts, synthetic AI-generated images, and aesthetic ratings. ☆404 · Updated 2 years ago
- ☆199 · Updated 3 years ago
- Styled text-to-drawing synthesis method. Featured at IJCAI 2022 and the 2021 NeurIPS Workshop on Machine Learning for Creativity and Desi… ☆280 · Updated 2 years ago
- A collection of pretrained models for StyleGAN3. ☆287 · Updated 2 years ago
- StyleGAN2-ADA - Official PyTorch implementation. ☆244 · Updated 8 months ago
- ☆151 · Updated last year
- Implementation of NÜWA, a state-of-the-art attention network for text-to-video synthesis, in PyTorch. ☆546 · Updated 2 years ago
- ☆275 · Updated 2 years ago
- Combination of OpenAI GLIDE and Latent Diffusion. ☆136 · Updated 2 years ago
- ☆134 · Updated last year
- Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data. ☆182 · Updated 2 years ago
- Local image generation using VQGAN-CLIP or CLIP guided diffusion. ☆102 · Updated 2 years ago
- Stable Diffusion training. ☆291 · Updated 2 years ago
- [SIGGRAPH'22] StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets. ☆971 · Updated 6 months ago
- ☆238 · Updated 2 years ago
- ☆1,160 · Updated 2 years ago
- [ECCV 2022] Compositional Generation using Diffusion Models. ☆460 · Updated 4 months ago
- Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 2 years ago