nerdyrodent / CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
☆386 · Updated 2 years ago
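For context on what ties the repositories below together: CLIP guidance steers a generative model by differentiating a CLIP image–text similarity score with respect to the image being generated. The sketch below illustrates only that core gradient step, assuming PyTorch and the OpenAI `clip` package (`pip install git+https://github.com/openai/CLIP.git`) are installed; it is not code from this repository, and a real sampler (as in the repos listed below) would fold this gradient into each diffusion denoising step.

```python
import torch
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # returns (model, preprocess)
dtype = next(model.parameters()).dtype  # fp16 on GPU, fp32 on CPU

# Encode the text prompt once.
tokens = clip.tokenize(["a watercolor painting of a fox"]).to(device)
text_features = model.encode_text(tokens)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Stand-in for the diffusion model's current denoised image estimate
# (ViT-B/32 expects 224x224 inputs).
image = torch.rand(1, 3, 224, 224, device=device, dtype=dtype, requires_grad=True)

image_features = model.encode_image(image)
image_features = image_features / image_features.norm(dim=-1, keepdim=True)

# Cosine similarity between image and prompt; its gradient w.r.t. the pixels
# is the guidance signal a sampler would add at each denoising step.
similarity = (image_features * text_features).sum()
grad = torch.autograd.grad(similarity, image)[0]
print(grad.shape)  # torch.Size([1, 3, 224, 224])
```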
Alternatives and similar repositories for CLIP-Guided-Diffusion:
Users interested in CLIP-Guided-Diffusion are comparing it to the libraries listed below.
- A CLI tool / Python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆462 · Updated 3 years ago
- V-objective diffusion inference code for PyTorch. ☆716 · Updated 2 years ago
- ☆351 · Updated 2 years ago
- A collection of Jupyter notebooks for playing with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆210 · Updated 3 years ago
- ☆275 · Updated 2 years ago
- ☆151 · Updated last year
- A collection of checkpoints for DALLE-pytorch models, from which you can continue training or start generating images. ☆147 · Updated 2 years ago
- Pretrained DALL·E 2 from LAION. ☆501 · Updated 2 years ago
- Styled text-to-drawing synthesis method. Featured at IJCAI 2022 and the 2021 NeurIPS Workshop on Machine Learning for Creativity and Desi… ☆279 · Updated 2 years ago
- A collection of pretrained models for StyleGAN3. ☆290 · Updated 2 years ago
- Local image generation using VQGAN-CLIP or CLIP guided diffusion. ☆102 · Updated 2 years ago
- Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt. ☆137 · Updated last year
- 1.4B latent diffusion model fine-tuning. ☆266 · Updated 2 years ago
- VQGAN+CLIP Colab notebook with a user-friendly interface. ☆230 · Updated 2 years ago
- Dataset of prompts, synthetic AI-generated images, and aesthetic ratings. ☆412 · Updated 2 years ago
- ☆198 · Updated 3 years ago
- Majesty Diffusion by @Dango233 (@Dango233max) and @apolinario (@multimodalart). ☆276 · Updated 2 years ago
- ☆135 · Updated last year
- Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data. ☆182 · Updated 2 years ago
- Modifications of the official PyTorch implementation of StyleGAN3. Easily generate images and videos with StyleGAN2/2-ADA/3. ☆244 · Updated 4 months ago
- StyleGAN2-ADA: official PyTorch implementation. ☆245 · Updated 11 months ago
- CLIP + FFT/DWT/RGB = text to image/video. ☆786 · Updated 2 months ago
- Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab. ☆2,648 · Updated 2 years ago
- Neural style transfer in PyTorch. ☆482 · Updated last year
- ☆239 · Updated 3 years ago
- PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors. ☆336 · Updated 2 years ago
- Benchmarking Generative Models with Artworks. ☆227 · Updated 2 years ago
- Combination of OpenAI GLIDE and Latent Diffusion. ☆136 · Updated 3 years ago
- Using CLIP and StyleGAN to generate faces from prompts. ☆131 · Updated 3 years ago
- Implements VQGAN+CLIP for image and video generation and style transfer, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 3 years ago