nerdyrodent / CLIP-Guided-Diffusion
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
☆387 · Updated 2 years ago
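For context, CLIP-guided diffusion steers a pretrained diffusion sampler using the gradient of CLIP's image/text similarity with respect to the current noisy sample. The sketch below illustrates that core gradient step only; it is not the repository's actual code, and the prompt, the `clip_guidance_grad` helper name, and the guidance scale are illustrative assumptions. In practice a gradient like this is returned from the conditioning hook of a guided-diffusion sampler.

```python
# Minimal sketch of the CLIP-guidance gradient (illustrative, not this repo's code).
import torch
import torch.nn.functional as F
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()

# CLIP's input normalisation constants (from the official OpenAI repo).
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device)

def clip_guidance_grad(x, text_features, guidance_scale=1000.0):
    """Gradient of CLIP image/text similarity w.r.t. x, a diffusion sample in [-1, 1]."""
    x = x.detach().requires_grad_(True)
    # Map from the diffusion range [-1, 1] to [0, 1], resize to CLIP's 224px input,
    # then apply CLIP's normalisation.
    img = (x + 1) / 2
    img = F.interpolate(img, size=224, mode="bilinear", align_corners=False)
    img = (img - CLIP_MEAN[None, :, None, None]) / CLIP_STD[None, :, None, None]
    image_features = clip_model.encode_image(img)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    # Cosine similarity between image and prompt embeddings; higher means a better match.
    sim = (image_features * text_features).sum(dim=-1).mean()
    grad = torch.autograd.grad(sim, x)[0]
    return guidance_scale * grad

# Encode the prompt once and reuse its embedding at every sampling step.
with torch.no_grad():
    tokens = clip.tokenize(["a watercolor painting of a fox"]).to(device)
    text_features = clip_model.encode_text(tokens)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# Demo on a random "noisy sample"; in a real sampler this gradient nudges each
# denoising step toward images CLIP scores as matching the prompt.
x_t = torch.randn(1, 3, 256, 256, device=device)
print(clip_guidance_grad(x_t, text_features).shape)  # torch.Size([1, 3, 256, 256])
```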
Alternatives and similar repositories for CLIP-Guided-Diffusion
Users interested in CLIP-Guided-Diffusion are comparing it to the libraries listed below.
- A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆462 · Updated 3 years ago
- v-objective diffusion inference code for PyTorch. ☆718 · Updated 2 years ago
- ☆351 · Updated 3 years ago
- Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3! ☆246 · Updated 5 months ago
- 1.4B latent diffusion model fine-tuning ☆265 · Updated 3 years ago
- VQGAN+CLIP Colab Notebook with user-friendly interface. ☆231 · Updated 2 years ago
- ☆150 · Updated last year
- StyleGAN2-ADA - Official PyTorch implementation ☆249 · Updated last year
- Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt ☆137 · Updated last year
- Majesty Diffusion by @Dango233 (@Dango233max) and @apolinario (@multimodalart) ☆276 · Updated 2 years ago
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆211 · Updated 3 years ago
- ☆275 · Updated 2 years ago
- Neural style transfer in PyTorch. ☆486 · Updated 2 years ago
- Local image generation using VQGAN-CLIP or CLIP guided diffusion ☆102 · Updated 2 years ago
- CLIP + FFT/DWT/RGB = text to image/video ☆787 · Updated 3 months ago
- A collection of pretrained models for StyleGAN3 ☆292 · Updated 2 years ago
- Dataset of prompts, synthetic AI-generated images, and aesthetic ratings. ☆412 · Updated 2 years ago
- ☆198 · Updated 3 years ago
- Combination of OpenAI GLIDE and Latent Diffusion ☆135 · Updated 3 years ago
- ☆135 · Updated last year
- ☆1,033 · Updated 2 years ago
- Optimization-based style transfer ☆254 · Updated last year
- A collection of checkpoints for DALLE-pytorch models, from which you can continue training or start generating images. ☆146 · Updated 2 years ago
- Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data. ☆180 · Updated 2 years ago
- Styled text-to-drawing synthesis method. Featured at IJCAI 2022 and the 2021 NeurIPS Workshop on Machine Learning for Creativity and Desi… ☆280 · Updated 2 years ago
- Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 3 years ago
- PyTorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors ☆336 · Updated 2 years ago
- StyleGAN2 for practice ☆171 · Updated last year
- Implementation of NÜWA, state-of-the-art attention network for text-to-video synthesis, in PyTorch ☆548 · Updated 2 years ago
- CLIP + VQGAN / PixelDraw ☆284 · Updated 3 years ago