justinjohn0306 / VQGAN-CLIP
VQGAN+CLIP Colab Notebook with user-friendly interface.
☆231 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for VQGAN-CLIP
- Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab. ☆382 · Updated 2 years ago
- Local image generation using VQGAN-CLIP or CLIP guided diffusion ☆102 · Updated 2 years ago
- ☆133 · Updated last year
- A collection of pretrained models for StyleGAN3 ☆284 · Updated 2 years ago
- Start here ☆110 · Updated 9 months ago
- This is the repo for my experiments with StyleGAN2. There are many like it, but this one is mine. Contains code for the paper Audio-react… ☆181 · Updated 3 years ago
- Python text to image ☆117 · Updated 5 months ago
- AnimationKit: AI Upscaling & Interpolation using Real-ESRGAN+RIFE ☆116 · Updated 2 years ago
- Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 2 years ago
- CLIP + FFT/DWT/RGB = text to image/video ☆777 · Updated 3 months ago
- Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3! ☆238 · Updated 3 months ago
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion ☆81 · Updated 2 years ago
- A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆460 · Updated 2 years ago
- neural image generation ☆402 · Updated 2 years ago
- ☆350 · Updated 2 years ago
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for a text-based guided image generation. ☆207 · Updated 2 years ago
- combination of OpenAI GLIDE and Latent Diffusion ☆136 · Updated 2 years ago
- Create audio reactive videos from stylegan2-ada-pytorch pre-trained networks. ☆50 · Updated 2 years ago
- StyleGAN2-ADA - Official PyTorch implementation ☆245 · Updated 6 months ago
- ☆83 · Updated 2 years ago
- 1.4B latent diffusion model fine tuning ☆261 · Updated 2 years ago
- StyleGAN2 for practice ☆172 · Updated last year
- Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt ☆136 · Updated 10 months ago
- ☆283 · Updated 2 years ago
- v objective diffusion inference code for PyTorch. ☆715 · Updated last year
- CLIP + VQGAN / PixelDraw ☆282 · Updated 2 years ago
- Multiple notebooks which allow the use of various machine learning methods to generate or modify multimedia content ☆178 · Updated last year
- Dataset of prompts, synthetic AI generated images, and aesthetic ratings. ☆399 · Updated 2 years ago
- Official PyTorch implementation of StyleGAN3 ☆96 · Updated 2 years ago
- Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab. ☆22 · Updated 2 years ago