justinjohn0306 / VQGAN-CLIP
VQGAN+CLIP Colab notebook with a user-friendly interface.
☆231 · Updated 3 years ago
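For context, the idea shared by this notebook and most of the repositories below is CLIP guidance: an image representation is iteratively optimised so that its CLIP embedding matches a text prompt. The sketch below is a deliberately simplified illustration of that loss, not the notebook's actual code: it skips the VQGAN decoder and optimises raw pixels directly, and it assumes `torch` and OpenAI's `clip` package (`pip install git+https://github.com/openai/CLIP.git`) are installed.

```python
# Minimal sketch of the CLIP-guidance loop: optimise an image tensor so that
# its CLIP embedding matches a text prompt. The real VQGAN-CLIP notebooks
# optimise a VQGAN latent and decode it each step; here the decoder is
# omitted purely to keep the example self-contained.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float()  # keep everything in fp32 so gradients flow cleanly

# CLIP's standard image normalisation constants
clip_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
clip_std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

prompt = "a watercolor painting of a lighthouse at dusk"
with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

# In VQGAN-CLIP this tensor would be the VQGAN latent; here it is the image itself.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    normed = (image.clamp(0, 1) - clip_mean) / clip_std
    image_features = model.encode_image(normed)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    loss = 1 - (image_features * text_features).sum()  # maximise cosine similarity
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```

In the actual notebooks the optimised tensor is a VQGAN latent that is decoded and passed through random crops and augmentations before being scored by CLIP, which is what yields coherent images rather than adversarial noise.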
Alternatives and similar repositories for VQGAN-CLIP
Users interested in VQGAN-CLIP are comparing it to the repositories listed below.
- Local image generation using VQGAN-CLIP or CLIP guided diffusion ☆102 · Updated 2 years ago
- ☆135 · Updated last year
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆211 · Updated 3 years ago
- Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab. ☆387 · Updated 2 years ago
- This is the repo for my experiments with StyleGAN2. There are many like it, but this one is mine. Contains code for the paper Audio-react… ☆179 · Updated 3 years ago
- Start here ☆110 · Updated last year
- Majesty Diffusion by @Dango233 (@Dango233max) and @apolinario (@multimodalart) ☆276 · Updated 2 years ago
- StyleGAN2 for practice ☆171 · Updated last year
- Python text to image ☆118 · Updated last year
- ☆83 · Updated 2 years ago
- A CLI tool/Python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆462 · Updated 3 years ago
- Neural image generation ☆403 · Updated 3 years ago
- A Real-ESRGAN-equipped Colab notebook for CLIP Guided Diffusion ☆83 · Updated 3 years ago
- AnimationKit: AI Upscaling & Interpolation using Real-ESRGAN+RIFE ☆119 · Updated 3 years ago
- StyleGAN2 with adaptive discriminator augmentation (ADA) - Official TensorFlow implementation ☆92 · Updated 4 years ago
- ☆351 · Updated 3 years ago
- Official PyTorch implementation of StyleGAN3 ☆96 · Updated 2 years ago
- A collection of pretrained models for StyleGAN3 ☆293 · Updated 2 years ago
- Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt ☆137 · Updated last year
- CLIP + VQGAN / PixelDraw ☆284 · Updated 3 years ago
- Multiple notebooks which allow the use of various machine learning methods to generate or modify multimedia content ☆179 · Updated last year
- Create audio-reactive videos from stylegan2-ada-pytorch pre-trained networks. ☆51 · Updated 3 years ago
- CLIP + FFT/DWT/RGB = text to image/video ☆787 · Updated 4 months ago
- Refactor of the Deforum Stable Diffusion notebook (featuring video_init) https://colab.research.google.com/github/deforum/stable-diffusio… ☆107 · Updated 2 years ago
- Google Colab notebook for NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆18 · Updated 3 years ago
- ☆283 · Updated 3 years ago
- A CLI tool for using GLIDE to generate images from text. ☆68 · Updated 3 years ago
- This code extends the neural style transfer image processing technique to video by generating smooth transitions between several referenc… ☆168 · Updated 3 years ago
- Neural style transfer in PyTorch. ☆486 · Updated 2 years ago
- Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 3 years ago