rkhamilton / vqgan-clip-generator
Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, documentation, and smooth video creation.
☆113 · Updated 3 years ago
Alternatives and similar repositories for vqgan-clip-generator:
Users interested in vqgan-clip-generator are comparing it to the repositories listed below.
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion ☆83 · Updated 3 years ago
- Local image generation using VQGAN-CLIP or CLIP guided diffusion ☆102 · Updated 2 years ago
- Deep learning toolkit for image, video, and audio synthesis ☆108 · Updated 2 years ago
- AnimationKit: AI Upscaling & Interpolation using Real-ESRGAN+RIFE ☆118 · Updated 3 years ago
- Start here ☆110 · Updated last year
- High-Resolution Image Synthesis with Latent Diffusion Models ☆61 · Updated 2 years ago
- A notebook for text-based guided image generation using StyleGANXL and CLIP. ☆58 · Updated last year
- Official PyTorch implementation of StyleGAN3 ☆25 · Updated 3 years ago
- Refactor of the Deforum Stable Diffusion notebook (featuring video_init) https://colab.research.google.com/github/deforum/stable-diffusio… ☆105 · Updated 2 years ago
- StarGAN2 for practice ☆95 · Updated last month
- A collection of pretrained models for StyleGAN3 ☆289 · Updated 2 years ago
- Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab. ☆385 · Updated 2 years ago
- Feed-forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt ☆137 · Updated last year
- Combination of OpenAI GLIDE and Latent Diffusion ☆136 · Updated 2 years ago
- OpenAI guided diffusion tweaks ☆52 · Updated 2 years ago
- Generates a random prompt for VQGAN+CLIP ☆34 · Updated 2 years ago
- Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data. ☆182 · Updated 2 years ago
- Modifications of the official PyTorch implementation of StyleGAN3. Let's easily generate images and videos with StyleGAN2/2-ADA/3! ☆242 · Updated 3 months ago
- Use Runway's Stable Diffusion inpainting model to create an infinite loop video. Inspired by https://twitter.com/matthen2/status/15646087… ☆49 · Updated 2 years ago
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆210 · Updated 2 years ago
- Google Colab notebook for NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation. ☆18 · Updated 2 years ago
- "Interactive Video Stylization Using Few-Shot Patch-Based Training" by O. Texler et al., in PyTorch Lightning ☆69 · Updated 3 years ago
- Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab. ☆23 · Updated 2 years ago
- Official PyTorch implementation of StyleGAN3 ☆96 · Updated 2 years ago
- FILM: Frame Interpolation for Large Motion, in arXiv 2022. ☆29 · Updated 3 years ago
- Create audio-reactive videos from stylegan2-ada-pytorch pre-trained networks. ☆51 · Updated 2 years ago