robobeebop / VQGAN-CLIP-Video
Traditional deepdream with VQGAN+CLIP and optical flow. Ready to use in Google Colab.
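The optical-flow part of this approach typically warps the previous stylized frame into alignment with the current frame before the next VQGAN+CLIP optimization pass, which keeps the animation temporally coherent. A minimal sketch of that warping step, using numpy only (`warp_frame` is a hypothetical helper for illustration, not code from this repository):

```python
import numpy as np

def warp_frame(frame: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a frame by a dense optical-flow field.

    frame: (H, W, C) image array.
    flow:  (H, W, 2) per-pixel (dx, dy) displacements.
    Uses nearest-neighbour sampling for simplicity; real pipelines
    usually sample bilinearly (e.g. with cv2.remap).
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each output pixel, look up the source pixel the flow points to,
    # clamping coordinates to the image bounds.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

In practice the dense flow field would come from an estimator such as OpenCV's `cv2.calcOpticalFlowFarneback`, and the warped frame is used to initialize (or regularize) the next frame's latent optimization.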
☆22 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for VQGAN-CLIP-Video
- ☆133 · Updated last year
- A notebook for text-based guided image generation using StyleGAN-XL and CLIP. ☆56 · Updated last year
- Wiggle animation keyframe creator ☆40 · Updated 2 years ago
- ☆83 · Updated 2 years ago
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion ☆81 · Updated 2 years ago
- Implements VQGAN+CLIP for image and video generation, and style transfers, based on text and image prompts. Emphasis on ease-of-use, docu… ☆113 · Updated 2 years ago
- Official PyTorch implementation of StyleGAN3 ☆96 · Updated 2 years ago
- AnimationKit: AI Upscaling & Interpolation using Real-ESRGAN+RIFE ☆116 · Updated 2 years ago
- Google Colab notebook for NVIDIA's StyleGAN3 and OpenAI's CLIP for a text-based guided image generation. ☆18 · Updated 2 years ago
- Start here ☆110 · Updated 9 months ago
- StarGAN2 for practice ☆94 · Updated last year
- Create audio reactive videos from stylegan2-ada-pytorch pre-trained networks. ☆50 · Updated 2 years ago
- Deep learning toolkit for image, video, and audio synthesis ☆108 · Updated last year
- Local image generation using VQGAN-CLIP or CLIP guided diffusion ☆101 · Updated 2 years ago
- Use Runway's Stable Diffusion inpainting model to create an infinite loop video. Inspired by https://twitter.com/matthen2/status/15646087… ☆49 · Updated 2 years ago
- High-Resolution Image Synthesis with Latent Diffusion Models