SelfishGene / visual_taste_approximator
Visual Taste Approximator (VTA) is a simple tool that helps anyone create an automatic replica of themselves that approximates their personal visual taste.
☆39 · Updated 2 years ago
Alternatives and similar repositories for visual_taste_approximator
Users interested in visual_taste_approximator are comparing it to the libraries listed below.
- StyleGAN2-ADA trained on a dataset of 2000+ sneaker images ☆21 · Updated 3 years ago
- Training simple models to predict CLIP image embeddings from text embeddings, and vice versa. ☆60 · Updated 3 years ago
- Checkpoints for GLIDE finetuned on LAION and other datasets. WIP. ☆50 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Unified API to facilitate usage of pre-trained "perceptor" models, à la CLIP ☆38 · Updated 2 years ago
- OpenAI guided diffusion tweaks ☆52 · Updated 2 years ago
- High-Resolution Image Synthesis with Latent Diffusion Models ☆61 · Updated 3 years ago
- Cheap views of intermediate Stable Diffusion results ☆46 · Updated 2 years ago
- Majesty Diffusion by @Dango233 and @apolinario (@multimodalart) ☆25 · Updated 2 years ago
- Generate images from texts. In Russian ☆19 · Updated 3 years ago
- Jupyter/Colab implementation of Stable Diffusion using the k_lms sampler, CPU draw, manual seeding, and a quantize.py fix ☆38 · Updated 2 years ago
- Repository for "Fantastic Style Channels and Where to Find Them: A Submodular Framework for Discovering Diverse Directions in GANs" ☆28 · Updated 3 years ago
- Authors' official PyTorch implementation of "ContraCLIP: Interpretable GAN generation driven by pairs of contrasting sentences" ☆42 · Updated 2 years ago
- VQGAN+CLIP with some additional tuning. For notebooks and the command line. ☆50 · Updated 3 years ago
- Finetune the 1.4B latent diffusion text2img-large checkpoint from CompVis using DeepSpeed. (work in progress) ☆36 · Updated 3 years ago
- Inverts CLIP text embeds to image embeds and visualizes with deep-image-prior. ☆35 · Updated 2 years ago
- Repo for structured dreaming ☆55 · Updated 3 years ago
- stylegan3_blending ☆39 · Updated 3 years ago
- Yet Another Diffusion Automation ☆13 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Tools for smoothly interpolating between prompts for Stable Diffusion models ☆58 · Updated 2 years ago
- Generate images from an initial frame and text ☆37 · Updated last year
- A notebook for text-based guided image generation using StyleGANXL and CLIP. ☆59 · Updated 2 years ago
- Neural style transfer ☆21 · Updated 3 years ago
- GPU-accelerated Perlin noise in Python ☆9 · Updated 4 years ago
- Image restoration with neural networks but without learning. ☆46 · Updated 3 years ago
- Floral Diffusion is a custom diffusion model trained by jags using a DD 5.6 version ☆25 · Updated 2 years ago
- Official implementation of "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" ☆70 · Updated 3 years ago
- Various notebooks of experiments with CLIP performed by different people ☆26 · Updated 4 years ago
- Repository with which to explore k-diffusion and diffusers, and within which changes to said packages may be tested. ☆53 · Updated last year