Zasder3 / CLIP-Style-Transfer
Doing style transfer with linguistic features using OpenAI's CLIP.
☆13 · Updated 4 years ago
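In its simplest form, CLIP-guided style transfer optimizes an image so that its CLIP embedding moves toward the embedding of a text prompt describing the target style. The sketch below illustrates that general idea only; it is not the repository's code, and the model variant, prompt, file name `content.jpg`, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch of CLIP-guided style transfer (not the repository's code):
# optimize the pixels of a content image so its CLIP embedding moves toward
# the embedding of a style-describing text prompt. Prompt, model variant,
# learning rate, and step count are illustrative assumptions.
import torch
import clip
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device, jit=False)
model = model.float()  # keep fp32 so pixel gradients stay stable

# Encode the target style description once.
tokens = clip.tokenize(["a painting in the style of Van Gogh"]).to(device)
with torch.no_grad():
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Start from the content image and optimize its pixels directly.
to_tensor = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
img = to_tensor(Image.open("content.jpg").convert("RGB")).unsqueeze(0).to(device)
img = img.clone().requires_grad_(True)

clip_norm = transforms.Normalize(
    (0.48145466, 0.4578275, 0.40821073),
    (0.26862954, 0.26130258, 0.27577711),
)
optimizer = torch.optim.Adam([img], lr=0.02)

for step in range(200):
    optimizer.zero_grad()
    img_feat = model.encode_image(clip_norm(img.clamp(0, 1)))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    # Push image and text embeddings together (maximize cosine similarity).
    loss = -(img_feat * text_feat).sum()
    loss.backward()
    optimizer.step()
```

Practical CLIP-guided pipelines usually add a content-preservation loss, total-variation regularization, and random augmentations before the CLIP encoder to keep the optimization from producing adversarial patterns; those are omitted here for brevity.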
Alternatives and similar repositories for CLIP-Style-Transfer
Users interested in CLIP-Style-Transfer are comparing it to the libraries listed below.
- ☆14 · Updated 3 years ago
- Repository for "Fantastic Style Channels and Where to Find Them: A Submodular Framework for Discovering Diverse Directions in GANs" · ☆28 · Updated 3 years ago
- Google Colab notebooks · ☆43 · Updated 8 months ago
- Navigating StyleGAN2's W latent space using CLIP · ☆56 · Updated 3 years ago
- Unified API to facilitate usage of pre-trained "perceptor" models, a la CLIP · ☆38 · Updated 2 years ago
- Finetune the 1.4B latent diffusion text2img-large checkpoint from CompVis using DeepSpeed (work in progress) · ☆36 · Updated 3 years ago
- Training simple models to predict CLIP image embeddings from text embeddings, and vice versa · ☆60 · Updated 3 years ago
- Checkpoints for GLIDE finetuned on LAION and other datasets (WIP) · ☆50 · Updated 2 years ago
- Neural style transfer · ☆21 · Updated 3 years ago
- Script and models for clustering LAION-400m CLIP embeddings · ☆26 · Updated 3 years ago
- Refactoring dalle-pytorch and taming-transformers for TPU VM · ☆60 · Updated 3 years ago
- Majesty Diffusion by @Dango233 and @apolinario (@multimodalart) · ☆25 · Updated 2 years ago
- [ICLR'23] Code to reproduce the results in the paper "PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs" · ☆58 · Updated last year
- Implementation of Taming Transformers for High-Resolution Image Synthesis (https://arxiv.org/abs/2012.09841) in PyTorch · ☆16 · Updated 4 years ago
- StyleGAN2 - Official TensorFlow Implementation with practical improvements · ☆11 · Updated 5 years ago
- Authors' official PyTorch implementation of the paper "ContraCLIP: Interpretable GAN generation driven by pairs of contrasting sentences" · ☆42 · Updated 2 years ago
- ☆27 · Updated 3 years ago
- PyTorch implementation of Contrastive Feature Loss for Image Prediction (AIM Workshop at ICCV 2021) · ☆53 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- ☆34 · Updated 2 years ago
- ☆19 · Updated 3 years ago
- ☆13 · Updated 4 years ago
- Generate images from text, in Russian · ☆19 · Updated 3 years ago
- Slight modifications to the official StyleGAN2 implementation · ☆49 · Updated 4 years ago
- This repo contains various notebooks of experiments with CLIP performed by different people · ☆26 · Updated 4 years ago
- Jupyter/Colab implementation of Stable Diffusion using the k_lms sampler, CPU-draw manual seeding, and a quantize.py fix · ☆38 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- Inverts CLIP text embeds to image embeds and visualizes with deep-image-prior · ☆35 · Updated 2 years ago
- Train a StyleGAN2-ADA model on Colaboratory to generate Steam banners · ☆32 · Updated last year
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion · ☆82 · Updated 3 years ago