galatolofederico / clip-glass
Repository for "Generating images from caption and vice versa via CLIP-Guided Generative Latent Space Search"
☆179 · Updated 3 years ago
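The paper title above points at the core idea: search a generator's latent space for an image whose CLIP embedding matches a given caption. Below is a minimal, hypothetical sketch of that idea using plain gradient ascent on CLIP similarity; it is not necessarily the repository's actual search strategy, and the `DummyGenerator`, latent size, prompt, and hyperparameters are placeholder assumptions rather than anything taken from clip-glass.

```python
# Hypothetical sketch of CLIP-guided latent search (not the clip-glass implementation).
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _preprocess = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float().eval()  # keep everything in fp32 for backprop

class DummyGenerator(torch.nn.Module):
    """Placeholder generator: latent vector -> 3x224x224 image in [0, 1].
    A real search would plug in a pretrained GAN (e.g. BigGAN or StyleGAN)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, 3 * 224 * 224)

    def forward(self, z):
        return torch.sigmoid(self.fc(z)).view(-1, 3, 224, 224)

generator = DummyGenerator().to(device)

# Encode the target caption once (the prompt is an arbitrary example).
text = clip.tokenize(["a painting of a lighthouse at sunset"]).to(device)
with torch.no_grad():
    text_feat = clip_model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Search the latent space by gradient ascent on CLIP similarity.
z = torch.randn(1, 128, device=device, requires_grad=True)
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    image = generator(z)                       # candidate image from the current latent
    # A real pipeline would apply CLIP's preprocessing normalization here.
    img_feat = clip_model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()       # maximize cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```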
Alternatives and similar repositories for clip-glass
Users interested in clip-glass are comparing it to the libraries listed below.
- ☆198 · Updated 3 years ago
- Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt ☆137 · Updated last year
- ☆351 · Updated 3 years ago
- ☆151 · Updated last year
- ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis ☆125 · Updated 3 years ago
- A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆462 · Updated 3 years ago
- ☆160 · Updated 3 years ago
- Pytorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors ☆336 · Updated 2 years ago
- Refactoring dalle-pytorch and taming-transformers for TPU VM ☆60 · Updated 3 years ago
- A collection of checkpoints for DALLE-pytorch models, from which you can continue training or start generating images. ☆146 · Updated 2 years ago
- This is a summary of easily available datasets for generalized DALLE-pytorch training. ☆128 · Updated 3 years ago
- Finetune glide-text2im from openai on your own data. ☆89 · Updated 2 years ago
- [ICCV 2021] Aligning Latent and Image Spaces to Connect the Unconnectable ☆259 · Updated 4 years ago
- v objective diffusion inference code for JAX. ☆214 · Updated 3 years ago
- ☆111 · Updated 3 years ago
- code for CLIPDraw ☆139 · Updated 3 years ago
- CLOOB Conditioned Latent Diffusion training and inference code ☆113 · Updated 3 years ago
- Implementation of Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic ☆276 · Updated 2 years ago
- Code for paper LAFITE: Towards Language-Free Training for Text-to-Image Generation (CVPR 2022) ☆183 · Updated 2 years ago
- Using CLIP and StyleGAN to generate faces from prompts. ☆131 · Updated 3 years ago
- A CLI tool for using GLIDE to generate images from text. ☆68 · Updated 3 years ago
- ☆234 · Updated 2 years ago
- Navigating StyleGAN2's w latent space using CLIP ☆56 · Updated 3 years ago
- 1.4B latent diffusion model fine-tuning ☆265 · Updated 3 years ago
- A collection of Jupyter notebooks to play with NVIDIA's StyleGAN3 and OpenAI's CLIP for text-based guided image generation ☆211 · Updated 3 years ago
- L-Verse: Bidirectional Generation Between Image and Text ☆108 · Updated 2 months ago
- Learning to ground explanations of affect for visual art. ☆316 · Updated 4 years ago
- Using pretrained encoders and language models to generate captions from multimedia inputs. ☆97 · Updated 2 years ago
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training. ☆392 · Updated 2 years ago
- A Real-ESRGAN equipped Colab notebook for CLIP Guided Diffusion ☆83 · Updated 3 years ago