mehdidc / feed_forward_vqgan_clip
Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt
☆140 · Updated Jan 3, 2024
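For readers new to the idea: classic VQGAN+CLIP runs a gradient-based optimization loop over the VQGAN latent for every prompt, whereas the feed-forward approach trains a network once so a CLIP text embedding is mapped to VQGAN latents in a single forward pass. The sketch below only illustrates that idea under assumed shapes and module names (`TextToLatent`, `clip_model`, and `vqgan` are hypothetical); it is not the repository's actual code or API.

```python
import torch
import torch.nn as nn

class TextToLatent(nn.Module):
    """Maps a CLIP text embedding to a VQGAN latent grid in a single forward pass."""
    def __init__(self, clip_dim=512, latent_channels=256, latent_size=16):
        super().__init__()
        self.latent_channels = latent_channels
        self.latent_size = latent_size
        self.net = nn.Sequential(
            nn.Linear(clip_dim, 1024),
            nn.GELU(),
            nn.Linear(1024, latent_channels * latent_size * latent_size),
        )

    def forward(self, text_emb):
        # text_emb: (batch, clip_dim) CLIP text embedding
        z = self.net(text_emb)
        return z.view(-1, self.latent_channels, self.latent_size, self.latent_size)

# Classic VQGAN+CLIP optimizes z with gradient descent for every prompt;
# a trained mapper instead produces z in one pass, so generation is a single forward call.
# text_emb = clip_model.encode_text(tokens)   # hypothetical CLIP encoder call
# z = TextToLatent()(text_emb)                # predicted VQGAN latents
# image = vqgan.decode(z)                     # hypothetical VQGAN decoder call
```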
Alternatives and similar repositories for feed_forward_vqgan_clip
Users interested in feed_forward_vqgan_clip are comparing it to the repositories listed below.
- ☆20 · Updated Aug 19, 2021
- Training simple models to predict CLIP image embeddings from text embeddings, and vice versa. ☆60 · Updated Mar 31, 2022
- Majesty Diffusion by @Dango233 and @apolinario (@multimodalart) ☆25 · Updated Jul 26, 2022
- RUDOLPH: One Hyper-Tasking Transformer can be as creative as DALL-E and GPT-3 and as smart as CLIP ☆254 · Updated Feb 6, 2023
- ☆30 · Updated Nov 25, 2021
- ☆64 · Updated Nov 4, 2021
- Script and models for clustering LAION-400m CLIP embeddings. ☆26 · Updated Jan 10, 2022
- checkpoints for glide finetuned on laion and other datasets. wip. ☆50 · Updated Aug 17, 2022
- A CLIP conditioned Decision Transformer. ☆22 · Updated Jul 14, 2021
- Visual search interface ☆11 · Updated Nov 30, 2021
- neural image generation ☆401 · Updated Dec 23, 2021
- ☆48 · Updated Aug 2, 2021
- v objective diffusion inference code for PyTorch. ☆718 · Updated Nov 29, 2022
- A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI. ☆460 · Updated Dec 31, 2025
- Styled text-to-drawing synthesis method. Featured at IJCAI 2022 and the 2021 NeurIPS Workshop on Machine Learning for Creativity and Desi… ☆283 · Updated Nov 15, 2022
- Contrastive Language-Image Pretraining ☆143 · Updated Sep 6, 2022
- v objective diffusion inference code for JAX. ☆215 · Updated Apr 14, 2022
- jax version of clip guided diffusion scripts ☆90 · Updated Jan 11, 2024
- ☆160 · Updated Jun 13, 2022
- combination of OpenAI GLIDE and Latent Diffusion ☆136 · Updated Apr 7, 2022
- CLIP + VQGAN / PixelDraw ☆284 · Updated Dec 6, 2021
- ☆354 · Updated May 10, 2022
- Neural style transfer ☆21 · Updated Jul 29, 2021
- Deep learning toolkit for image, video, and audio synthesis ☆107 · Updated Dec 22, 2022
- A CLI tool for using GLIDE to generate images from text. ☆67 · Updated May 5, 2022
- Doing style transfer with linguistic features using OpenAI's CLIP. ☆14 · Updated May 4, 2021
- ☆34 · Updated Jul 28, 2022
- Home of `erlich` and `ongo`. Finetune latent-diffusion/glid-3-xl text2image on your own data. ☆181 · Updated Aug 5, 2022
- ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis ☆125 · Updated Mar 14, 2022
- 1.4B latent diffusion model fine tuning ☆265 · Updated May 16, 2022
- Majesty Diffusion by @Dango233 (@Dango233max) and @apolinario (@multimodalart) ☆276 · Updated Jul 25, 2022
- Image restoration with neural networks but without learning. ☆46 · Updated May 12, 2022
- Unified API to facilitate usage of pre-trained "perceptor" models, a la CLIP ☆39 · Updated Nov 26, 2022
- Colab notebook to finetune GLIDE. ☆12 · Updated Mar 22, 2022
- ☆14 · Updated Feb 24, 2021
- L-Verse: Bidirectional Generation Between Image and Text ☆107 · Updated Apr 1, 2025
- ☆195 · Updated Dec 7, 2021
- A simple library that implements CLIP guided loss in PyTorch. ☆77 · Updated Dec 25, 2021
- Just playing with getting CLIP Guided Diffusion running locally, rather than having to use colab. ☆385 · Updated Aug 29, 2022