JoaoLages / diffusers-interpret
Diffusers-Interpret 🤗🧨🕵️‍♀️: Model explainability for 🤗 Diffusers. Get explanations for your generated images.
⭐ 280 · Updated 3 years ago
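To make "get explanations for your generated images" concrete, here is a minimal sketch of the explainer-wrapper pattern this library is built around. It assumes the `StableDiffusionPipelineExplainer` entry point and the `token_attributions` / `image` output attributes described in the project's README; those names, the checkpoint id, and the prompt are assumptions to verify against the repository, not confirmed by this listing page.

```python
# Hypothetical sketch: attribute a generated image back to its prompt tokens
# with diffusers-interpret. Class and attribute names are assumed from the
# project's README and may differ between versions.
import torch
from diffusers import StableDiffusionPipeline
from diffusers_interpret import StableDiffusionPipelineExplainer  # assumed entry point

# Any diffusers text-to-image pipeline works as the wrapped model;
# the checkpoint id below is just a placeholder.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

explainer = StableDiffusionPipelineExplainer(pipe)
output = explainer("a corgi wearing a top hat", num_inference_steps=15)

print(output.token_attributions)  # per-token contribution scores (assumed attribute)
output.image                      # the generated image returned alongside its explanation
```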
Alternatives and similar repositories for diffusers-interpret
Users interested in diffusers-interpret are comparing it to the libraries listed below.
- 1.4B latent diffusion model fine tuning · ⭐ 265 · Updated 3 years ago
- ⭐ 355 · Updated 3 years ago
- v objective diffusion inference code for JAX. · ⭐ 214 · Updated 3 years ago
- ⭐ 108 · Updated 3 years ago
- ⭐ 160 · Updated 3 years ago
- Dataset of prompts, synthetic AI generated images, and aesthetic ratings. · ⭐ 424 · Updated 3 years ago
- Diffusion attentive attribution maps for interpreting Stable Diffusion. · ⭐ 785 · Updated last year
- Benchmarking Generative Models with Artworks · ⭐ 235 · Updated 3 years ago
- Official Implementation of Paella https://arxiv.org/abs/2211.07292v2 · ⭐ 748 · Updated 2 years ago
- ⭐ 645 · Updated 2 years ago
- ⭐ 274 · Updated 3 years ago
- [ECCV 2022] Compositional Generation using Diffusion Models · ⭐ 484 · Updated 8 months ago
- A CLI tool/python module for generating images from text using guided diffusion and CLIP from OpenAI. · ⭐ 462 · Updated 3 years ago
- Pytorch implementation of Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors · ⭐ 338 · Updated 3 years ago
- Diffusion Reading Group at EleutherAI · ⭐ 334 · Updated 2 years ago
- Implementation of Parti, Google's pure attention-based text-to-image neural network, in Pytorch · ⭐ 537 · Updated 2 years ago
- MinImagen: A minimal implementation of the Imagen text-to-image model · ⭐ 310 · Updated 2 years ago
- Get hundreds of millions of image+url pairs from the crawling at home dataset and preprocess them · ⭐ 223 · Updated last year
- ⭐ 150 · Updated 2 years ago
- Feed forward VQGAN-CLIP model, where the goal is to eliminate the need for optimizing the latent space of VQGAN for each input prompt · ⭐ 140 · Updated last year
- stable diffusion training · ⭐ 295 · Updated 3 years ago
- Playing around with stable diffusion. Generated images are reproducible because I save the metadata and latent information. You can gener… · ⭐ 206 · Updated 3 years ago
- v objective diffusion inference code for PyTorch. · ⭐ 717 · Updated 3 years ago
- CLOOB Conditioned Latent Diffusion training and inference code · ⭐ 111 · Updated 3 years ago
- Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis · ⭐ 320 · Updated 2 years ago
- Here is a collection of checkpoints for DALLE-pytorch models, from where you can keep on training or start generating images. · ⭐ 146 · Updated 3 years ago
- Implementation of Paint-with-words with Stable Diffusion: method from eDiff-I that lets you generate images from text-labeled segmentation… · ⭐ 645 · Updated 2 years ago
- combination of OpenAI GLIDE and Latent Diffusion · ⭐ 135 · Updated 3 years ago
- Course content and resources for the AIAIART course. · ⭐ 570 · Updated 3 years ago
- Finetune glide-text2im from openai on your own data. · ⭐ 88 · Updated 2 months ago