zer0int / CLIP-ViT-visualization
What do CLIP Vision Transformers learn? Feature Visualization can show you!
☆14 · Updated last year
Alternatives and similar repositories for CLIP-ViT-visualization
Users interested in CLIP-ViT-visualization are comparing it to the repositories listed below.
- [CVPR 2024] Official PyTorch implementation of "ECLIPSE: Revisiting the Text-to-Image Prior for Efficient Image Generation" ☆64 · Updated last year
- Official codebase for Margin-aware Preference Optimization for Aligning Diffusion Models without Reference (MaPO) ☆81 · Updated last year
- Implementation for "Correcting Diffusion Generation through Resampling" [CVPR 2024] ☆32 · Updated last year
- Code and data for the paper: SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data ☆35 · Updated last year
- Gradient-Free Textual Inversion for Personalized Text-to-Image Generation ☆43 · Updated 2 years ago
- ☆27 · Updated last year
- Code for the paper "Manipulating Embeddings of Stable Diffusion Prompts" ☆15 · Updated last year
- TerDiT: Ternary Diffusion Models with Transformers ☆71 · Updated last year
- Diffusion attentive attribution maps for interpreting Stable Diffusion for image-to-image attention ☆55 · Updated last month
- Code for the papers "Generating images of rare concepts using pre-trained diffusion models" (AAAI 24) and "Norm-guided latent space exp…" ☆85 · Updated last year
- Official implementation of UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified … ☆70 · Updated 11 months ago
- ☆41 · Updated last year
- ☆32 · Updated 11 months ago
- [ICML2025] LoRA fine-tuning directly on quantized models ☆36 · Updated 11 months ago
- ☆72 · Updated 2 years ago
- Extend BoxDiff to SDXL (SDXL-based layout-to-image generation)