zer0int / CLIP-ViT-visualization
What do CLIP Vision Transformers learn? Feature Visualization can show you!
☆13 · Updated last year
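This page does not describe how the repository produces its visualizations; as a rough, hypothetical sketch of the general technique (activation maximization against a CLIP vision transformer), the snippet below assumes PyTorch and the Hugging Face `transformers` CLIP vision model, with arbitrary example choices for the target layer and channel.

```python
# Rough sketch only: not the repository's actual code. Illustrates feature
# visualization of a CLIP ViT via activation maximization, assuming PyTorch and
# the Hugging Face `transformers` CLIPVisionModel; layer/channel are arbitrary.
import torch
from transformers import CLIPVisionModel

model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32").eval()
for p in model.parameters():
    p.requires_grad_(False)

# CLIP's preprocessing statistics, so the optimized image stays in-distribution.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

# Optimize the input image itself, starting from noise.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

layer, channel = 8, 42  # hypothetical targets: hidden-state layer and feature channel

for step in range(200):
    optimizer.zero_grad()
    pixel_values = (image.clamp(0, 1) - mean) / std
    outputs = model(pixel_values=pixel_values, output_hidden_states=True)
    # hidden_states[layer] has shape (batch, tokens, dim); maximize one channel
    # averaged over the patch tokens (index 0 is the CLS token, so skip it).
    activation = outputs.hidden_states[layer][0, 1:, channel].mean()
    loss = -activation
    loss.backward()
    optimizer.step()

# `image` now roughly shows what excites that channel; real visualization tools
# typically add jitter, multi-scale rendering, and regularization for cleaner results.
```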
Alternatives and similar repositories for CLIP-ViT-visualization
Users interested in CLIP-ViT-visualization are comparing it to the libraries listed below.
- Official codebase for Margin-aware Preference Optimization for Aligning Diffusion Models without Reference (MaPO). ☆82 · Updated last year
- [CVPR 2024] Official PyTorch implementation of "ECLIPSE: Revisiting the Text-to-Image Prior for Efficient Image Generation" ☆65 · Updated last year
- Code and Data for Paper: SELMA: Learning and Merging Skill-Specific Text-to-Image Experts with Auto-Generated Data ☆34 · Updated last year
- Implementation for "Correcting Diffusion Generation through Resampling" [CVPR 2024] ☆33 · Updated last year
- ☆28 · Updated last year
- Code for the paper "Manipulating Embeddings of Stable Diffusion Prompts". ☆14 · Updated last year
- [AAAI 2025] Does VLM Classification Benefit from LLM Description Semantics? ☆22 · Updated last month
- The official repository of paper "ScaleLong: Towards More Stable Training of Diffusion Model via Scaling Network Long Skip Connection" (N… ☆50 · Updated last year
- CLIP Guided Diffusion ☆69 · Updated last year
- [ICML 2025] LoRA fine-tune directly on the quantized models. ☆35 · Updated 9 months ago
- Gradient-Free Textual Inversion for Personalized Text-to-Image Generation ☆43 · Updated 2 years ago
- ☆32 · Updated 10 months ago
- ☆70 · Updated 10 months ago
- Extend BoxDiff to SDXL (SDXL-based layout-to-image generation) ☆24 · Updated last year
- This repository includes the official implementation of our paper "Grouping First, Attending Smartly: Training-Free Acceleration for Diff… ☆52 · Updated 4 months ago
- Sparse Autoencoders for Stable Diffusion XL models. ☆69 · Updated last month
- Official implementation of "Art-Free Generative Models: Art Creation Without Graphic Art Knowledge" ☆31 · Updated 5 months ago
- Official implementation of UniCtrl: Improving the Spatiotemporal Consistency of Text-to-Video Diffusion Models via Training-Free Unified … ☆69 · Updated 9 months ago
- Training code for CLIP-FlanT5 ☆29 · Updated last year
- ☆73 · Updated 2 years ago
- [ICLR 2025] Official PyTorch implementation of paper "T-Stitch: Accelerating Sampling in Pre-trained Diffusion Models with Trajectory Stit… ☆103 · Updated last year
- Public code release for the paper "ProCreate, Don’t Reproduce! Propulsive Energy Diffusion for Creative Generation" ☆41 · Updated 2 months ago
- A demo for the Direct Ascent Synthesis: Hidden Generative Capabilities in Discriminative Models paper (https://arxiv.org/abs/2502.07753) ☆40 · Updated 6 months ago
- ☆10 · Updated last year
- Diffusion attentive attribution maps for interpreting Stable Diffusion for image-to-image attention. ☆55 · Updated 8 months ago
- Code for our papers: "Generating images of rare concepts using pre-trained diffusion models" (AAAI 24) and "Norm-guided latent space exp… ☆86 · Updated last year
- ☆40 · Updated last year
- TerDiT: Ternary Diffusion Models with Transformers ☆71 · Updated last year
- ☆56 · Updated last month
- DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models ☆46 · Updated last year