jeongukjae / CLIP-self-attention-visualization
Plotting heatmaps from the self-attention of the [CLS] token in the last layer (a minimal code sketch of the idea follows below).
☆49Updated 3 years ago
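The core idea can be sketched in a few lines: run an image through CLIP's vision transformer with attention outputs enabled, take the last layer's attention row for the [CLS] query, and reshape it onto the patch grid as a heatmap. The sketch below uses Hugging Face `transformers` rather than the repository's own code, and the model name and image path are illustrative assumptions.

```python
# Minimal sketch (not the repository's code): visualize the last-layer
# self-attention of the [CLS] token in a CLIP vision transformer.
import torch
import matplotlib.pyplot as plt
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    out = model.vision_model(pixel_values=inputs["pixel_values"],
                             output_attentions=True)

# out.attentions: one (batch, heads, seq, seq) tensor per transformer layer
last_attn = out.attentions[-1][0]        # (heads, seq, seq) for the single image
cls_attn = last_attn.mean(dim=0)[0, 1:]  # average heads, row 0 = [CLS] query,
                                         # drop the [CLS]->[CLS] weight
side = int(cls_attn.numel() ** 0.5)      # 7x7 patch grid for ViT-B/32 at 224px
heatmap = cls_attn.reshape(side, side).numpy()

plt.imshow(heatmap, cmap="viridis")
plt.colorbar()
plt.title("[CLS] self-attention, last layer")
plt.show()
```

Averaging over heads is just one choice; plotting each head separately gives a grid of heatmaps that often highlights different object parts.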
Alternatives and similar repositories for CLIP-self-attention-visualization
Users interested in CLIP-self-attention-visualization are comparing it to the libraries listed below.
- Task Residual for Tuning Vision-Language Models (CVPR 2023)☆75Updated 2 years ago
- Code implementation of our NeurIPS 2023 paper: Vocabulary-free Image Classification☆107Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding☆53Updated 8 months ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?"☆179Updated last year
- [ICLR 2024] Official repository for "Vision-by-Language for Training-Free Compositional Image Retrieval"☆83Updated last year
- ☆199Updated 2 years ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization☆110Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption☆103Updated 2 years ago
- An Enhanced CLIP Framework for Learning with Synthetic Captions☆39Updated 8 months ago
- Code and Models for "GeneCIS: A Benchmark for General Conditional Image Similarity"☆61Updated 2 years ago
- [CVPR 2023 & IJCV 2025] Positive-Augmented Contrastive Learning for Image and Video Captioning Evaluation☆64Updated 5 months ago
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models.☆97Updated 2 months ago
- [BMVC 2023] Zero-shot Composed Text-Image Retrieval☆54Updated last year
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023☆55Updated 2 years ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images.☆30Updated last year
- LLaVA-NeXT-Image-Llama3-Lora, modified from https://github.com/arielnlee/LLaVA-1.6-ft☆45Updated last year
- [CVPR2023] The code for 《Position-guided Text Prompt for Vision-Language Pre-training》☆151Updated 2 years ago
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites"☆287Updated last year
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning☆167Updated 2 years ago
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions☆136Updated 7 months ago
- ☆59Updated 2 years ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23]☆106Updated 2 years ago
- Compress conventional Vision-Language Pre-training data☆53Updated 2 years ago
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models☆109Updated 7 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training"☆39Updated last year
- code for studying OpenAI's CLIP explainability☆37Updated 3 years ago
- 📍 Official pytorch implementation of paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS)☆53Updated 2 years ago
- [CVPR2024 Highlight] Official implementation for Transferable Visual Prompting. The paper "Exploring the Transferability of Visual Prompt…☆46Updated last year
- [ICLR 23] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning☆40Updated 2 years ago
- Visual self-questioning for large vision-language assistant.☆45Updated 5 months ago