sMamooler / CLIP_Explainability
Code for studying OpenAI's CLIP explainability
☆29 · Updated 3 years ago
Alternatives and similar repositories for CLIP_Explainability:
Users interested in CLIP_Explainability are comparing it to the repositories listed below.
- Visual self-questioning for large vision-language assistant. ☆40 · Updated 4 months ago
- (CVPR 2024) MeaCap: Memory-Augmented Zero-shot Image Captioning ☆43 · Updated 6 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆32 · Updated 11 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆26 · Updated 9 months ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆103 · Updated last year
- Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" (published at ICLR 202… ☆58 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆67 · Updated 2 weeks ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆45 · Updated 6 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆80 · Updated 11 months ago
- [CVPR 2024] Retrieval-Augmented Image Captioning with External Visual-Name Memory for Open-World Comprehension ☆45 · Updated 10 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆51 · Updated 3 weeks ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆154 · Updated last year
- ☆89 · Updated last year
- [ICLR 2024] Consistency-guided Prompt Learning for Vision-Language Models ☆66 · Updated 8 months ago
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV 2023] ☆97 · Updated last year
- Composed Video Retrieval ☆49 · Updated 9 months ago
- ☆34 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆75 · Updated 10 months ago
- Awesome List of Vision Language Prompt Papers ☆41 · Updated last year
- NegCLIP. ☆30 · Updated 2 years ago
- Code and results accompanying our paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆55 · Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption ☆97 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆78 · Updated last year
- ☆74 · Updated last year
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆68 · Updated last year
- Context-I2W: Mapping Images to Context-dependent Words for Accurate Zero-Shot Composed Image Retrieval [AAAI 2024 Oral] ☆46 · Updated 2 months ago
- ☆35 · Updated 2 years ago
- [ICLR 2025] VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning ☆43 · Updated 2 weeks ago
- [CVPR 2024] Code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆66 · Updated 4 months ago