anguyen8 / gScoreCAM (☆51, updated 8 months ago)
Alternatives and similar repositories for gScoreCAM:
Users interested in gScoreCAM are comparing it to the repositories listed below.
- Code and results accompanying the paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" (☆56, updated last year)
- ☆36, updated 2 months ago
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" (☆197, updated 4 months ago)
- ICCV 2023: "CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No" (☆135, updated last year)
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" (☆56, updated last year)
- ☆57, updated last year
- "Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning" (☆150, updated 2 years ago)
- [NeurIPS 2023] "Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization" (☆103, updated last year)
- [ICLR 2023] "PLOT: Prompt Learning with Optimal Transport for Vision-Language Models" (☆159, updated last year)
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" (☆98, updated last year)
- [CVPR 2024] "Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding" (☆45, updated 7 months ago)
- Plotting heatmaps from the self-attention of the [CLS] tokens in the last layer (☆44, updated 2 years ago)
- Code implementation of the NeurIPS 2023 paper "Vocabulary-free Image Classification" (☆107, updated last year)
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" (☆59, updated last year)
- Sparse Linear Concept Embeddings (☆85, updated 7 months ago)
- ☆62, updated last year
- Code for studying OpenAI's CLIP explainability (☆30, updated 3 years ago)
- [ICLR 2024] "Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models" (☆73, updated 8 months ago)
- ☆184, updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" (☆271, updated last year)
- "Learning Bottleneck Concepts in Image Classification" (CVPR 2023) (☆36, updated last year)
- "S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions" (☆47, updated last year)
- Augmenting with Language-guided Image Augmentation (ALIA) (☆75, updated last year)
- Learning to compose soft prompts for compositional zero-shot learning (☆88, updated last year)
- Implementation of "DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations" (NeurIPS 2022) (☆58, updated last year)
- ☆30, updated 3 months ago
- [ICLR 2023] Official PyTorch implementation of "What Do Self-Supervised Vision Transformers Learn?" (☆106, updated last year)
- Code and datasets for the ICCV-W paper 'Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts…' (☆28, updated last year)
- "ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition" (☆36, updated 2 years ago)
- [NeurIPS 2023] Official implementation and model release for the paper "What Makes Good Examples for Visual In-Context Learning?" (☆173, updated last year)