cvl-umass / AdaptCLIPZS
☆42 · Updated 7 months ago
Alternatives and similar repositories for AdaptCLIPZS
Users interested in AdaptCLIPZS are comparing it to the libraries listed below.
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆106 · Updated last year
- LaFTer: Label-Free Tuning of Zero-shot Classifier using Language and Unlabeled Image Collections (NeurIPS 2023) ☆29 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆169 · Updated last year
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆75 · Updated 2 months ago
- Code for Label Propagation for Zero-shot Classification with Vision-Language Models (CVPR 2024) ☆40 · Updated last year
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆100 · Updated last year
- Code and results accompanying the paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆57 · Updated 2 years ago
- Code and datasets for the ICCV-W paper 'Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts… ☆29 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆59 · Updated 2 years ago
- [NeurIPS 2024] WATT: Weight Average Test-Time Adaptation of CLIP ☆53 · Updated 11 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆85 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆80 · Updated last year
- Official repository for the CVPR 2024 paper "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆37 · Updated last year
- Code implementation of the NeurIPS 2023 paper "Vocabulary-free Image Classification" ☆107 · Updated last year
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆50 · Updated 4 months ago
- ☆54 · Updated last year
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆139 · Updated last year
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models"