# jaycheney / Awesome-CLIP-Few-Shot-Learning
This repository lists some awesome public projects about Zero-shot/Few-shot Learning based on CLIP (Contrastive Language-Image Pre-Training).
☆22 · Updated 4 months ago
Alternatives and similar repositories for Awesome-CLIP-Few-Shot-Learning:
Users interested in Awesome-CLIP-Few-Shot-Learning are comparing it to the repositories listed below.
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆69 · Updated 8 months ago
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆34 · Updated 3 weeks ago
- ☆45 · Updated 11 months ago
- ☆24 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆95 · Updated 4 months ago
- Official PyTorch implementation of ZiRa, a method for incremental vision-language object detection (IVLOD), which has been accepted by Neu… ☆23 · Updated 5 months ago
- [CVPR 2024] Simple Semantic-Aided Few-Shot Learning ☆38 · Updated 7 months ago
- [CVPR 2024] PyTorch implementation of "Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation" ☆37 · Updated last month
- [ICCV 2023] Official code for "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆99 · Updated last year
- [CVPR 2023] Task Residual for Tuning Vision-Language Models ☆72 · Updated last year
- [NeurIPS 2023] PyTorch implementation of "Learning Domain-Aware Detection Head with Prompt Tuning" ☆20 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆80 · Updated last year
- [CVPR 2024] Official repository for "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆33 · Updated 9 months ago
- [ICLR 2024] Consistency-guided Prompt Learning for Vision-Language Models ☆69 · Updated 10 months ago
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆84 · Updated 8 months ago
- [CVPR 2024] Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" ☆36 · Updated 8 months ago
- [CVPR 2024] Official implementations of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆104 · Updated 8 months ago
- ☆26 · Updated last year
- [ICCV 2023] 🔥 MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition ☆30 · Updated 5 months ago
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized linear probe (ZSLP) and class-adaptive constraints (CLAP)… ☆69 · Updated 10 months ago
- [CVPR 2024] Official PyTorch implementation of "MMA: Multi-Modal Adapter for Vision-Language Models" ☆56 · Updated 2 months ago
- ☆85 · Updated last year
- [AAAI 2024] Code and dataset for "LAMM: Label Alignment for Multi-Modal Prompt Learning" ☆32 · Updated last year
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated last year
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts ☆37 · Updated 5 months ago
- A collection of awesome things about vision prompts, including papers, code, etc. ☆34 · Updated last year
- ☆33 · Updated last year
- CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation ☆70 · Updated 7 months ago
- cliptrase ☆34 · Updated 7 months ago
- [ICCV 2023] Official PyTorch implementation of "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning" ☆68 · Updated last year
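Most of the repositories above build on CLIP's zero-shot classification recipe: encode one text prompt per class name, encode the image, and pick the class whose text embedding is most similar to the image embedding. A minimal sketch of that scoring step, with random vectors standing in for real CLIP encoder outputs (the temperature value and dimensions are illustrative, not tied to any listed repo):

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs, temperature=100.0):
    """Score one image against class-prompt embeddings, CLIP-style.

    Both inputs are L2-normalized so the dot product is cosine
    similarity; CLIP scales it by a learned temperature before
    the softmax over classes.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * text_embs @ image_emb   # one logit per class
    exp = np.exp(logits - logits.max())            # numerically stable softmax
    return exp / exp.sum()

# Random vectors stand in for real encoder outputs (hypothetical data).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)       # one image embedding
text_embs = rng.normal(size=(3, 512))  # prompt embeddings for 3 class names
probs = zero_shot_scores(image_emb, text_embs)
print(probs.argmax(), probs.sum())     # predicted class index, probs sum to ~1
```

The few-shot methods in the list (adapters, prompt tuning, linear probes) mostly differ in which small set of parameters they learn on top of this frozen scoring pipeline.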