jaycheney / Awesome-CLIP-Few-Shot-Learning
This repository lists some awesome public projects about Zero-shot/Few-shot Learning based on CLIP (Contrastive Language-Image Pre-Training).
☆23 · Updated 5 months ago
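Most projects listed below adapt the same underlying mechanism: CLIP scores an image against a set of class prompts by cosine similarity between unit-normalized embeddings. Below is a minimal NumPy sketch of that zero-shot inference step; the embeddings here are synthetic stand-ins, not real CLIP outputs, and the function name `zero_shot_classify` is illustrative.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=100.0):
    """CLIP-style zero-shot classification: cosine similarity between a
    unit-normalized image embedding and per-class text embeddings,
    scaled by a temperature and softmaxed into class probabilities."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)        # one logit per class prompt
    probs = np.exp(logits - logits.max())     # stable softmax
    return probs / probs.sum()

# Toy embeddings: class 0's "text" embedding is aligned with the image,
# class 1's is random, so class 0 should win.
rng = np.random.default_rng(0)
image = rng.normal(size=512)
texts = np.stack([
    image + 0.1 * rng.normal(size=512),  # e.g. "a photo of a cat"
    rng.normal(size=512),                # e.g. "a photo of a dog"
])
probs = zero_shot_classify(image, texts)
```

Few-shot methods in this list (adapters, prompt tuning, etc.) typically learn small modules on top of these frozen embeddings rather than fine-tuning CLIP itself.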
Alternatives and similar repositories for Awesome-CLIP-Few-Shot-Learning
Users interested in Awesome-CLIP-Few-Shot-Learning are comparing it to the repositories listed below.
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" [AAAI 2025] ☆37 · Updated 2 months ago
- [CVPR 2024] Simple Semantic-Aided Few-Shot Learning ☆40 · Updated 8 months ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆72 · Updated 10 months ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆100 · Updated 5 months ago
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- PyTorch implementation of "Learning Domain-Aware Detection Head with Prompt Tuning" (NeurIPS 2023) ☆20 · Updated last year
- Official PyTorch implementation of ZiRa, a method for incremental vision-language object detection (IVLOD), which has been accepted by Neu… ☆23 · Updated 6 months ago
- Official repository for the CVPR 2024 paper "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆35 · Updated 10 months ago
- 🔥 MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [Official, ICCV 2023] ☆30 · Updated 6 months ago
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated last year
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024) ☆36 · Updated 9 months ago
- PyTorch implementation for the CVPR 2024 paper "Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation" ☆42 · Updated 3 weeks ago
- CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation ☆71 · Updated 9 months ago
- A collection of awesome resources about vision prompts, including papers, code, etc. ☆34 · Updated last year
- Code and dataset for the AAAI 2024 paper "LAMM: Label Alignment for Multi-Modal Prompt Learning" ☆32 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆86 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ☆95 · Updated 9 months ago
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" (ICCV 2023) ☆53 · Updated last year
- PyTorch source code of the ESPT method (AAAI 2023) ☆23 · Updated last year
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners ☆50 · Updated 8 months ago
- [CVPR 2024] Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification ☆31 · Updated last year
- Implementation of "DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations" (NeurIPS 2022) ☆60 · Updated last year
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning ☆14 · Updated last year
- A curated list of papers, datasets, and resources pertaining to zero-shot object detection ☆26 · Updated 2 years ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆71 · Updated 3 months ago