jaycheney / Awesome-CLIP-Few-Shot-Learning
This repository lists some awesome public projects about Zero-shot/Few-shot Learning based on CLIP (Contrastive Language-Image Pre-Training).
☆26 · Updated 9 months ago
Alternatives and similar repositories for Awesome-CLIP-Few-Shot-Learning
Users interested in Awesome-CLIP-Few-Shot-Learning are comparing it to the repositories listed below.
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆44 · Updated 6 months ago
- [CVPR 2024] Simple Semantic-Aided Few-Shot Learning ☆48 · Updated last year
- CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation ☆76 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆107 · Updated 3 months ago
- ☆50 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆101 · Updated last year
- Official PyTorch implementation of ZiRa, a method for incremental vision-language object detection (IVLOD), which has been accepted by Neu… ☆26 · Updated 10 months ago
- ☆26 · Updated last year
- Source code of the paper "Fine-Grained Visual Classification via Internal Ensemble Learning Transformer" ☆52 · Updated last year
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆83 · Updated last year
- [CVPR 2024] Official repository for "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆38 · Updated last year
- [ICCV 2023] Official PyTorch implementation of "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning" ☆71 · Updated last year
- [CVPR 2025] Official PyTorch code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P… ☆74 · Updated 2 months ago
- [ICCV 2023 Main Track, WECIA 2023 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F… ☆275 · Updated last year
- [ICLR 2024] Consistency-guided Prompt Learning for Vision-Language Models ☆81 · Updated last year
- [CVPR 2024] Official implementation of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆125 · Updated 3 weeks ago
- [NeurIPS 2023] PyTorch implementation of "Learning Domain-Aware Detection Head with Prompt Tuning" ☆20 · Updated last year
- [ICLR 2024 Spotlight] Code release of "CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction" ☆192 · Updated last year
- A curated list of papers, datasets, and resources pertaining to zero-shot object detection ☆27 · Updated 2 years ago
- [ICCV 2023] Official code for "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding ☆130 · Updated last month
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆76 · Updated 3 months ago
- [CVPR 2024] Official PyTorch code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆327 · Updated 3 weeks ago
- [AAAI 2025, CVPRW 2024] Official repository of the paper "Learning to Prompt with Text Only Supervision for Vision-Language Models" ☆113 · Updated 9 months ago
- [ICCV 2025] Official PyTorch code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆97 · Updated last week
- A collection of awesome things about vision prompts, including papers, code, etc. ☆36 · Updated last year
- [ICCV 2023] Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning" ☆54 · Updated 2 years ago
- ☆79 · Updated last year
- ☆27 · Updated last year
- 🔥 [ICCV 2023] MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition ☆30 · Updated 10 months ago