CLIP-Adapter ☆576 (last updated Jul 19, 2022)
Alternatives and similar repositories for CLIP-Adapter
Users interested in CLIP-Adapter are comparing it to the repositories listed below.
- ☆666 (last updated Nov 28, 2023)
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,190 (last updated May 20, 2024)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆381 (last updated Jun 1, 2023)
- [CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning" ☆808 (last updated Jul 24, 2023)
- A PyTorch toolbox for domain generalization, domain adaptation and semi-supervised learning ☆1,419 (last updated Nov 3, 2023)
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆149 (last updated Apr 21, 2024)
- ☆199 (last updated May 10, 2023)
- ☆106 (last updated Dec 7, 2023)
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆211 (last updated Dec 18, 2022)
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆169 (last updated Jul 15, 2023)
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,213 (last updated Sep 2, 2023)
- Robust fine-tuning of zero-shot models ☆760 (last updated Apr 29, 2022)
- ☆59 (last updated Feb 20, 2022)
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆77 (last updated May 27, 2023)
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆545 (last updated Sep 15, 2023)
- Cross-modal few-shot adaptation with CLIP ☆351 (last updated Apr 29, 2025)
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆223 (last updated Dec 16, 2022)
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention ☆93 (last updated Apr 29, 2023)
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆118 (last updated Apr 1, 2022)
- ☆61 (last updated May 2, 2025)
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆174 (last updated Dec 14, 2023)
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 (last updated Jan 11, 2024)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆45 (last updated Jun 14, 2023)
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F…" ☆286 (last updated Sep 28, 2023)
- An open source implementation of CLIP ☆13,528 (last updated Mar 12, 2026)
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆86 (last updated May 24, 2024)
- The multi-view version of MonoDETR on nuScenes dataset ☆21 (last updated Nov 4, 2022)
- ☆27 (last updated Mar 20, 2023)
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆32,861 (last updated Feb 18, 2026)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆677 (last updated Sep 19, 2022)
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆105 (last updated Aug 22, 2023)
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆207 (last updated Oct 21, 2022)
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,230 (last updated Jun 28, 2024)
- Grounded Language-Image Pre-training ☆2,585 (last updated Jan 24, 2024)
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆808 (last updated Mar 20, 2024)
- ☆193 (last updated Oct 22, 2022)
- ☆95 (last updated Sep 23, 2023)
- [CVPR 2022] PointCLIP: Point Cloud Understanding by CLIP ☆408 (last updated Nov 24, 2022)
- Implementation for "DualCoOp: Fast Adaptation to Multi-Label Recognition with Limited Annotations" (NeurIPS 2022) ☆71 (last updated Oct 24, 2023)
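Nearly every repository above builds on the same zero-shot scoring step from CLIP: embed the image and the candidate text prompts, then rank the prompts by cosine similarity to the image embedding. A minimal NumPy sketch of that step is below; the random embeddings and the temperature of 100 are illustrative stand-ins, not real CLIP outputs.

```python
import numpy as np

def clip_style_scores(image_emb, text_embs, temperature=100.0):
    """Softmaxed cosine-similarity scores between one image embedding
    and several candidate text embeddings (CLIP-style zero-shot head)."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)   # one logit per text prompt
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Stand-in embeddings: one image, three candidate prompts.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))
probs = clip_style_scores(image_emb, text_embs)
best = int(np.argmax(probs))  # index of the most relevant prompt
```

Adapter-style methods such as CLIP-Adapter insert a small trainable network on top of these frozen embeddings before the scoring step, which is why the same routine recurs across the listed projects.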