CLIP-Adapter (☆574, last updated Jul 19, 2022)
Alternatives and similar repositories for CLIP-Adapter
Users interested in CLIP-Adapter are comparing it to the libraries listed below.
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) (☆2,182, updated May 20, 2024)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (☆381, updated Jun 1, 2023)
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning" (☆808, updated Jul 24, 2023)
- A PyTorch toolbox for domain generalization, domain adaptation, and semi-supervised learning (☆1,416, updated Nov 3, 2023)
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" (☆148, updated Apr 21, 2024)
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) (☆209, updated Dec 18, 2022)
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning (☆168, updated Jul 15, 2023)
- Robust fine-tuning of zero-shot models (☆760, updated Apr 29, 2022)
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 (☆1,215, updated Sep 2, 2023)
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting (☆544, updated Sep 15, 2023)
- Task Residual for Tuning Vision-Language Models (CVPR 2023) (☆76, updated May 27, 2023)
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet (☆224, updated Dec 16, 2022)
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention (☆93, updated Apr 29, 2023)
- Cross-modal few-shot adaptation with CLIP (☆350, updated Apr 29, 2025)
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) (☆118, updated Apr 1, 2022)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (☆44, updated Jun 14, 2023)
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models (☆21, updated Jan 11, 2024)
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm (☆675, updated Sep 19, 2022)
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models (☆84, updated May 24, 2024)
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models (☆175, updated Dec 14, 2023)
- An open-source implementation of CLIP (☆13,430, updated this week)
- Grounded Language-Image Pre-training (☆2,575, updated Jan 24, 2024)
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…" (☆285, updated Sep 28, 2023)
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 (☆421, updated Oct 28, 2022)
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) (☆208, updated Oct 21, 2022)
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" (☆807, updated Mar 20, 2024)
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) (☆1,232, updated Jun 28, 2024)
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image (☆32,642, updated Feb 18, 2026)
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] (☆106, updated Aug 22, 2023)
- Official PyTorch implementation of "Extract Free Dense Labels from CLIP" (ECCV 2022 Oral) (☆471, updated Sep 19, 2022)
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [tech report] Convpass (☆198, updated Aug 1, 2023)
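Many of the adapter-style repositories above share the same core idea as CLIP-Adapter: keep the CLIP backbone frozen and pass its feature through a small learnable bottleneck MLP whose output is residually blended back into the original feature. A minimal NumPy sketch of that blending step (the dimensions, weight scales, and blend ratio here are illustrative assumptions, not any repository's actual hyperparameters):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_adapter(feat, W_down, W_up, ratio=0.2):
    """Residual adapter in the style of CLIP-Adapter (conceptual sketch).

    feat:   (d,) L2-normalized CLIP image or text embedding (frozen backbone output).
    W_down: (d, r) down-projection; W_up: (r, d) up-projection, with r << d.
    ratio:  residual blend weight (illustrative value; tuned per task in practice).
    """
    adapted = relu(feat @ W_down) @ W_up          # small bottleneck MLP
    out = ratio * adapted + (1.0 - ratio) * feat  # residual blend with frozen feature
    return out / np.linalg.norm(out)              # re-normalize for cosine similarity

# Toy usage: d=8 feature, r=2 bottleneck (real CLIP features are e.g. 512-d)
rng = np.random.default_rng(0)
d, r = 8, 2
feat = rng.normal(size=d)
feat /= np.linalg.norm(feat)
out = residual_adapter(feat, rng.normal(size=(d, r)) * 0.1, rng.normal(size=(r, d)) * 0.1)
print(out.shape)
```

Only `W_down` and `W_up` would be trained; the frozen feature path is why these methods stay parameter-efficient compared with full fine-tuning.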