gaopengcuhk / Tip-Adapter
☆661 · Updated Nov 28, 2023
Alternatives and similar repositories for Tip-Adapter
Users interested in Tip-Adapter also compare it to the repositories listed below.
- ☆572 · Updated Jul 19, 2022
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆381 · Updated Jun 1, 2023
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,173 · Updated May 20, 2024
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆148 · Updated Apr 21, 2024
- ☆200 · Updated May 10, 2023
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆106 · Updated Aug 22, 2023
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning" ☆803 · Updated Jul 24, 2023
- [AAAI 2023] Zero-Shot Enhancement of CLIP with Parameter-free Attention ☆93 · Updated Apr 29, 2023
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,214 · Updated Sep 2, 2023
- Cross-modal few-shot adaptation with CLIP ☆349 · Updated Apr 29, 2025
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆44 · Updated Jun 14, 2023
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆542 · Updated Sep 15, 2023
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆209 · Updated Dec 18, 2022
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆76 · Updated May 27, 2023
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆103 · Updated Mar 6, 2024
- ☆105 · Updated Dec 7, 2023
- A PyTorch toolbox for domain generalization, domain adaptation, and semi-supervised learning ☆1,415 · Updated Nov 3, 2023
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆167 · Updated Jul 15, 2023
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆85 · Updated Apr 21, 2024
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper "Self-regulating Prompts: Foundational Model Adaptation without F…" ☆284 · Updated Sep 28, 2023
- Exploring Visual Prompts for Adapting Large-Scale Models ☆287 · Updated Jun 6, 2022
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆80 · Updated Jun 7, 2025
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated Nov 21, 2023
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Updated Jan 11, 2024
- Grounded Language-Image Pre-training ☆2,573 · Updated Jan 24, 2024
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆175 · Updated Dec 14, 2023
- ☆95 · Updated Sep 23, 2023
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆207 · Updated Oct 21, 2022
- ☆61 · Updated May 2, 2025
- ☆175 · Updated Dec 29, 2023
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 · Updated Dec 16, 2022
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆807 · Updated Mar 20, 2024
- A curated list of prompt-based papers in computer vision and vision-language learning ☆928 · Updated Dec 18, 2023
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [Tech report] Convpass ☆198 · Updated Aug 1, 2023
- The official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆118 · Updated Apr 1, 2022
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆114 · Updated Jul 15, 2024
- An open-source implementation of CLIP ☆13,353 · Updated Nov 4, 2025
- [CVPR 2023] Code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆151 · Updated Jun 7, 2023
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆673 · Updated Sep 19, 2022