MarcLafon / gallop
Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts.
☆38 · Updated 6 months ago
Alternatives and similar repositories for gallop:
Users interested in gallop are comparing it to the repositories listed below.
- ☆45 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆62 · Updated 2 months ago
- Code and dataset for the AAAI 2024 paper "LAMM: Label Alignment for Multi-Modal Prompt Learning". ☆32 · Updated last year
- ☆15 · Updated last year
- [NeurIPS 2023] Meta-Adapter. ☆48 · Updated last year
- The official implementation of the CVPR 2024 paper "Learning Transferable Negative Prompts for Out-of-Distribution Detection". ☆53 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning. ☆92 · Updated 8 months ago
- ☆24 · Updated last year
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias". ☆34 · Updated 3 months ago
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning". ☆35 · Updated last month
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts". ☆99 · Updated last year
- [CVPR 2024] Simple Semantic-Aided Few-Shot Learning. ☆40 · Updated 7 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models. ☆160 · Updated last year
- [ICLR 2025] Official implementation of Local-Prompt: Extensible Local Prompts for Few-Shot Out-of-Distribution Detection. ☆22 · Updated last week
- A collection of awesome resources on vision prompts, including papers, code, etc. ☆34 · Updated last year
- ☆10 · Updated last year
- 🔥 [ICCV 2023] MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition. ☆30 · Updated 5 months ago
- CoLeCLIP: Open-Domain Continual Learning via Joint Task Prompt and Vocabulary Learning. ☆14 · Updated last year
- ☆13 · Updated last month
- Code release for Proto-CLIP: Vision-Language Prototypical Network for Few-Shot Learning. ☆42 · Updated 3 months ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning". ☆97 · Updated 5 months ago
- [CVPR 2024 Highlight] Novel Class Discovery for Ultra-Fine-Grained Visual Categorization (UFG-NCD). ☆18 · Updated 9 months ago
- A summary of research on noisy correspondence. There may be omissions; if anything is missing, please get in touch with us. Our em… ☆48 · Updated 3 weeks ago
- Official repository for the CVPR 2024 paper "Large Language Models are Good Prompt Learners for Low-Shot Image Classification". ☆34 · Updated 9 months ago
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models". ☆86 · Updated 9 months ago
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models. ☆37 · Updated last month
- Official implementation of the paper "Multimodal Parameter-Efficient Few-Shot Class Incremental Learning". ☆22 · Updated last year
- Code release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864. ☆65 · Updated last year
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners. ☆50 · Updated 7 months ago
- [ICLR 2024] Consistency-guided Prompt Learning for Vision-Language Models. ☆72 · Updated 10 months ago