muzairkhattak / multimodal-prompt-learning
[CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning".
☆750 · Updated last year
Alternatives and similar repositories for multimodal-prompt-learning
Users interested in multimodal-prompt-learning are comparing it to the repositories listed below
- ☆531 · Updated 2 years ago
- ☆618 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,110 · Updated last year
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,971 · Updated last year
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆310 · Updated 2 months ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆921 · Updated last year
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆550 · Updated last week
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆401 · Updated 8 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆265 · Updated last year
- A survey on multimodal learning research. ☆328 · Updated last year
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆466 · Updated 2 months ago
- Exploring Visual Prompts for Adapting Large-Scale Models ☆280 · Updated 2 years ago
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆370 · Updated 2 years ago
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆292 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆821 · Updated 10 months ago
- Cross-modal few-shot adaptation with CLIP ☆339 · Updated last month
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆280 · Updated last year
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆535 · Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆656 · Updated 2 years ago
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆425 · Updated 2 years ago
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆359 · Updated 2 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆769 · Updated last year
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆422 · Updated 3 months ago
- Code for ALBEF: a new vision-language pre-training method ☆1,661 · Updated 2 years ago
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning. ☆231 · Updated last year
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆165 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆477 · Updated 2 years ago
- ☆191 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,199 · Updated 11 months ago