muzairkhattak / multimodal-prompt-learning
[CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning".
☆684 Updated last year
Alternatives and similar repositories for multimodal-prompt-learning:
Users interested in multimodal-prompt-learning are comparing it to the libraries listed below.
- ☆480 Updated 2 years ago
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,064 Updated last year
- ☆564 Updated last year
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,815 Updated 6 months ago
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆904 Updated 11 months ago
- A curated list of awesome prompt/adapter learning methods for vision-language models like CLIP. ☆337 Updated this week
- This repo lists relevant papers summarized in our survey paper: A Systematic Survey of Prompt Engineering on Vision-Language Foundation … ☆405 Updated last month
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆393 Updated 2 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆242 Updated last year
- [CVPR 2024] Official PyTorch Code for "PromptKD: Unsupervised Prompt Distillation for Vision-Language Models" ☆249 Updated last week
- A Survey on multimodal learning research. ☆320 Updated last year
- Recent Advances in Vision and Language Pre-training (VLP) ☆289 Updated last year
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆523 Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆737 Updated 4 months ago
- Exploring Visual Prompts for Adapting Large-Scale Models ☆269 Updated 2 years ago
- [NeurIPS 2022] Implementation of "AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition" ☆330 Updated 2 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆640 Updated 2 years ago
- ☆472 Updated last month
- Code for ALBEF: a new vision-language pre-training method ☆1,583 Updated 2 years ago
- Cross-modal few-shot adaptation with CLIP ☆325 Updated 9 months ago
- [NeurIPS 2023] Text data, code and pre-trained models for paper "Improving CLIP Training with Language Rewrites" ☆262 Updated 11 months ago
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆282 Updated 2 weeks ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆847 Updated 6 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆499 Updated last week
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23 ☆182 Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆724 Updated 8 months ago
- [ICLR'23] AIM: Adapting Image Models for Efficient Video Action Recognition ☆278 Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,150 Updated 5 months ago
- Code for TCL: Vision-Language Pre-Training with Triple Contrastive Learning, CVPR 2022 ☆262 Updated 2 months ago
- Cross-Modal Implicit Relation Reasoning and Aligning for Text-to-Image Person Retrieval (CVPR 2023) ☆210 Updated 8 months ago