ZjjConan / VLM-MultiModalAdapter
The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models".
☆ 82 · Updated 6 months ago
Alternatives and similar repositories for VLM-MultiModalAdapter
Users interested in VLM-MultiModalAdapter are comparing it to the repositories listed below.
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of paper titled "Self-regulating Prompts: Foundational Model Adaptation without F… ☆ 277 · Updated 2 years ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆ 53 · Updated last year
- [AAAI'25, CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆ 113 · Updated 10 months ago
- [ICCV 2025] Official PyTorch code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆ 102 · Updated last week
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆ 108 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models ☆ 82 · Updated last year
- ☆ 50 · Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆ 74 · Updated 8 months ago
- Official code for ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆ 102 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ☆ 100 · Updated 3 months ago
- ☆ 27 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆ 108 · Updated 4 months ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆ 85 · Updated last year
- Easy wrapper for inserting LoRA layers in CLIP. ☆ 40 · Updated last year
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆ 43 · Updated 7 months ago
- ☆ 48 · Updated 8 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts. ☆ 49 · Updated 3 months ago
- [CVPR 2024] Code for the paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" ☆ 253 · Updated last month
- The official implementation of the CVPR 2024 paper "Learning Transferable Negative Prompts for Out-of-Distribution Detection" ☆ 58 · Updated last year
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆ 31 · Updated last month
- Awesome list for VLM-CL. Continual Learning for VLMs: A Survey and Taxonomy Beyond Forgetting ☆ 97 · Updated last week
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆ 93 · Updated last year
- PyTorch implementation of "Test-Time Adaptation against Multi-modal Reliability Bias". ☆ 42 · Updated 10 months ago
- ☆ 103 · Updated last year
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆ 54 · Updated 6 months ago
- ☆ 57 · Updated 2 weeks ago
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆ 45 · Updated 7 months ago
- [CVPR 2024] Official repository for "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆ 38 · Updated last year
- ☆ 22 · Updated last year
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆ 47 · Updated 7 months ago