minglllli / CLIPFit
[EMNLP 2024] Implementation of vision-language model fine-tuning via simple parameter-efficient modification
☆17 · Updated last year
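For context on what "simple parameter-efficient modification" can mean in practice, below is a minimal, hypothetical sketch: freeze a pretrained CLIP backbone and train only its bias and LayerNorm parameters. It uses the Hugging Face `transformers` CLIP wrapper purely for illustration; the checkpoint name, the exact parameter subset, and the training objective are assumptions, not the CLIPFit repository's actual code.

```python
import torch
from transformers import CLIPModel

# Load a pretrained CLIP checkpoint (checkpoint name chosen only for illustration).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")

# Freeze the whole backbone first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze only bias terms and LayerNorm parameters (a small fraction of the model).
trainable = []
for name, param in model.named_parameters():
    if name.endswith(".bias") or "layer_norm" in name or "layernorm" in name:
        param.requires_grad = True
        trainable.append(param)

num_total = sum(p.numel() for p in model.parameters())
num_train = sum(p.numel() for p in trainable)
print(f"trainable: {num_train:,} / {num_total:,} ({100.0 * num_train / num_total:.2f}%)")

# Optimize only the unfrozen parameters; the downstream objective (e.g. cross-entropy
# over image/class-prompt similarities) is omitted here.
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

With a scheme like this, typically well under 1% of the model's parameters are updated, which is the general spirit of the parameter-efficient repositories listed below.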
Alternatives and similar repositories for CLIPFit
Users that are interested in CLIPFit are comparing it to the libraries listed below
- [AAAI 2024] Prompt-based Distribution Alignment for Unsupervised Domain Adaptation ☆78 · Updated last year
- PyTorch Implementation for InMaP ☆11 · Updated 2 years ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆21 · Updated last year
- ☆39 · Updated 5 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆95 · Updated 9 months ago
- [NeurIPS 2024] Code for Dual Prototype Evolving for Test-Time Generalization of Vision-Language Models ☆45 · Updated 10 months ago
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆114 · Updated last year
- Code of ICLR 2025 paper "DynaPrompt: Dynamic Test-Time Prompt Tuning" ☆21 · Updated last year
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias". ☆44 · Updated last year
- [ECCV 2024] Soft Prompt Generation for Domain Generalization ☆30 · Updated last year
- [CVPR 2024] TEA: Test-time Energy Adaptation ☆71 · Updated last year
- External Knowledge Injection for CLIP-Based Class-Incremental Learning (ICCV 2025) ☆51 · Updated 2 months ago
- Code for our NeurIPS'24 paper ☆38 · Updated last year
- [CVPR 2025] The implementation of the paper "OODD: Test-time Out-of-Distribution Detection with Dynamic Dictionary". ☆18 · Updated 8 months ago
- ☆21 · Updated last month
- Official PyTorch Implementation for Active Prompt Learning in Vision Language Models ☆40 · Updated last year
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆71 · Updated last year
- [CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answeri… ☆47 · Updated 7 months ago
- Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation (CVPR 2023) ☆40 · Updated last year
- Collection of Unsupervised Learning Methods for Vision-Language Models (VLMs) ☆80 · Updated this week
- ID-like Prompt Learning for Few-Shot Out-of-Distribution Detection ☆26 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆56 · Updated last year
- ☆56 · Updated last year
- [CVPR 2024] Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification ☆39 · Updated last year
- Official implementations of our LaZSL (ICCV'25) ☆39 · Updated 6 months ago
- ☆49 · Updated 11 months ago
- [CVPR 2024 Oral] Official code for LTGC: Long-Tail Recognition via Leveraging LLMs-driven Generated Content ☆22 · Updated last year
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts. ☆50 · Updated 6 months ago
- Code Release for "CLIPood: Generalizing CLIP to Out-of-Distributions" (ICML 2023), https://arxiv.org/abs/2302.00864 ☆70 · Updated 2 years ago
- The official PyTorch implementation of the CVPR 2025 paper "Language Guided Concept Bottleneck Models for Interpretable Continual Learning" ☆30 · Updated 7 months ago