donghao51 / Awesome-Multimodal-Adaptation
Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models
☆116 · Updated this week
Alternatives and similar repositories for Awesome-Multimodal-Adaptation
Users interested in Awesome-Multimodal-Adaptation are comparing it to the repositories listed below.
- [CVPR 2025] Official PyTorch Code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: P…" ☆69 · Updated 2 months ago
- [CVPR 2024] Open-Set Domain Adaptation for Semantic Segmentation ☆46 · Updated last year
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆43 · Updated 5 months ago
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆68 · Updated last year
- Collection of Unsupervised Learning Methods for Vision-Language Models (VLMs) ☆36 · Updated this week
- PyTorch implementation for the CVPR 2024 paper "Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation" ☆50 · Updated this week
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆81 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models" ☆78 · Updated 4 months ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆107 · Updated 3 months ago
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias" ☆37 · Updated 8 months ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆53 · Updated last year
- [ICLR 2024] Consistency-guided Prompt Learning for Vision-Language Models ☆80 · Updated last year
- [AAAI 2025] The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" ☆44 · Updated 5 months ago
- ☆50 · Updated last year
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners ☆51 · Updated 11 months ago
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆102 · Updated last year
- ☆19 · Updated 3 months ago
- [CVPR 2024] Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification ☆33 · Updated last year
- ☆15 · Updated last year
- [CVPR 2024] Continual Forgetting for Pre-trained Vision Models ☆67 · Updated last month
- CLIP-Mamba: CLIP Pretrained Mamba Models with OOD and Hessian Evaluation ☆75 · Updated last year
- Official repository of "Mamba-FSCIL: Dynamic Adaptation with Selective State Space Model for Few-Shot Class-Incremental Learning" ☆34 · Updated 3 weeks ago
- [CVPR 2024 Highlight] Novel Class Discovery for Ultra-Fine-Grained Visual Categorization (UFG-NCD) ☆21 · Updated last year
- A collection of awesome resources on vision prompts, including papers, code, etc. ☆36 · Updated last year
- [CVPR 2024] TEA: Test-time Energy Adaptation ☆68 · Updated last year
- [CVPR 2024] Code for the paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" ☆233 · Updated 9 months ago
- ☆45 · Updated 6 months ago
- [ECCV 2024] Towards Multimodal Open-Set Domain Generalization and Adaptation through Self-supervision ☆36 · Updated 3 months ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆87 · Updated last week
- [ECCV 2024] Soft Prompt Generation for Domain Generalization ☆26 · Updated 10 months ago