donghao51 / Awesome-Multimodal-Adaptation
Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models
☆85 · Updated this week
Alternatives and similar repositories for Awesome-Multimodal-Adaptation
- [CVPR 2024] Open-Set Domain Adaptation for Semantic Segmentation ☆41 · Updated 9 months ago
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias" ☆35 · Updated 4 months ago
- Code for CVPR 2025 "MMRL: Multi-Modal Representation Learning for Vision-Language Models" ☆33 · Updated last month
- ☆15 · Updated last year
- Official implementation of ResCLIP: Residual Attention for Training-free Dense Vision-language Inference ☆33 · Updated 2 months ago
- [ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation ☆64 · Updated last year
- [ECCV 2024] Towards Multimodal Open-Set Domain Generalization and Adaptation through Self-supervision ☆30 · Updated 3 months ago
- Generalized Out-of-Distribution Detection and Beyond in Vision Language Model Era: A Survey [Miyai+, arXiv 2024] ☆87 · Updated 3 months ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆72 · Updated 10 months ago
- [CVPR 2024] TEA: Test-time Energy Adaptation ☆64 · Updated last year
- PyTorch implementation for the CVPR 2024 paper "Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation" ☆42 · Updated 3 weeks ago
- A curated list of awesome out-of-distribution detection resources ☆41 · Updated this week
- [CVPR 2024] Leveraging Vision-Language Models for Improving Domain Generalization in Image Classification ☆31 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆47 · Updated 10 months ago
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆101 · Updated last year
- [CVPR 2024] PriViLege: Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners ☆50 · Updated 8 months ago
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆100 · Updated 5 months ago
- [NeurIPS 2023] SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization ☆61 · Updated 3 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts ☆41 · Updated 7 months ago
- Domain Generalization through Distilling CLIP with Language Guidance ☆29 · Updated last year
- PyTorch implementation of "AETTA: Label-Free Accuracy Estimation for Test-Time Adaptation (CVPR '24)" by Taeckyung Lee, Sorn … ☆13 · Updated 10 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆78 · Updated 9 months ago
- ☆21 · Updated last year
- [CVPR 2024] Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer ☆23 · Updated last year
- ☆34 · Updated 6 months ago
- cliptrase ☆36 · Updated 8 months ago
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated last year
- [AAAI 2024] SVDP: Exploring Sparse Visual Prompt for Domain Adaptive Dense Prediction ☆26 · Updated last year
- [CVPR 2024] Official repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆90 · Updated 10 months ago
- Implementation of the paper "Revisiting Few Shot Object Detection with Vision-Language Models" ☆64 · Updated 3 weeks ago