xiaobai1217 / Unseen-Modality-Interaction
This is the official code for the NeurIPS 2023 paper "Learning Unseen Modality Interaction".
☆16 · Updated last year
Alternatives and similar repositories for Unseen-Modality-Interaction
Users interested in Unseen-Modality-Interaction are comparing it to the libraries listed below.
- The repo for "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance", ICML 2024☆47Updated last year
- Pytorch implementation of "Test-time Adaption against Multi-modal Reliability Bias".☆42Updated 10 months ago
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024☆57Updated 11 months ago
- Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution☆60Updated last year
- Multimodal Prompting with Missing Modalities for Visual Recognition, CVPR'23☆218Updated last year
- Code for dmrnet☆28Updated 3 months ago
- PMR: Prototypical Modal Rebalance for Multimodal Learning☆42Updated 2 years ago
- The official implementation of CMAE https://arxiv.org/abs/2207.13532 and https://ieeexplore.ieee.org/document/10330745☆110Updated last year
- [ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models☆170Updated last year
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024)☆73Updated 8 months ago
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models"☆108Updated last year
- [ICLR 23 oral] The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation☆46Updated 2 years ago
- ☆27Updated 2 years ago
- The official implementation of 'Align and Attend: Multimodal Summarization with Dual Contrastive Losses' (CVPR 2023)☆78Updated 2 years ago
- ☆52Updated 10 months ago
- Adaptation of vision-language models (CLIP) to downstream tasks using local and global prompts.☆49Updated 3 months ago
- offical implementation of "Calibrating Multimodal Learning" on ICML 2023☆20Updated 2 years ago
- [COLING'25] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding☆43Updated 11 months ago
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models.☆94Updated last week
- The official pytorch implemention of our CVPR-2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models".☆83Updated 6 months ago
- PyTorch Implementation for InMaP☆10Updated 2 years ago
- [CVPR 2024] TEA: Test-time Energy Adaptation☆71Updated last year
- ☆38Updated last year
- An official implementation of "Distribution-Consistent Modal Recovering for Incomplete Multimodal Learning" in PyTorch. (ICCV 2023)☆31Updated 2 years ago
- Pytorch Implementation for CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation☆54Updated 2 months ago
- Codebase for "Multimodal Distillation for Egocentric Action Recognition" (ICCV 2023)☆31Updated last year
- Semi-Supervised Domain Adaptation with Source Label Adaptation, accepted to CVPR 2023☆43Updated last year
- Twin Contrastive Learning with Noisy Labels (CVPR 2023)☆71Updated 2 years ago
- ☆62Updated 2 years ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning☆91Updated 3 months ago