UNITES-Lab / Flex-MoE
[NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts"
☆64 · Updated 3 months ago
Alternatives and similar repositories for Flex-MoE
Users interested in Flex-MoE are comparing it to the libraries listed below.
- MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks ☆23 · Updated 4 months ago
- The code repository for ICML24 paper "Tabular Insights, Visual Impacts: Transferring Expertise from Tables to Images" ☆21 · Updated 6 months ago
- KDD 2024 | FlexCare: Leveraging Cross-Task Synergy for Flexible Multimodal Healthcare Prediction ☆17 · Updated last year
- ☆50 · Updated 9 months ago
- [ICLR 2024 Spotlight] "Negative Label Guided OOD Detection with Pretrained Vision-Language Models" ☆20 · Updated 11 months ago
- DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency (AAAI24) ☆57 · Updated last year
- Reliable Conflictive Multi-view Learning ☆86 · Updated last year
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆79 · Updated 11 months ago
- Official PyTorch Implementation of RA-TTA (ICLR25) ☆15 · Updated 5 months ago
- ☆26 · Updated 3 years ago
- The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation", CVPR 2024 ☆55 · Updated 11 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆89 · Updated 2 months ago
- Official implementation of "Calibrating Multimodal Learning" (ICML 2023) ☆20 · Updated 2 years ago
- Code for the paper Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution ☆58 · Updated last year
- Implementation of FuseMoE for FlexiModal Fusion, NeurIPS'24 ☆27 · Updated 6 months ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆100 · Updated last year
- ICLR'24 | Multimodal Patient Representation Learning with Missing Modalities and Labels ☆41 · Updated 6 months ago
- [ICML 2025] I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts ☆34 · Updated 4 months ago
- Official code for ICLR 2023 paper "ContraNorm: A Contrastive Learning Perspective on Oversmoothing and Beyond" ☆35 · Updated 2 years ago
- ☆48 · Updated 2 months ago
- ☆26 · Updated last month
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆56 · Updated 4 months ago
- ☆21 · Updated 11 months ago
- ☆25 · Updated 10 months ago
- This is the official code for the paper "Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaborati…" ☆11 · Updated last year
- ☆14 · Updated 2 years ago
- Source codes of the paper "Hierarchical Pretraining on Multimodal Electronic Health Records" ☆18 · Updated last year
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆65 · Updated 3 months ago
- Twin Contrastive Learning with Noisy Labels (CVPR 2023) ☆70 · Updated 2 years ago
- This is the official code for NeurIPS 2023 paper "Learning Unseen Modality Interaction" ☆16 · Updated last year