Raina-Xin / I2MoE
[ICML 2025] I2MoE: Interpretable Multimodal Interaction-aware Mixture-of-Experts.
☆58 · Updated 8 months ago
Alternatives and similar repositories for I2MoE
Users interested in I2MoE are comparing it to the repositories listed below.
- [AAAI 2024] DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency ☆60 · Updated last year
- [ICML 2023] Provable Dynamic Fusion for Low-Quality Multimodal Data ☆116 · Updated 7 months ago
- [ICML 2024] Official implementation for "Predictive Dynamic Fusion" ☆70 · Updated last year
- [NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts" ☆70 · Updated 7 months ago
- ☆54 · Updated last year
- [CVPR 2024] The repo for "Enhancing Multi-modal Cooperation via Sample-level Modality Valuation" ☆59 · Updated last year
- [ACM Multimedia 2024] Code for "Leveraging Knowledge of Modality Experts for Incomplete Multimodal Learning" ☆43 · Updated last year
- Code for the paper 'Dynamic Multimodal Fusion'