EPFLiGHT / MultiModN
MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023)
☆35 · Updated 2 years ago
Alternatives and similar repositories for MultiModN
Users interested in MultiModN are comparing it to the repositories listed below.
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆34 · Updated 2 years ago
- Source codes of the paper "Hierarchical Pretraining on Multimodal Electronic Health Records". ☆18 · Updated last year
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆74 · Updated 2 years ago
- Code for the paper "ClinicalBench: Can LLMs Beat Traditional ML Models in Clinical Prediction?" ☆31 · Updated 6 months ago
- An official implementation of EHRDiff [TMLR] ☆29 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- [NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts" ☆69 · Updated 6 months ago
- [NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models ☆31 · Updated last year
- Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records ☆24 · Updated last year
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆84 · Updated last year
- ☆48 · Updated 10 months ago
- Medical multi-modal learning with missing modality data (MLHC 2023) ☆14 · Updated 2 years ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- State Space Models ☆71 · Updated last year
- [ICCVW'23] Robust Asymmetric Loss for Multi-Label Long-Tailed Learning ☆18 · Updated 2 years ago
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆46 · Updated 9 months ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆61 · Updated last year
- [ML4H'25] m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning in Large Language Models ☆47 · Updated last week
- [ICLR 2024 spotlight] Making Pre-trained Language Models Great on Tabular Prediction ☆65 · Updated last year
- Official Code for ICLR 2024 Paper: Non-negative Contrastive Learning ☆46 · Updated last year
- Implementation of FuseMoE for FlexiModal Fusion, NeurIPS'24 ☆30 · Updated 2 months ago
- ☆21 · Updated 2 years ago
- Repository for the paper "Consistency-preserving Visual Question Answering in Medical Imaging" (MICCAI 2022) ☆25 · Updated 2 years ago
- [ACL 2024 Findings] This is the code for our paper "Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation wi… ☆40 · Updated last year
- C-Mixup for NeurIPS 2022 ☆73 · Updated 2 years ago
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆30 · Updated 3 years ago
- The code repository for the ICML 2024 paper "Tabular Insights, Visual Impacts: Transferring Expertise from Tables to Images" ☆22 · Updated 9 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 11 months ago
- Multimodal Graph Learning: how to encode multiple multimodal neighbors with their relations into LLMs ☆67 · Updated last year
- KDD 2024 | FlexCare: Leveraging Cross-Task Synergy for Flexible Multimodal Healthcare Prediction ☆17 · Updated last year