EPFLiGHT / MultiModN
MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023)
☆33 · Updated last year
Alternatives and similar repositories for MultiModN
Users interested in MultiModN are comparing it to the repositories listed below.
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆70 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆97 · Updated last year
- Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records ☆23 · Updated last year
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆78 · Updated 9 months ago
- A regression-like loss to improve numerical reasoning in language models (ICML 2025) ☆24 · Updated last week
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆31 · Updated last year
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- Official code for the ICLR 2024 paper: Non-negative Contrastive Learning ☆45 · Updated last year
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆22 · Updated 9 months ago
- Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards ☆40 · Updated last month
- DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models (NeurIPS 2024 D&B Track) ☆21 · Updated 5 months ago
- Expert-level AI radiology report evaluator ☆32 · Updated 4 months ago
- Repository of the paper "Consistency-preserving Visual Question Answering in Medical Imaging" (MICCAI 2022) ☆23 · Updated 2 years ago
- Official implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 7 months ago
- Code for the paper "RECAP: Towards Precise Radiology Report Generation via Dynamic Disease Progression Reasoning" (EMNLP'23 Findings) ☆27 · Updated 2 months ago
- [NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts" ☆58 · Updated 2 months ago
- Multimodal Graph Learning: how to encode multiple multimodal neighbors with their relations into LLMs ☆64 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 8 months ago
- Source code of the paper "Hierarchical Pretraining on Multimodal Electronic Health Records" ☆18 · Updated last year
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images (NeurIPS 2023 D&B) ☆84 · Updated last year
- [ACL 2024 Findings] Code for the paper "Knowledge-Infused Prompting: Assessing and Advancing Clinical Text Data Generation wi…" ☆39 · Updated last year
- [ICLR 2024 Spotlight] Making Pre-trained Language Models Great on Tabular Prediction ☆57 · Updated last year
- Code for the paper "Explain Any Concept: Segment Anything Meets Concept-Based Explanation" (poster @ NeurIPS 2023) ☆44 · Updated last year
- LLaVA version of RaDialog ☆21 · Updated 3 months ago
- [TMLR 2022] High-Modality Multimodal Transformer ☆117 · Updated 9 months ago
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆38 · Updated 5 months ago
- Implementation of FuseMoE for FlexiModal Fusion (NeurIPS'24) ☆26 · Updated 5 months ago