EPFLiGHT / MultiModN
MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023)
☆33 · Updated last year
Alternatives and similar repositories for MultiModN
Users interested in MultiModN are comparing it to the libraries listed below.
- Official Code for ICLR 2024 Paper: Non-negative Contrastive Learning ☆45 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆97 · Updated last year
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆70 · Updated last year
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆79 · Updated 10 months ago
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆31 · Updated last year
- Dataset for Checking Consistency between Unstructured Notes and Structured Tables in Electronic Health Records ☆23 · Updated last year
- ☆48 · Updated 6 months ago
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆22 · Updated 10 months ago
- Code for the paper "Explain Any Concept: Segment Anything Meets Concept-Based Explanation". Poster @ NeurIPS 2023 ☆44 · Updated last year
- A regression-like loss to improve numerical reasoning in language models (ICML 2025) ☆25 · Updated 3 weeks ago
- State Space Models ☆70 · Updated last year
- [ICLR 2024 spotlight] Making Pre-trained Language Models Great on Tabular Prediction ☆57 · Updated last year
- [NeurIPS 2024 Spotlight] Code for the paper "Flex-MoE: Modeling Arbitrary Modality Combination via the Flexible Mixture-of-Experts" ☆61 · Updated 3 months ago
- Official Implementation of "Geometric Multimodal Contrastive Representation Learning" (https://arxiv.org/abs/2202.03390) ☆28 · Updated 8 months ago
- Official implementation of Vector-ICL: In-context Learning with Continuous Vector Representations (ICLR 2025) ☆20 · Updated 3 months ago
- [NeurIPS 2024] RaVL: Discovering and Mitigating Spurious Correlations in Fine-Tuned Vision-Language Models ☆26 · Updated 10 months ago
- Medical multi-modal learning with missing modality data (MLHC 2023) ☆13 · Updated 2 years ago
- [TMLR 2022] High-Modality Multimodal Transformer ☆117 · Updated 10 months ago
- Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of mo… ☆39 · Updated 5 months ago
- [NeurIPS 2023] Official repository for "Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models" ☆12 · Updated last year
- An official implementation of EHRDiff [TMLR] ☆27 · Updated last year
- ☆17 · Updated last year
- The code repository for ICML24 paper "Tabular Insights, Visual Impacts: Transferring Expertise from Tables to Images" ☆20 · Updated 6 months ago
- C-Mixup for NeurIPS 2022 ☆73 · Updated last year
- MedTsLLM: Leveraging LLMs for Multimodal Medical Time Series Analysis ☆46 · Updated 3 months ago
- Code and benchmark for the paper: "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆58 · Updated 9 months ago
- LISA for ICML 2022 ☆51 · Updated 2 years ago
- (ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation ☆41 · Updated last year
- [ICML 2023] Official repository of paper: Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repe… ☆25 · Updated last month
- Active Learning Helps Pretrained Models Learn the Intended Task (https://arxiv.org/abs/2204.08491) by Alex Tamkin, Dat Nguyen, Salil Desh… ☆11 · Updated 2 years ago