rajesh-lab / symile
Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of modalities.
☆41 · Updated 7 months ago
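As a rough illustration only (not the repository's actual API), the sketch below shows what a Symile-style contrastive objective for three modalities could look like: aligned triples are scored with a multilinear inner product and contrasted against in-batch negatives, one modality at a time. The function name, normalization, temperature, and batching are assumptions.

```python
# Hypothetical sketch of a Symile-style multi-modality contrastive loss.
# Assumes three encoders already map their modalities to embeddings of the
# same dimension; names and defaults here are illustrative, not the repo's.
import torch
import torch.nn.functional as F


def symile_style_loss(za: torch.Tensor, zb: torch.Tensor, zc: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """za, zb, zc: (batch, dim) embeddings of three aligned modalities."""
    za, zb, zc = (F.normalize(z, dim=-1) for z in (za, zb, zc))

    def mip_logits(fixed_a, fixed_b, candidates):
        # Multilinear inner product: elementwise product of the two fixed
        # embeddings, contracted against every candidate -> (batch, batch).
        return (fixed_a * fixed_b) @ candidates.T / temperature

    targets = torch.arange(za.size(0), device=za.device)
    # Contrast each modality in turn against in-batch negatives, holding the
    # other two fixed, and average the three cross-entropy terms.
    loss = (
        F.cross_entropy(mip_logits(zb, zc, za), targets)
        + F.cross_entropy(mip_logits(za, zc, zb), targets)
        + F.cross_entropy(mip_logits(za, zb, zc), targets)
    ) / 3.0
    return loss
```

The same pattern extends to more than three modalities by multiplying additional fixed embeddings into the product before the contraction, which is the sense in which the loss is described as working for any number of modalities.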
Alternatives and similar repositories for symile
Users interested in symile are comparing it to the repositories listed below.
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆82 · Updated 11 months ago
- [CVPR 2025] CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning ☆27 · Updated 6 months ago
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr…' ☆88 · Updated last year
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆97 · Updated last year
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆22 · Updated 11 months ago
- Expert-level AI radiology report evaluator ☆34 · Updated 6 months ago
- BiomedCLIP data pipeline ☆85 · Updated 9 months ago
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature