rajesh-lab / symile
Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of modalities.
☆45, updated 8 months ago
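Symile's description above refers to a contrastive loss that scales to any number of modalities. As a rough illustration of that idea (not the repository's actual implementation), the sketch below scores K-tuples of representations with a multilinear inner product and applies a cross-entropy objective per anchor modality; the function names, the one-modality-swapped negative scheme, and the temperature value are all assumptions for this sketch.

```python
import numpy as np

def _logsumexp(x, axis):
    # Numerically stable log-sum-exp along an axis.
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def multi_modality_contrastive_loss(reps, temperature=0.1):
    """Hypothetical K-modality contrastive loss using a multilinear inner
    product: score(x_1..x_K) = sum_d prod_k x_k[d]. For each modality m, the
    true tuple competes against tuples where modality m is swapped in from
    other batch items."""
    losses = []
    for m in range(len(reps)):
        # Elementwise product of every modality's representation except m.
        others = np.ones_like(reps[0])
        for k, r in enumerate(reps):
            if k != m:
                others = others * r
        # logits[i, j] = MIP of (sample i's other modalities, sample j's modality m)
        logits = (others @ reps[m].T) / temperature
        log_probs = logits - _logsumexp(logits, axis=1)
        losses.append(-np.mean(np.diag(log_probs)))  # positives on the diagonal
    return float(np.mean(losses))

# Toy usage: three modalities, batch of 8, 16-dim representations.
rng = np.random.default_rng(0)
reps = [rng.normal(size=(8, 16)) for _ in range(3)]
loss = multi_modality_contrastive_loss(reps)
```

Because each modality gets its own encoder and only the fused score couples them, the construction is architecture-agnostic in the same spirit as the description above.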
Alternatives and similar repositories for symile
Users interested in symile are comparing it to the repositories listed below.
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature (☆86, updated 8 months ago)
- [CVPR 2025] CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning (☆31, updated 7 months ago)
- BiomedCLIP data pipeline (☆93, updated 10 months ago)
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions (☆84, updated last year)
- [EMNLP 2025] Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards (☆52, updated 2 months ago)
- Expert-level AI radiology report evaluator (☆35, updated 8 months ago)
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr…' (☆92, updated last year)
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models (☆98, updated last year)
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization (☆63, updated 6 months ago)
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… (☆29, updated last week)
- [ACL 2025 Findings] "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" (☆24, updated 9 months ago)
- BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays (☆41, updated 6 months ago)
- [npj Digital Medicine] EVA-X: A foundation model for general chest X-ray analysis with self-supervised learning (☆88, updated this week)
- INSPECT dataset/benchmark paper, accepted at NeurIPS 2023 (☆41, updated 6 months ago)
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) (☆22, updated last year)
- A survey on data-centric foundation models in healthcare (☆77, updated 9 months ago)
- [MedIA 2025] Official repo for the paper "Scaling up self-supervised learning for improved surgical foundation models" (☆42, updated last week)
- Code for CheXlocalize (☆37, updated last year)
- LLaVA version of RaDialog (☆24, updated 6 months ago)
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models (☆77, updated last year)
- Dataset and evaluation code for "MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical found…" (☆23, updated 2 weeks ago)
- [NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology (☆173, updated last year)
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images, NeurIPS 2023 D&B (☆87, updated last year)
- [arXiv 2024] CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation (☆206, updated 11 months ago)