rajesh-lab / symile
Symile is a flexible, architecture-agnostic contrastive loss that enables training modality-specific representations for any number of modalities.
☆38 · Updated 6 months ago
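As a rough illustration of the idea in the description above, the sketch below shows a toy any-number-of-modalities contrastive loss built on a multilinear inner product (elementwise product of the embeddings, summed over the feature dimension). This is a hypothetical simplification for intuition, not the repository's actual objective or API; the function names and the way negatives are formed (pairing anchor row i against rows j of the remaining modalities) are assumptions.

```python
import numpy as np

def multilinear_inner_product(embs):
    """Similarity over any number of modalities: elementwise product
    of the embeddings, summed over the feature dimension."""
    prod = np.ones_like(embs[0])
    for e in embs:
        prod = prod * e              # (batch, dim) elementwise product
    return prod.sum(axis=-1)         # (batch,)

def symile_style_loss(embs, temperature=1.0):
    """Toy N-modality contrastive loss (a sketch, not the published
    objective). `embs` is a list of (batch, dim) arrays where row i
    across all modalities forms a positive tuple; negatives pair the
    anchor's row i with rows j of the other modalities combined."""
    anchor = embs[0]                 # (batch, dim)
    rest = np.ones_like(embs[0])
    for e in embs[1:]:
        rest = rest * e              # combine remaining modalities
    logits = anchor @ rest.T / temperature   # (batch, batch); diagonal = positives
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))      # InfoNCE-style cross-entropy
```

Note that the diagonal of `logits` equals the multilinear inner product of each aligned tuple, which is what lets the same code handle two, three, or more modalities without architectural changes.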
Alternatives and similar repositories for symile
Users interested in symile are comparing it to the libraries listed below.
- Expert-level AI radiology report evaluator ☆34 · Updated 6 months ago
- [NeurIPS 2023, ICMI 2023] Quantifying & Modeling Multimodal Interactions ☆79 · Updated 11 months ago
- [ICLR 2023] MultiViz: Towards Visualizing and Understanding Multimodal Models ☆98 · Updated last year
- [NeurIPS 2023] Factorized Contrastive Learning: Going Beyond Multi-view Redundancy ☆72 · Updated last year
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆22 · Updated 11 months ago
- [EMNLP 2025] Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards ☆43 · Updated 3 weeks ago
- MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants ☆37 · Updated 2 weeks ago
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature ☆83 · Updated 6 months ago
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆24 · Updated 2 months ago
- [CVPR 2025] CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning ☆26 · Updated 5 months ago
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr…' ☆88 · Updated last year
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models ☆75 · Updated 10 months ago
- MultiModN – Multimodal, Multi-Task, Interpretable Modular Networks (NeurIPS 2023) ☆33 · Updated 2 years ago
- [ACL 2025 Findings] "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆22 · Updated 7 months ago
- Code for the paper "Explain Any Concept: Segment Anything Meets Concept-Based Explanation". Poster @ NeurIPS 2023 ☆44 · Updated last year
- The dataset and evaluation code for MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical found… ☆22 · Updated 7 months ago
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆227 · Updated last year
- BiomedCLIP data pipeline ☆78 · Updated 8 months ago
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks ☆30 · Updated 3 years ago
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images, NeurIPS 2023 D&B ☆85 · Updated last year
- [TMLR 2022] High-Modality Multimodal Transformer ☆117 · Updated 11 months ago
- LLaVA Version of RaDialog ☆23 · Updated 4 months ago
- The official code to build up the PMC-OA dataset ☆32 · Updated last year
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆56 · Updated 4 months ago
- BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays ☆38 · Updated 4 months ago