richard-peng-xia / CARES
[NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models
☆74 · Updated 8 months ago
Alternatives and similar repositories for CARES
Users interested in CARES are comparing it to the repositories listed below.
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆44 · Updated 2 months ago
- Code for the paper "PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering" ☆53 · Updated last month
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆83 · Updated 3 weeks ago
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆86 · Updated 7 months ago
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work "EHRXQA: A Multi-Modal Question Answering Dataset for Electr…" ☆87 · Updated 11 months ago
- ☆66 · Updated last month
- ☆37 · Updated last year
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature ☆75 · Updated 4 months ago
- ☆82 · Updated last year
- Code for the paper "ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning" (ACL'23) ☆55 · Updated 10 months ago
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images, NeurIPS 2023 D&B ☆84 · Updated last year
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆37 · Updated 3 weeks ago
- Official repository of the paper "UniMed-CLIP: Towards a Unified Image-Text Pretraining Paradigm for Diverse Medical Imaging Modalitie…" ☆120 · Updated 3 months ago
- [EMNLP'24] MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning ☆33 · Updated 7 months ago
- ☆21 · Updated 2 months ago
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆132 · Updated last month
- Official GitHub repository of the AAAI-2024 paper "Bootstrapping Large Language Models for Radiology Report Generation" ☆58 · Updated last year
- Official code for the CHIL 2024 paper "Vision-Language Generative Model for View-Specific Chest X-ray Generation" ☆52 · Updated 8 months ago
- Code for the paper "Self-supervised vision-language pretraining for medical visual question answering" ☆36 · Updated 2 years ago
- ☆32 · Updated 3 weeks ago
- Official repository of the paper "A Refer-and-Ground Multimodal Large Language Model for Biomedicine" ☆29 · Updated 9 months ago
- ☆37 · Updated 9 months ago
- [ACL 2025 Findings] "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆22 · Updated 5 months ago
- [EMNLP 2024] RaTEScore: A Metric for Radiology Report Generation ☆51 · Updated 2 months ago
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding ☆55 · Updated last month
- [ICCV 2023] Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts ☆74 · Updated last year
- MedEvalKit: A Unified Medical Evaluation Framework ☆119 · Updated last week
- GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and a Comprehensive Multimodal Dataset Towards General Medical AI ☆78 · Updated 2 months ago
- [EMNLP Findings 2024] A radiology report generation metric that leverages the natural language understanding of language models to ident… ☆55 · Updated 3 months ago
- Official repository for the paper "Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting" (MICCAI'23) ☆29 · Updated last year