richard-peng-xia / CARES
[NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models
☆68 · Updated 5 months ago
Alternatives and similar repositories for CARES:
Users interested in CARES are comparing it to the repositories listed below.
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆79 · Updated 4 months ago
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆33 · Updated 2 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆74 · Updated last month
- Code for the paper "PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering" ☆45 · Updated last week
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr… ☆84 · Updated 8 months ago
- Official repository of the paper "UniMed-CLIP: Towards a Unified Image-Text Pretraining Paradigm for Diverse Medical Imaging Modalitie… ☆105 · Updated last week
- Expert-level AI radiology report evaluator ☆29 · Updated last month
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature ☆54 · Updated last month
- MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants ☆32 · Updated 3 months ago
- Code and pre-trained models for "RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training" [ACM MM 202… ☆29 · Updated last year
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding ☆43 · Updated last month
- Official repository for the paper "Rad-ReStruct: A Novel VQA Benchmark and Method for Structured Radiology Reporting" (MICCAI23) ☆27 · Updated last year
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images, NeurIPS 2023 D&B ☆77 · Updated 9 months ago
- Official code for the CHIL 2024 paper "Vision-Language Generative Model for View-Specific Chest X-ray Generation" ☆49 · Updated 5 months ago
- [EMNLP Findings 2024] A radiology report generation metric that leverages the natural language understanding of language models to ident… ☆45 · Updated 2 months ago
- Official repository of the paper "A Refer-and-Ground Multimodal Large Language Model for Biomedicine" ☆23 · Updated 6 months ago
- [MICCAI'24 Early Accept] Generalizing to Unseen Domains in Diabetic Retinopathy with Disentangled Representations ☆14 · Updated 10 months ago
- Dataset and evaluation code for "MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical found… ☆16 · Updated 2 months ago
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆31 · Updated 3 weeks ago
- [ICML 2025] MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding ☆60 · Updated 2 months ago
- Repository for the paper "Self-supervised vision-language pretraining for medical visual question answering" ☆35 · Updated 2 years ago
- [EMNLP'24] MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning ☆30 · Updated 4 months ago
- A generalist foundation model for healthcare capable of handling diverse medical data modalities ☆67 · Updated last year
- [ICLR'25] MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models ☆165 · Updated 3 months ago