richard-peng-xia / CARES
[NeurIPS'24 & ICMLW'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models
☆56 · Updated last month
Related projects
Alternatives and complementary repositories for CARES
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆40 · Updated 3 weeks ago
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆63 · Updated last month
- Code for the paper: PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering ☆34 · Updated 2 weeks ago
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr… ☆69 · Updated 2 months ago
- Repository for the paper: Self-supervised vision-language pretraining for Medical visual question answering ☆33 · Updated last year
- Official code for the CHIL 2024 paper: "Vision-Language Generative Model for View-Specific Chest X-ray Generation" ☆44 · Updated 6 months ago
- Code for the paper "ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning" (ACL'23) ☆49 · Updated last month
- Code and pre-trained models for RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training [ACM MM 202… ☆25 · Updated last year
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆51 · Updated 3 months ago
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding ☆29 · Updated last week
- Official code for the paper "RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance" ☆66 · Updated 3 months ago