Dragon-Wu / awesome-medical-report-generation
A Survey on Medical Report Generation: From Deep Neural Networks to Large Language Models
☆27 · Updated last year
Alternatives and similar repositories for awesome-medical-report-generation
Users interested in awesome-medical-report-generation are comparing it with the libraries listed below.
- The official code for "Quantifying the Reasoning Abilities of LLMs on Real-world Clinical Cases". ☆26 · Updated last month
- [EMNLP'24] MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning. ☆33 · Updated 7 months ago
- [ICML 2025] MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding. ☆96 · Updated 3 weeks ago
- Code for the paper "PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering". ☆53 · Updated last month
- MedEvalKit: A Unified Medical Evaluation Framework. ☆119 · Updated 2 weeks ago
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models. ☆86 · Updated 7 months ago
- [EMNLP 2024] RaTEScore: A Metric for Radiology Report Generation. ☆51 · Updated 2 months ago
- The first Chinese medical large vision-language model designed to integrate the analysis of textual and visual data. ☆61 · Updated last year
- Code for the paper "ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning" (ACL'23). ☆55 · Updated 10 months ago
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models". ☆132 · Updated last month
- Code for the paper "RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection" (ACL'25). ☆18 · Updated 2 weeks ago
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models. ☆75 · Updated 8 months ago
- AOR: Anatomical Ontology-Guided Reasoning for Medical Large Multimodal Model in Chest X-Ray Interpretation. ☆40 · Updated 3 months ago
- ☆67 · Updated 6 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning. ☆84 · Updated 3 weeks ago
- Codes and pre-trained models for "RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training" [ACM MM 202…. ☆29 · Updated last year
- A collection of medical VLP papers. ☆19 · Updated last year
- The official GitHub repository of the survey paper "A Systematic Review of Deep Learning-based Research on Radiology Report Generation". ☆89 · Updated 2 months ago
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work "EHRXQA: A Multi-Modal Question Answering Dataset for Electr…. ☆88 · Updated 11 months ago
- Code for the paper "RECAP: Towards Precise Radiology Report Generation via Dynamic Disease Progression Reasoning" (EMNLP'23 Findings). ☆27 · Updated last month
- A Curated Benchmark Repository for Medical Vision-Language Models. ☆135 · Updated last month
- The official repository of the paper "Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine". ☆74 · Updated 7 months ago
- MC-CoT implementation code. ☆18 · Updated last month
- PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images that cover various modal…. ☆212 · Updated 8 months ago
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization. ☆44 · Updated 2 months ago
- ☆32 · Updated 3 weeks ago
- GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and a Comprehensive Multimodal Dataset Towards General Medical AI. ☆79 · Updated 2 months ago
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks. ☆38 · Updated last month
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi…. ☆110 · Updated last month
- The official code for "Can Modern LLMs Act as Agent Cores in Radiology Environments?". ☆26 · Updated 6 months ago