xiaoman-zhang / PMC-VQA
PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images, covering a wide range of imaging modalities and diseases.
☆225 · Updated last year
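For orientation, below is a minimal sketch of how a VQA-pair dataset like this can be wrapped for training. The CSV file layout and the column names (`image_name`, `question`, `answer`) are assumptions for illustration and may not match the actual PMC-VQA release.

```python
# Minimal sketch of a VQA-pair dataset wrapper.
# Assumed annotation schema (hypothetical): a CSV with one row per VQA pair,
# holding an image file name, a question, and an answer.
import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class MedVQADataset(Dataset):
    """Yields (image, question, answer) triples from a CSV annotation file."""

    def __init__(self, csv_path, image_root, transform=None):
        self.image_root = Path(image_root)
        self.transform = transform
        with open(csv_path, newline="") as f:
            self.rows = list(csv.DictReader(f))  # one row per VQA pair

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, idx):
        row = self.rows[idx]
        # Several rows may point at the same image file: the dataset has
        # more QA pairs (227k) than images (149k).
        image = Image.open(self.image_root / row["image_name"]).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, row["question"], row["answer"]
```

Because each image carries multiple question-answer pairs, any train/validation split should be done at the image level to avoid leaking an image across splits.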
Alternatives and similar repositories for PMC-VQA
Users interested in PMC-VQA are comparing it to the repositories listed below.
- The official code for MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology. We propose to leverage medical specif… ☆178 · Updated 2 years ago
- ☆68 · Updated 11 months ago
- ViLMedic (Vision-and-Language medical research) is a modular framework for vision and language multimodal research in the medical field ☆187 · Updated 3 months ago
- ☆98 · Updated last year
- ☆154 · Updated last year
- This repository is made for the paper: Self-supervised vision-language pretraining for Medical visual question answering ☆42 · Updated 2 years ago
- A new collection of medical VQA dataset based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr… ☆94 · Updated last year
- Code for the CVPR paper "Interactive and Explainable Region-guided Radiology Report Generation" ☆202 · Updated last year
- ☆67 · Updated last year
- A generalist foundation model for healthcare capable of handling diverse medical data modalities. ☆91 · Updated last year
- The first Chinese medical large vision-language model designed to integrate the analysis of textual and visual data ☆64 · Updated 2 years ago
- The code for paper: PeFoMed: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering ☆57 · Updated 3 weeks ago
- ☆39 · Updated 2 years ago
- The official code to build up dataset PMC-OA ☆34 · Updated last year
- The official GitHub repository of the AAAI-2024 paper "Bootstrapping Large Language Models for Radiology Report Generation". ☆64 · Updated last year
- Code implementation of RP3D-Diag ☆77 · Updated 4 months ago
- Radiology Report Generation with Frozen LLMs ☆110 · Updated last year
- A Python tool to evaluate the performance of VLM on the medical domain. ☆83 · Updated 5 months ago
- Open-sourced code of miniGPT-Med ☆138 · Updated last year
- MedEvalKit: A Unified Medical Evaluation Framework ☆200 · Updated 2 months ago
- This repository is made for the paper: Masked Vision and Language Pre-training with Unimodal and Multimodal Contrastive Losses for Medica… ☆48 · Updated last year
- Repository for the paper: Open-Ended Medical Visual Question Answering Through Prefix Tuning of Language Models (https://arxiv.org/abs/23… ☆19 · Updated 2 years ago
- Dataset of medical images, captions, subfigure-subcaption annotations, and inline textual references ☆166 · Updated 4 months ago
- A multi-modal CLIP model trained on the medical dataset ROCO ☆148 · Updated 7 months ago
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆44 · Updated 2 months ago
- The official code for "Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data". ☆518 · Updated 5 months ago
- Radiology Objects in COntext (ROCO): A Multimodal Image Dataset ☆235 · Updated 3 years ago
- ☆35 · Updated last week
- ☆43 · Updated 2 years ago
- [NeurIPS'22] Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning ☆177 · Updated last year