snap-stanford / med-flamingo
☆443 · Updated Aug 23, 2023
Alternatives and similar repositories for med-flamingo
Users interested in med-flamingo are also comparing it to the repositories listed below.
- PMC-VQA is a large-scale medical visual question-answering dataset, which contains 227k VQA pairs of 149k images that cover various modal… ☆225 · Updated Dec 6, 2024
- Large Language-and-Vision Assistant for Biomedicine, built towards multimodal GPT-4 level capabilities. ☆2,134 · Updated Jun 4, 2025
- The official code for "Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data". ☆524 · Updated Jul 25, 2025
- BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks ☆702 · Updated Jul 8, 2025
- The official codes for "PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents" ☆233 · Updated Aug 30, 2024
- The official code for MedKLIP: Medical Knowledge Enhanced Language-Image Pre-Training in Radiology. We propose to leverage medical specif… ☆178 · Updated Sep 4, 2023
- Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on the LLaMa-7B.… ☆393 · Updated Mar 11, 2024
- [Arxiv-2024] CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation ☆213 · Updated Jan 7, 2025
- EHRXQA: A Multi-Modal Question Answering Dataset for Electronic Health Records with Chest X-ray Images (NeurIPS 2023 D&B) ☆91 · Updated Feb 6, 2026
- [BIONLP@ACL 2024] XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models. ☆526 · Updated Aug 8, 2024
- [CHIL 2024] ViewXGen: Vision-Language Generative Model for View-Specific Chest X-ray Generation ☆55 · Updated Dec 4, 2024
- Radiology Objects in COntext (ROCO): A Multimodal Image Dataset ☆239 · Updated Apr 5, 2022
- EMNLP'22 | MedCLIP: Contrastive Learning from Unpaired Medical Images and Texts ☆664 · Updated Apr 12, 2024
- [ACL 2025 Findings] "Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA" ☆25 · Updated Feb 21, 2025
- Path to Medical AGI: Unify Domain-specific Medical LLMs with the Lowest Cost ☆39 · Updated Jun 21, 2023
- ☆68 · Updated Apr 23, 2024
- The official code to build up dataset PMC-OA ☆34 · Updated Jul 16, 2024
- [ICLR 2025] This is the official repository of our paper "MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations… ☆399 · Updated Jul 11, 2025
- An open-source framework for training large multimodal models. ☆4,066 · Updated Aug 31, 2024
- A collection of resources on applications of multi-modal learning in medical imaging. ☆913 · Updated Feb 8, 2026
- ☆35 · Updated Nov 22, 2022
- ☆39 · Updated Nov 10, 2023
- Official code for "LLM-CXR: Instruction-Finetuned LLM for CXR Image Understanding and Generation" ☆144 · Updated Nov 11, 2023
- The official GitHub repository of the AAAI-2024 paper "Bootstrapping Large Language Models for Radiology Report Generation". ☆65 · Updated Apr 23, 2024
- [NeurIPS'22] Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning ☆178 · Updated May 16, 2024
- ViLMedic (Vision-and-Language medical research) is a modular framework for vision and language multimodal research in the medical field ☆187 · Updated Oct 9, 2025
- The official codes for "PMC-LLaMA: Towards Building Open-source Language Models for Medicine" ☆675 · Updated Jul 8, 2024
- GLoRIA: A Multimodal Global-Local Representation Learning Framework for Label-efficient Medical Image Recognition ☆234 · Updated Feb 6, 2023
- Official code for the paper "RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance" ☆113 · Updated Jun 4, 2025
- A generalist foundation model for healthcare capable of handling diverse medical data modalities. ☆92 · Updated Apr 25, 2024
- Code for the CVPR paper "Interactive and Explainable Region-guided Radiology Report Generation" ☆206 · Updated Jun 23, 2024
- A new collection of medical VQA datasets based on MIMIC-CXR. Part of the work 'EHRXQA: A Multi-Modal Question Answering Dataset for Electr… ☆96 · Updated Feb 6, 2026
- Meditron is a suite of open-source medical Large Language Models (LLMs). ☆2,142 · Updated Apr 10, 2024
- Dataset of medical images, captions, subfigure-subcaption annotations, and inline textual references ☆166 · Updated Aug 21, 2025
- MCPL: Multi-modal Collaborative Prompt Learning for Medical Vision-Language Model (Initial Version) ☆13 · Updated Apr 17, 2024
- ☆46 · Updated Apr 25, 2024
- Multi-Aspect Vision Language Pretraining - CVPR 2024 ☆87 · Updated Aug 20, 2024
- CheXpert NLP tool to extract observations from radiology reports. ☆396 · Updated Feb 3, 2023
- A Python tool to evaluate the performance of VLMs on the medical domain. ☆83 · Updated Aug 5, 2025