FreedomIntelligence / HuatuoGPT-Vision
Medical Multimodal LLMs
☆300 · Updated last month
Alternatives and similar repositories for HuatuoGPT-Vision
Users interested in HuatuoGPT-Vision are comparing it to the libraries listed below
- The official code for "PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents" ☆210 · Updated 9 months ago
- An interpretable large language model (LLM) for medical diagnosis. ☆135 · Updated 8 months ago
- Chinese medical multimodal LLM: Large Chinese Language-and-Vision Assistant for BioMedicine ☆85 · Updated last year
- Learning to Use Medical Tools with Multi-modal Agent ☆155 · Updated 3 months ago
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆118 · Updated last week
- HuatuoGPT2, One-stage Training for Medical Adaption of LLMs. (An Open Medical GPT) ☆379 · Updated 9 months ago
- ClinicalLab: Aligning Agents for Multi-Departmental Clinical Diagnostics in the Real World ☆93 · Updated 9 months ago
- PMC-VQA is a large-scale medical visual question-answering dataset, which contains 227k VQA pairs of 149k images that cover various modal… ☆203 · Updated 5 months ago
- [ICLR'25] MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models ☆181 · Updated 4 months ago
- The first Chinese medical large vision-language model designed to integrate the analysis of textual and visual data ☆61 · Updated last year
- Encourages medical LLMs to engage in deep thinking similar to DeepSeek-R1. ☆25 · Updated last month
- GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI. ☆68 · Updated 5 months ago
- [ICLR 2025] This is the official repository of our paper "MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations… ☆337 · Updated 3 months ago
- Dataset of the paper: On the Compositional Generalization of Multimodal LLMs for Medical Imaging ☆33 · Updated last week
- [ICLR'25 Spotlight] LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models ☆113 · Updated 3 months ago
- A generalist foundation model for healthcare capable of handling diverse medical data modalities. ☆74 · Updated last year
- [npj Digital Medicine] The official code for "Towards Evaluating and Building Versatile Large Language Models for Medicine" ☆65 · Updated 3 weeks ago
- [Nature Communications] The official code for "Towards Building Multilingual Language Model for Medicine" ☆252 · Updated 3 weeks ago
- GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI. ☆72 · Updated last month
- [ICML 2025 Spotlight] Official repo for the paper "HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generati… ☆1,411 · Updated 3 weeks ago
- ☆32 · Updated 4 months ago
- MedLSAM: Localize and Segment Anything Model for 3D Medical Images ☆491 · Updated last year
- This repository aims to reproduce R1-Zero in the medical domain. ☆25 · Updated last month
- [ICLR 2024] FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-… ☆87 · Updated 5 months ago
- The official code for "Towards Generalist Foundation Model for Radiology by Leveraging Web-scale 2D&3D Medical Data". ☆413 · Updated 3 weeks ago
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆83 · Updated 5 months ago
- ☆21 · Updated last month
- Visual Med-Alpaca is an open-source, multi-modal foundation model designed specifically for the biomedical domain, built on the LLaMa-7B.… ☆385 · Updated last year
- The code for the paper: PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering ☆49 · Updated 2 weeks ago
- ☆37 · Updated last year