FreedomIntelligence / HuatuoGPT-Vision
Medical Multimodal LLMs
☆372 · Updated 9 months ago
Alternatives and similar repositories for HuatuoGPT-Vision
Users interested in HuatuoGPT-Vision are comparing it to the libraries listed below.
- The official codes for "PMC-CLIP: Contrastive Language-Image Pre-training using Biomedical Documents" ☆232 · Updated last year
- An interpretable large language model (LLM) for medical diagnosis. ☆158 · Updated last year
- [EMNLP'24] Code and data for paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆154 · Updated 7 months ago
- Learning to Use Medical Tools with Multi-modal Agent ☆228 · Updated 11 months ago
- [NeurIPS 2025] ClinicalLab: Aligning Agents for Multi-Departmental Clinical Diagnostics in the Real World ☆125 · Updated last year
- [ICCV 2025] Medical World Model ☆112 · Updated 6 months ago
- [ICLR'25 Spotlight] LeFusion: Controllable Pathology Synthesis via Lesion-Focused Diffusion Models ☆141 · Updated 5 months ago
- MedEvalKit: A Unified Medical Evaluation Framework ☆208 · Updated 3 months ago
- [ICLR'25] MMed-RAG: Versatile Multimodal RAG System for Medical Vision Language Models ☆298 · Updated last year
- HuatuoGPT2, One-stage Training for Medical Adaption of LLMs. (An Open Medical GPT) ☆403 · Updated last year
- PMC-VQA is a large-scale medical visual question-answering dataset, which contains 227k VQA pairs of 149k images that cover various modal… ☆225 · Updated last year
- The first Chinese medical large vision-language model designed to integrate the analysis of textual and visual data ☆64 · Updated 2 years ago
- A Chinese medical multimodal large model: Large Chinese Language-and-Vision Assistant for BioMedicine ☆101 · Updated last year
- GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI. ☆81 · Updated last year
- [ICLR 2025] This is the official repository of our paper "MedTrinity-25M: A Large-scale Multimodal Dataset with Multigranular Annotations… ☆398 · Updated 6 months ago
- A generalist foundation model for healthcare capable of handling diverse medical data modalities. ☆92 · Updated last year
- The official repository of the paper "Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine" ☆119 · Updated last year
- [ACL 2025] Exploring Compositional Generalization of Multimodal LLMs for Medical Imaging ☆38 · Updated 8 months ago
- [ICLR 2024] FairSeg: A Large-Scale Medical Image Segmentation Dataset for Fairness Learning Using Segment Anything Model with Fair Error-… ☆94 · Updated last year
- [ICML 2025] MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding ☆142 · Updated 6 months ago
- Open-sourced code of miniGPT-Med ☆139 · Updated last year
- A Curated Benchmark Repository for Medical Vision-Language Models ☆177 · Updated 2 weeks ago
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆96 · Updated last year
- [ICML 2025 Spotlight] Official Repo for Paper "HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generati… ☆1,589 · Updated 3 months ago
- GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI. ☆85 · Updated 8 months ago
- [Nature Communications] The official codes for "Towards Building Multilingual Language Model for Medicine" ☆274 · Updated 9 months ago
- Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models ☆104 · Updated 7 months ago
- Foundation model-based medical image analysis ☆208 · Updated last month
- Official Repo for Paper "EyecareGPT: Boosting Comprehensive Ophthalmology Understanding with Tailored Dataset, Benchmark and Model" ☆58 · Updated 9 months ago
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆45 · Updated 3 months ago