Qsingle / open-medical-r1
This repository aims to reproduce R1-Zero in the medical domain.
☆21 · Updated last week
Alternatives and similar repositories for open-medical-r1:
Users interested in open-medical-r1 are comparing it to the libraries listed below:
- Learning to Use Medical Tools with Multi-modal Agent ☆136 · Updated 2 months ago
- GMAI-VL & GMAI-VL-5.5M: A Large Vision-Language Model and A Comprehensive Multimodal Dataset Towards General Medical AI. ☆66 · Updated last week
- The first Chinese medical large vision-language model designed to integrate the analysis of textual and visual data ☆60 · Updated last year
- Encourage Medical LLM to engage in deep thinking similar to DeepSeek-R1. ☆25 · Updated this week
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆29 · Updated 2 weeks ago
- Dataset of paper: On the Compositional Generalization of Multimodal LLMs for Medical Imaging ☆32 · Updated 3 months ago
- GMAI-MMBench: A Comprehensive Multimodal Evaluation Benchmark Towards General Medical AI. ☆65 · Updated 4 months ago
- MC-CoT implementation code ☆14 · Updated 5 months ago
- The official repository of the paper 'Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine' ☆47 · Updated 3 months ago
- ☆42 · Updated 2 weeks ago
- Large Chinese Language-and-Vision Assistant for BioMedicine (a Chinese medical multimodal large model) ☆80 · Updated 11 months ago
- [npj digital medicine] The official codes for "Towards Evaluating and Building Versatile Large Language Models for Medicine" ☆58 · Updated 2 months ago
- MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding ☆56 · Updated last month
- The official repository of the paper 'A Refer-and-Ground Multimodal Large Language Model for Biomedicine' ☆23 · Updated 5 months ago
- ☆23 · Updated 5 months ago
- Official implementation of MedCLIP-SAMv2 ☆65 · Updated 2 weeks ago
- Repository for Mixture of Multimodal Experts ☆36 · Updated 8 months ago
- ☆74 · Updated 11 months ago
- A Python tool to evaluate the performance of VLMs in the medical domain. ☆60 · Updated last week
- ☆21 · Updated 4 months ago
- CVPR 2024 (Highlight) ☆132 · Updated 6 months ago
- ☆29 · Updated 3 months ago
- PMC-VQA is a large-scale medical visual question-answering dataset containing 227k VQA pairs over 149k images that cover various modal… ☆196 · Updated 4 months ago
- The code for the paper: PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering ☆45 · Updated 5 months ago
- Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models ☆22 · Updated last month
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆79 · Updated 4 months ago
- ☆42 · Updated last year
- ☆40 · Updated 10 months ago
- The official repo for "Self-Prompting Large Vision Models for Few-Shot Medical Image Segmentation" ☆91 · Updated last year
- ☆48 · Updated this week