Hritikbansal / medmax
MedMax: Mixed-Modal Instruction Tuning for Training Biomedical Assistants
☆39 · Updated 2 months ago
Alternatives and similar repositories for medmax
Users interested in medmax are comparing it to the libraries listed below.
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models ☆77 · Updated 11 months ago
- [ML4H'25] m1: Unleash the Potential of Test-Time Scaling for Medical Reasoning in Large Language Models ☆46 · Updated 7 months ago
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆62 · Updated 5 months ago
- [EMNLP 2025] Med-PRM: Medical Reasoning Models with Stepwise, Guideline-verified Process Rewards ☆49 · Updated 2 months ago
- [CVPR 2025] MicroVQA eval and 🤖RefineBot code for "MicroVQA: A Multimodal Reasoning Benchmark for Microscopy-Based Scientific Research"… ☆27 · Updated last month
- [CVPR 2025] CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning ☆29 · Updated 7 months ago
- Code for the paper "RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection" (ACL'25) ☆31 · Updated 4 months ago
- ☆48 · Updated 9 months ago
- MAM: Modular Multi-Agent Framework for Multi-Modal Medical Diagnosis via Role-Specialized Collaboration ☆22 · Updated 5 months ago
- [TMLR 25] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆142 · Updated last month
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆49 · Updated 6 months ago
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning ☆94 · Updated 4 months ago
- ☆32 · Updated last year
- ☆30 · Updated 3 months ago
- DoctorAgent-RL: A Multi-Agent Collaborative Reinforcement Learning System for Multi-Turn Clinical Dialogue ☆40 · Updated last month
- [CVPR 2025] BIOMEDICA: An Open Biomedical Image-Caption Archive, Dataset, and Vision-Language Models Derived from Scientific Literature ☆85 · Updated 8 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆41 · Updated 7 months ago
- [ICCV 2023] ViLLA: Fine-Grained Vision-Language Representation Learning from Real-World Data ☆46 · Updated 2 years ago
- CLIP-MoE: Mixture of Experts for CLIP ☆49 · Updated last year
- [CVPR 2024] Official Code for the Paper "Compositional Chain-of-Thought Prompting for Large Multimodal Models" ☆142 · Updated last year
- Code for the paper "PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering" ☆56 · Updated 5 months ago
- [ACL 2025] Exploring Compositional Generalization of Multimodal LLMs for Medical Imaging ☆38 · Updated 5 months ago
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆148 · Updated 4 months ago
- ☆107 · Updated 8 months ago
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated 5 months ago
- MC-CoT implementation code ☆20 · Updated 5 months ago
- ☆70 · Updated 4 months ago
- ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning ☆97 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆121 · Updated 2 months ago
- [ACM MM 2025 🔥🔥] MIRA: A first-of-its-kind medical RAG framework that fuses image features and retrieved knowledge with dynamic contex… ☆17 · Updated 3 months ago