Harvard-Ophthalmology-AI-Lab / FairCLIP
[CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning
☆89 · Updated 2 months ago
Alternatives and similar repositories for FairCLIP
Users interested in FairCLIP are comparing it to the repositories listed below.
- Code for the paper "PeFoM-Med: Parameter Efficient Fine-tuning on Multi-modal Large Language Models for Medical Visual Question Answering" ☆55 · Updated 3 months ago
- [ICML'25] MMedPO: Aligning Medical Vision-Language Models with Clinical-Aware Multimodal Preference Optimization ☆52 · Updated 3 months ago
- [NeurIPS'24] CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models ☆75 · Updated 9 months ago
- ☆40 · Updated 10 months ago
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding ☆56 · Updated 2 months ago
- ☆21 · Updated 4 months ago
- ☆68 · Updated 2 months ago
- [CVPR 2024] PairAug: What Can Augmented Image-Text Pairs Do for Radiology? ☆30 · Updated 10 months ago
- ☆87 · Updated last year
- [CVPR 2024] Multi-Aspect Vision Language Pretraining ☆82 · Updated last year
- [MICCAI 2024] Can LLMs' Tuning Methods Work in Medical Multimodal Domain? ☆17 · Updated last year
- BenchX: A Unified Benchmark Framework for Medical Vision-Language Pretraining on Chest X-Rays ☆38 · Updated 3 months ago
- [ICLR 2025] MedRegA: Interpretable Bilingual Multimodal Large Language Model for Diverse Biomedical Tasks ☆40 · Updated 2 months ago
- [ICCV 2023] Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts ☆74 · Updated last year
- [EMNLP'24] RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models ☆90 · Updated 9 months ago
- Official repository of the paper "UniMed-CLIP: Towards a Unified Image-Text Pretraining Paradigm for Diverse Medical Imaging Modalitie…" ☆128 · Updated 4 months ago
- ☆93 · Updated 3 months ago
- ☆55 · Updated 11 months ago
- A generalist foundation model for healthcare capable of handling diverse medical data modalities. ☆83 · Updated last year
- ☆19 · Updated 3 months ago
- [ICML 2024] Official code of "Unlocking the Power of Spatial and Temporal Information in Medical Multimodal Pre-training" ☆24 · Updated last year
- Official repository of the paper "A Refer-and-Ground Multimodal Large Language Model for Biomedicine" ☆29 · Updated 10 months ago
- ☆23 · Updated last year
- A new medical VQA dataset collection based on MIMIC-CXR; part of the work "EHRXQA: A Multi-Modal Question Answering Dataset for Electr…" ☆88 · Updated last year
- Official code for the CHIL 2024 paper "Vision-Language Generative Model for View-Specific Chest X-ray Generation" ☆53 · Updated 9 months ago
- Radiology Report Generation with Frozen LLMs ☆95 · Updated last year
- ☆67 · Updated 7 months ago
- ☆32 · Updated 2 months ago
- ☆25 · Updated 10 months ago
- The official GitHub repository of the AAAI-2024 paper "Bootstrapping Large Language Models for Radiology Report Generation" ☆59 · Updated last year