XYPB / CLEFT
Official implementation of "CLEFT: Language-Image Contrastive Learning with Efficient Large Language Model and Prompt Fine-Tuning" (MICCAI 2024).
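The language-image contrastive objective named in the title follows the general CLIP recipe: paired image and text embeddings are pulled together while mismatched pairs are pushed apart via a symmetric InfoNCE loss. A minimal NumPy sketch of that standard objective (an illustration of the general technique, not CLEFT's exact implementation; array shapes and the temperature value are assumptions):

```python
import numpy as np

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    img_emb, txt_emb: (batch, dim) arrays where row i of each is a matched pair.
    temperature: assumed softmax temperature (0.07 is the common CLIP default).
    """
    # L2-normalize so the dot product is a cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (batch, batch) similarity matrix

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)  # numerical stability
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    n = logits.shape[0]
    diag = np.arange(n)
    # image-to-text direction: softmax over each row, target is the diagonal
    loss_i2t = -log_softmax(logits, axis=1)[diag, diag].mean()
    # text-to-image direction: softmax over each column
    loss_t2i = -log_softmax(logits, axis=0)[diag, diag].mean()
    return (loss_i2t + loss_t2i) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))  # nearly matched pairs -> low loss
loss = clip_contrastive_loss(img, txt)
```

CLEFT's contribution, per the title, lies in how the text side is produced (an efficient large language model with prompt fine-tuning), not in changing this loss itself.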
Alternatives and similar repositories for CLEFT
Users interested in CLEFT are comparing it to the repositories listed below.
- OphNet: A Large-Scale Video Benchmark for Ophthalmic Surgical Workflow Understanding
- [COLING'25] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding
- [ECCV 2024] Teach CLIP to Develop a Number Sense for Ordinal Regression
- MCPL: Multi-modal Collaborative Prompt Learning for Medical Vision-Language Model (Initial Version)
- [CVPR 2025] CheXWorld: Exploring Image World Modeling for Radiograph Representation Learning
- [ECCV 2024] FALIP: Visual Prompt as Foveal Attention Boosts CLIP Zero-Shot Performance
- Official implementation of "PromptSmooth: Certifying Robustness of Medical Vision-Language Models via Prompt Learning"
- Official implementation of "Meta-Entity Driven Triplet Mining for Aligning Medical Vision-Language Models"
- [IPCAI'24 Best Paper] Advancing Surgical VQA with Scene Graph Knowledge
- [arXiv'24] EVA-X: A foundation model for general chest X-ray analysis with self-supervised learning
- Official code for "BoMD: Bag of Multi-label Descriptors for Noisy Chest X-ray Classification"
- Official code of "Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation" (NeurIPS 23)
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models
- Official repository for the IEEE TMI paper "Large Language Model with Region-Guided Referring and Grounding for CT Rep…"
- Official implementation of the Concept Discovery Models paper
- [CVPR 2024] FairCLIP: Harnessing Fairness in Vision-Language Learning
- [CVPR 2024] Multi-Aspect Vision Language Pretraining
- The repo of ASGMVLP
- Official repository of the paper "A Refer-and-Ground Multimodal Large Language Model for Biomedicine"
- Source code for "MEDIMP: 3D Medical Images with clinical Prompts from limited tabular data for renal transplantation" (MIDL 2023), https:/…
- [CVPR 2024] PyTorch implementation: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation
- [ICCV 2025] Official code of "GEMeX: A Large-Scale, Groundable, and Explainable Medical VQA Benchmark for Chest X-ray Diagnosis"