PathologyFoundation / plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for Pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
☆314 · Updated last year
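Because PLIP is a fine-tuned CLIP model, it can be queried through the standard CLIP interface to embed image patches and candidate text descriptions into a shared space. Below is a minimal sketch assuming the checkpoint is published as a CLIP-compatible model on the Hugging Face Hub; the `vinid/plip` identifier and the example file name are assumptions for illustration, not taken from this page.

```python
# Minimal sketch: zero-shot scoring of a pathology patch against candidate
# captions with a CLIP-compatible PLIP checkpoint (model id and file name
# below are assumptions for illustration).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")          # assumed Hub id
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")                          # hypothetical H&E patch
texts = [
    "an H&E image of invasive ductal carcinoma",
    "an H&E image of normal breast tissue",
]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into zero-shot probabilities over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

The image-text similarity scores can be used directly for zero-shot classification of a patch against a set of candidate captions, which is the typical way a CLIP-style pathology model is applied.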
Alternatives and similar repositories for plip:
Users interested in plip are comparing it to the libraries listed below.
- A vision-language foundation model for computational pathology - Nature Medicine ☆329 · Updated last week
- [NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology. ☆157 · Updated last year
- Multimodal Whole Slide Foundation Model for Pathology ☆185 · Updated last week
- Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images - CVPR 2023 ☆101 · Updated last year
- Toolkit for large-scale whole-slide image processing. ☆145 · Updated this week
- A general-purpose foundation model for computational pathology - Nature Medicine ☆429 · Updated this week
- Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data ☆487 · Updated this week
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024 ☆129 · Updated 4 months ago
- Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology - CVPR 2024 ☆121 · Updated last month
- Code associated with the publication: Scaling self-supervised learning for histopathology with masked image modeling, A. Filiot et al., Med… ☆148 · Updated last year
- A curated list of foundation models for vision and language tasks in medical imaging ☆244 · Updated 9 months ago
- Multimodal prototyping for cancer survival prediction - ICML 2024 ☆79 · Updated last month
- Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral) ☆550 · Updated last year
- Transcriptomics-guided Slide Representation Learning in Computational Pathology - CVPR 2024 ☆93 · Updated 5 months ago
- List of pathology feature extractors and foundation models ☆111 · Updated last month
- This repository contains code to train a self-supervised learning model on chest X-ray images that lack explicit annotations and evaluate… ☆187 · Updated last year
- CellViT: Vision Transformers for Precise Cell Segmentation and Classification ☆282 · Updated 2 months ago
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology (LMRL Workshop, NeurIPS 2021) ☆138 · Updated 2 years ago
- [MICCAI 2023 Oral] The official code of "Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction" (top 9%) ☆92 · Updated 3 weeks ago
- Official implementation of "WsiCaption: Multiple Instance Generation of Pathology Reports for Gigapixel Whole Slide Images" (MICCAI 2024 O… ☆52 · Updated last month
- Developing Generalist Foundation Models from a Multimodal Dataset for 3D Computed Tomography ☆252 · Updated 5 months ago
- ViLa-MIL: Dual-scale Vision-Language Multiple Instance Learning for Whole Slide Image Classification (CVPR 2024) ☆65 · Updated last month
- The official implementation of GPFM ☆52 · Updated last month
- Codebase for Quilt-LLaVA ☆48 · Updated 8 months ago