PathologyFoundation / plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and their text descriptions. The model is a fine-tuned version of the original CLIP model.
☆276 · updated last year
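Since PLIP is fine-tuned from CLIP, it scores image-text pairs the same way CLIP does: both inputs are embedded, L2-normalized, and compared by cosine similarity scaled by a learned temperature. The sketch below illustrates that zero-shot matching step with random stand-in embeddings; in practice the vectors would come from PLIP's image and text encoders (the checkpoint names, label prompts, and the temperature value here are illustrative assumptions, not taken from the PLIP repository).

```python
# CLIP-style zero-shot scoring, as inherited by PLIP from the original CLIP.
# The embeddings below are random stand-ins for encoder outputs.
import numpy as np

rng = np.random.default_rng(0)

# Pretend encoder outputs: one image embedding, three candidate text embeddings
# (e.g. prompts like "an H&E image of tumor / stroma / fat" -- hypothetical labels).
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(3, 512))

def l2_normalize(x, axis=-1):
    """Scale vectors to unit length so the dot product equals cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

img = l2_normalize(image_emb)
txt = l2_normalize(text_embs)

# Cosine similarities, scaled by the exponential of CLIP's learned temperature
# (100.0 is a typical converged value, assumed here), then softmaxed over labels.
logit_scale = 100.0
logits = logit_scale * (txt @ img)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # probability for each candidate text label; sums to 1
```

The label with the highest probability is the zero-shot prediction, which is how CLIP-style models classify images without task-specific training.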
Related projects
Alternatives and complementary repositories for plip
- A vision-language foundation model for computational pathology - Nature Medicine ☆280 · updated 3 months ago
- Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images - CVPR 2023 ☆93 · updated last year
- Towards a general-purpose foundation model for computational pathology - Nature Medicine ☆347 · updated 2 months ago
- [NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology ☆137 · updated 10 months ago
- Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology - CVPR 2024 ☆95 · updated 2 months ago
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024 ☆108 · updated this week
- Code associated to the publication: Scaling self-supervised learning for histopathology with masked image modeling, A. Filiot et al., Med… ☆144 · updated 9 months ago
- Resources of histopathology datasets ☆280 · updated this week
- CellViT: Vision Transformers for Precise Cell Segmentation and Classification ☆236 · updated last month
- Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data ☆423 · updated last month
- Transcriptomics-guided Slide Representation Learning in Computational Pathology - CVPR 2024 ☆83 · updated last month
- A curated list of foundation models for vision and language tasks in medical imaging ☆212 · updated 5 months ago
- DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image ☆375 · updated 6 months ago
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology (LMRL Workshop, NeurIPS 2021) ☆138 · updated 2 years ago
- Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images - ICCV 2021 ☆167 · updated 2 years ago
- Context-Aware Survival Prediction using Patch-based Graph Convolutional Networks - MICCAI 2021 ☆126 · updated 6 months ago
- A graph-transformer for whole slide image classification ☆152 · updated 5 months ago
- List of pathology feature extractors and foundation models ☆61 · updated last week
- Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral) ☆512 · updated 8 months ago
- Official repository of Benchmarking Self-Supervised Learning on Diverse Pathology Datasets ☆77 · updated last year
- Developing Generalist Foundation Models from a Multimodal Dataset for 3D Computed Tomography ☆195 · updated last month
- [CVPR'23] Histopathology Whole Slide Image Analysis with Heterogeneous Graph Representation Learning ☆74 · updated last year
- This repository contains code to train a self-supervised learning model on chest X-ray images that lack explicit annotations and evaluate… ☆178 · updated last year
- Analysis of 3D pathology samples using weakly supervised AI - Cell ☆92 · updated 2 months ago
- The official implementation of GPFM ☆40 · updated last week
- Multimodal prototyping for cancer survival prediction - ICML 2024 ☆55 · updated 2 months ago