PathologyFoundation / plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that extracts visual and language features from pathology images and their text descriptions; it is a fine-tuned version of the original CLIP model.
☆294 · Updated last year
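Because PLIP keeps the CLIP architecture, its checkpoint can typically be loaded with the standard Hugging Face CLIP classes. The snippet below is a minimal sketch rather than the repository's official example; the Hub ID `vinid/plip` and the input file `patch.png` are assumptions for illustration.

```python
# Minimal sketch: extracting PLIP image/text features via the standard
# Hugging Face CLIP classes. The model ID "vinid/plip" and "patch.png"
# are illustrative assumptions, not taken from the repository itself.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")  # an H&E tile, e.g. 224x224 RGB
texts = [
    "an H&E image of tumor tissue",
    "an H&E image of normal tissue",
]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# Zero-shot classification: image-text similarity logits over the prompts.
probs = outputs.logits_per_image.softmax(dim=-1)

# Stand-alone embeddings, e.g. for retrieval or a downstream linear probe.
image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
text_emb = model.get_text_features(
    input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
)
```

The same pattern can produce tile-level embeddings for retrieval, linear probing, or multiple-instance-learning aggregation over whole slides.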
Alternatives and similar repositories for plip:
Users interested in plip are comparing it to the repositories listed below
- A vision-language foundation model for computational pathology - Nature Medicine ☆298 · Updated this week
- Towards a general-purpose foundation model for computational pathology - Nature Medicine ☆380 · Updated this week
- [NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology. ☆140 · Updated last year
- Code associated with the publication: Scaling self-supervised learning for histopathology with masked image modeling, A. Filiot et al., Med… ☆149 · Updated 11 months ago
- Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images - CVPR 2023 ☆97 · Updated last year
- Multimodal Whole Slide Foundation Model for Pathology ☆142 · Updated 3 weeks ago
- Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology - CVPR 2024 ☆107 · Updated 4 months ago
- Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data ☆455 · Updated 3 months ago
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024 ☆122 · Updated last month
- DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image ☆386 · Updated 8 months ago
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology (LMRL Workshop, NeurIPS 2021) ☆138 · Updated 2 years ago
- Multimodal prototyping for cancer survival prediction - ICML 2024 ☆66 · Updated 4 months ago
- A curated list of foundation models for vision and language tasks in medical imaging ☆233 · Updated 7 months ago
- Resources for histopathology datasets ☆315 · Updated 2 months ago
- TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification ☆376 · Updated 8 months ago
- Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral) ☆528 · Updated 9 months ago
- CellViT: Vision Transformers for Precise Cell Segmentation and Classification ☆259 · Updated last week
- Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images - ICCV 2021 ☆180 · Updated 2 years ago
- Transcriptomics-guided Slide Representation Learning in Computational Pathology - CVPR 2024 ☆90 · Updated 3 months ago
- List of pathology feature extractors and foundation models ☆78 · Updated this week
- Context-Aware Survival Prediction using Patch-based Graph Convolutional Networks - MICCAI 2021 ☆133 · Updated 8 months ago
- [Nature Machine Intelligence 2024] Code and evaluation repository for the paper ☆93 · Updated 2 months ago
- Official repository of Benchmarking Self-Supervised Learning on Diverse Pathology Datasets ☆80 · Updated last year
- A graph-transformer for whole slide image classification ☆157 · Updated 7 months ago
- [MICCAI 2023 Oral] The official code of "Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction" (top 9%) ☆83 · Updated 3 months ago