PathologyFoundation / plip
Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
☆346 · Updated 2 years ago
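Because PLIP keeps the CLIP architecture, it can typically be loaded with the Hugging Face `transformers` CLIP classes. The sketch below is an illustration rather than the repository's official usage: the checkpoint name `vinid/plip`, the image path, and the candidate labels are assumptions you should adjust to your own setup.

```python
# Minimal sketch: zero-shot classification of a pathology image patch with PLIP,
# assuming the weights are published on the Hugging Face Hub as "vinid/plip".
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("vinid/plip")
processor = CLIPProcessor.from_pretrained("vinid/plip")

image = Image.open("patch.png")  # placeholder path to an H&E image patch
texts = ["an H&E image of tumor tissue", "an H&E image of normal tissue"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# The projected embeddings can be reused directly as features ...
image_features = outputs.image_embeds  # shape: (1, projection_dim)
text_features = outputs.text_embeds    # shape: (len(texts), projection_dim)

# ... or turned into zero-shot label probabilities via the CLIP similarity head.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```

The same embeddings can also feed downstream tasks such as image-to-image or text-to-image retrieval over a patch database.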
Alternatives and similar repositories for plip
Users interested in plip are comparing it to the repositories listed below.
- Vision-Language Pathology Foundation Model - Nature Medicine ☆415 · Updated 6 months ago
- Pathology Foundation Model - Nature Medicine ☆572 · Updated 6 months ago
- Prov-GigaPath: A whole-slide foundation model for digital pathology from real-world data ☆533 · Updated 4 months ago
- Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction - CVPR 2024 ☆148 · Updated 10 months ago
- Multimodal Whole Slide Foundation Model for Pathology ☆242 · Updated 6 months ago
- [NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology. ☆170 · Updated last year
- ☆330 · Updated 6 months ago
- Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images - CVPR 2023 ☆107 · Updated 2 years ago
- Multimodal Co-Attention Transformer for Survival Prediction in Gigapixel Whole Slide Images - ICCV 2021 ☆217 · Updated 3 years ago
- Morphological Prototyping for Unsupervised Slide Representation Learning in Computational Pathology - CVPR 2024 ☆140 · Updated 7 months ago
- ☆119 · Updated last year
- Toolkit for large-scale whole-slide image processing. ☆378 · Updated 3 weeks ago
- DSMIL: Dual-stream multiple instance learning networks for tumor detection in Whole Slide Image ☆439 · Updated last year
- TransMIL: Transformer based Correlated Multiple Instance Learning for Whole Slide Image Classification ☆431 · Updated last year
- ☆201 · Updated 4 months ago
- Official Implementation of "WsiCaption: Multiple Instance Generation of Pathology Reports for Gigapixel Whole Slide Images" (MICCAI 2024 O… ☆71 · Updated 4 months ago
- Code associated with the publication: Scaling self-supervised learning for histopathology with masked image modeling, A. Filiot et al., Med… ☆158 · Updated last year
- A curated list of foundation models for vision and language tasks in medical imaging ☆277 · Updated last year
- CellViT: Vision Transformers for Precise Cell Segmentation and Classification ☆323 · Updated 2 months ago
- ☆119 · Updated last year
- Multimodal prototyping for cancer survival prediction - ICML 2024 ☆94 · Updated 7 months ago
- ☆165 · Updated last year
- Standardized benchmark for computational pathology foundation models ☆102 · Updated 2 months ago
- Resources of histopathology datasets ☆470 · Updated 3 months ago
- ☆79 · Updated 9 months ago
- [Nature Machine Intelligence 2024] Code and evaluation repository for the paper ☆121 · Updated 7 months ago
- [MICCAI 2023 Oral] The official code of "Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction" (top 9%) ☆96 · Updated 7 months ago
- [CVPR 2024] Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology ☆130 · Updated 3 weeks ago
- Official repository of Benchmarking Self-Supervised Learning on Diverse Pathology Datasets ☆90 · Updated 2 years ago
- Hierarchical Image Pyramid Transformer - CVPR 2022 (Oral) ☆588 · Updated last year