PathologyFoundation / plip

Pathology Language and Image Pre-Training (PLIP) is the first vision-and-language foundation model for pathology AI (published in Nature Medicine). PLIP is a large-scale pre-trained model that can be used to extract visual and language features from pathology images and text descriptions. The model is a fine-tuned version of the original CLIP model.
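Since PLIP keeps the CLIP architecture, it can be loaded with the standard `transformers` CLIP classes. A minimal sketch of joint image/text feature extraction, assuming the checkpoint is published on the Hugging Face Hub under the id `vinid/plip` (adjust the id and the example file name `patch.png` to your setup):

```python
# Sketch: extract PLIP image/text embeddings via the CLIP classes in transformers.
# "vinid/plip" is an assumed Hub id for the released checkpoint.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "vinid/plip"  # assumed checkpoint id

def embed(image_path, captions):
    """Return (image_embeds, text_embeds) for one pathology image and candidate captions."""
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # CLIPModel returns projected, unnormalized embeddings in a shared space.
    return outputs.image_embeds, outputs.text_embeds

if __name__ == "__main__":
    img_emb, txt_emb = embed(
        "patch.png",  # hypothetical example image
        ["an H&E image of tumor tissue", "an H&E image of normal tissue"],
    )
    print(img_emb.shape, txt_emb.shape)
```

Cosine similarity between the returned image and text embeddings gives zero-shot classification scores over the candidate captions, exactly as with the original CLIP.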

Alternatives and similar repositories for plip

Users interested in plip are comparing it to the libraries listed below.
