CONCH ๐Ÿš

A Vision-Language Foundation Model for Computational Pathology

Nature Medicine

Journal Link | Open Access Read Link | Download Model | Cite

Abstract: The accelerated adoption of digital pathology and advances in deep learning have enabled the development of robust models for various pathology tasks across a diverse array of diseases and patient cohorts. However, model training is often difficult due to label scarcity in the medical domain and the model's usage is limited by the specific task and disease for which it is trained. Additionally, most models in histopathology leverage only image data, a stark contrast to how humans teach each other and reason about histopathologic entities. We introduce CONtrastive learning from Captions for Histopathology (CONCH), a visual-language foundation model developed using diverse sources of histopathology images, biomedical text, and notably over 1.17 million image-caption pairs via task-agnostic pretraining. Evaluated on a suite of 14 diverse benchmarks, CONCH can be transferred to a wide range of downstream tasks involving either or both histopathology images and text, achieving state-of-the-art performance on histology image classification, segmentation, captioning, text-to-image, and image-to-text retrieval. CONCH represents a substantial leap over concurrent visual-language pretrained systems for histopathology, with the potential to directly facilitate a wide array of machine learning-based workflows requiring minimal or no further supervised fine-tuning.

What is CONCH?

CONCH (CONtrastive learning from Captions for Histopathology) is a vision-language foundation model for histopathology, pretrained on what is currently the largest histopathology-specific vision-language dataset of 1.17M image-caption pairs. Compared to other vision-language foundation models, it demonstrates state-of-the-art performance across 14 tasks in computational pathology, ranging from image classification to text-to-image and image-to-text retrieval, captioning, and tissue segmentation.

  • Why use CONCH?: Compared to popular self-supervised encoders for computational pathology that were pretrained only on H&E images, CONCH may produce more performant representations for non-H&E-stained images such as IHCs and special stains, and can be used for a wide range of downstream tasks involving histopathology images, text, or both. CONCH also did not use large public histology slide collections such as TCGA, PAIP, and GTEx for pretraining, which are routinely used in benchmark development in computational pathology. Therefore, we make CONCH available to the research community for building and evaluating pathology AI models with minimal risk of data contamination on public benchmarks or private histopathology slide collections.

Installation

First clone the repo and cd into the directory:

git clone https://github.com/mahmoodlab/CONCH.git
cd CONCH

Then create a conda env and install the dependencies:

conda create -n conch python=3.10 -y
conda activate conch
pip install --upgrade pip
pip install -e .

Preparing and loading the model

  1. Request access to the model weights from the Hugging Face model page here.

  2. Download the model weights

First create the checkpoints directory inside the root of the repo:

mkdir -p checkpoints/conch/

Then download the pretrained model (pytorch_model.bin) and place it in the CONCH/checkpoints/conch/ directory.
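One way to do this programmatically is with huggingface_hub. The following is a minimal sketch (not part of this repo), assuming you have already been granted access to the gated MahmoodLab/conch repository referenced in the loading example below:

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Sketch: download the gated CONCH weights into checkpoints/conch/.
# Requires a Hugging Face user access token for an account that has been granted access.
hf_hub_download(
    repo_id="MahmoodLab/conch",
    filename="pytorch_model.bin",
    local_dir="checkpoints/conch",
    token="<your_user_access_token>",
)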

  3. Loading the model

First import the model builder:

from conch.open_clip_custom import create_model_from_pretrained

Now you can load the model as follows (assuming you have the model weights in the CONCH/checkpoints/conch/ directory):

model, preprocess = create_model_from_pretrained("conch_ViT-B-16", checkpoint_path="checkpoints/conch/pytorch_model.bin")

Alternatively, you can use the following to download and then load the model directly from Hugging Face after requesting access:

from conch.open_clip_custom import create_model_from_pretrained
model, preprocess = create_model_from_pretrained('conch_ViT-B-16', "hf_hub:MahmoodLab/conch", hf_auth_token="<your_user_access_token>")

You may need to supply your Hugging Face user access token via hf_auth_token=<your_token> to create_model_from_pretrained for authentication. See the HF documentation for more details.

Note: while the original CONCH model architecture also includes a multimodal decoder trained with the captioning loss of CoCa, as an additional precaution to ensure that no proprietary data or Protected Health Information (PHI) is leaked unintentionally, we have removed the weights for the decoder from the publicly released CONCH weights. The weights for the text encoder and the vision encoder are intact, so the results on all key tasks presented in the paper, such as image classification and image-text retrieval, are not affected. The ability of CONCH to serve as a general-purpose encoder for both histopathology images and pathology-related text also remains unaffected.

Using the model as a vision encoder for histopathology images

Given the current importance of pretrained encoders for computational pathology tasks, we highlight that after loading the model, you can use it to embed images as follows:

import torch
from PIL import Image

image = Image.open("path_to_image.jpg")
image = preprocess(image).unsqueeze(0)
with torch.inference_mode():
    image_embs = model.encode_image(image, proj_contrast=False, normalize=False)

This will give you the image embeddings before the projection head and normalization, suitable for linear probing or for working with WSIs under the multiple-instance learning (MIL) framework.
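For example, a minimal linear-probing sketch on pre-extracted embeddings might look as follows; the arrays train_embs, train_labels, test_embs, and test_labels are hypothetical placeholders, not part of this repo:

# Sketch: linear probing on pre-extracted CONCH image embeddings with scikit-learn.
# train_embs / test_embs: (N, D) arrays from encode_image(..., proj_contrast=False, normalize=False)
# train_labels / test_labels: corresponding class labels (hypothetical placeholders)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

clf = LogisticRegression(max_iter=1000)
clf.fit(train_embs, train_labels)
print(balanced_accuracy_score(test_labels, clf.predict(test_embs)))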

For image-text retrieval tasks, you should use the normalized and projected embeddings as follows:

with torch.inference_mode():
    image_embs = model.encode_image(image, proj_contrast=True, normalize=True)

Overview of specific usages

We provide high-level functions for loading the model and using it for inference. For model loading:

from conch.open_clip_custom import create_model_from_pretrained

For tokenizing text:

from conch.open_clip_custom import tokenize, get_tokenizer

For inference:

from conch.downstream.zeroshot_path import zero_shot_classifier, run_mizero, run_zeroshot

Refer to the notebooks below for detailed examples.
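For orientation, here is a rough sketch of the zero-shot / retrieval scoring step; the keyword names for tokenize and the exact output of encode_text are assumptions on our part, so treat the notebooks as the authoritative reference:

# Sketch: score an image against a set of text prompts (prompt wording is illustrative).
# Assumes `model` and `image_embs` (proj_contrast=True, normalize=True) from the snippets above.
import torch
import torch.nn.functional as F
from conch.open_clip_custom import tokenize, get_tokenizer

prompts = ["an H&E image of invasive ductal carcinoma",
           "an H&E image of invasive lobular carcinoma"]
tokenizer = get_tokenizer()
token_ids = tokenize(texts=prompts, tokenizer=tokenizer)  # keyword names assumed
with torch.inference_mode():
    text_embs = F.normalize(model.encode_text(token_ids), dim=-1)  # normalize explicitly to be safe
    scores = image_embs @ text_embs.T  # cosine similarities; argmax picks the best-matching prompt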

More detailed starter code for loading / using the model:

See here to get started with loading and using the model to create embeddings.

Zero-shot classification of image ROIs/tiles:

See here for a simple starter example.

For a full example using dataloaders and prompt ensembling, see here.

Zero-shot classification of WSIs using MI-Zero:

See here. Note that you will first need to tile the WSIs and convert the tiles into embeddings using the CONCH vision encoder model.
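If you do not already have a tiling pipeline, a minimal sketch of turning a folder of pre-extracted tiles into a bag of embeddings could look like this; the path and batch size are illustrative, not part of this repo:

# Sketch: embed a folder of pre-extracted tiles into a (num_tiles, D) tensor.
# Assumes `model` and `preprocess` from above; "tiles/slide_001/*.jpg" is a hypothetical path.
# Use proj_contrast=True, normalize=True for MI-Zero; proj_contrast=False, normalize=False for MIL features.
import glob
import torch
from PIL import Image

tile_paths = sorted(glob.glob("tiles/slide_001/*.jpg"))
embs = []
with torch.inference_mode():
    for i in range(0, len(tile_paths), 64):
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in tile_paths[i:i + 64]])
        embs.append(model.encode_image(batch, proj_contrast=True, normalize=True))
tile_embs = torch.cat(embs)  # bag of tile embeddings for the slide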

Zero-shot cross-modal retrieval (image / text):

See here for a simple starter example.

Additional Representative Benchmarks

A comprehensive set of benchmarks on zero-shot and few-shot classification is presented in the paper [2]. Some models were released after our study was in review. For a more comprehensive comparison, we have provided additional results on the EBRAINS, PANDA, OncoTree, IHC ER / PR assessment, CRC-100K-Raw, and TCGA Uniform Tumor datasets as a representative set of benchmarks covering a wide range of tissue types, diseases, difficulty levels (up to 108 classes), and stains (H&E and IHC). Results are reported using ABMIL for slide-level tasks and KNN (K=20) for ROI-level tasks, respectively.

Please refer to the UNI [1] and CONCH [2] papers for more detailed benchmarking.

Slide Benchmarks

| Model name | Pretraining | EBRAINS-C (12 classes, Public), Balanced acc. | EBRAINS-F (30 classes, Public), Balanced acc. | PANDA (5 classes, Public), Quadratic-weighted $\kappa$ | OncoTree-108 (108 classes, Internal), Balanced acc. | IHC ER / PR Assess. (6 classes, Internal), Quadratic-weighted $\kappa$ |
|---|---|---|---|---|---|---|
| UNI [1] | Vision | 0.883 | 0.675 | 0.946 | 0.538 | 0.785 |
| CONCH [2] | Vision-language | 0.868 | 0.689 | 0.934 | 0.515 | 0.819 |
| Phikon [3] | Vision | 0.810 | 0.659 | 0.950 | 0.486 | 0.744 |
| REMEDIS [4] | Vision | 0.687 | 0.382 | 0.932 | 0.412 | 0.762 |
| CTransPath [5] | Vision | 0.666 | 0.514 | 0.927 | 0.399 | 0.786 |
| Quilt-Net [6] | Vision-language | 0.728 | 0.608 | 0.909 | 0.389 | 0.784 |
| PLIP [7] | Vision-language | 0.683 | 0.562 | 0.901 | 0.369 | 0.759 |
| ResNet-50 (Tr) [8] | ImageNet Transfer | 0.302 | 0.219 | 0.831 | 0.148 | 0.709 |

ROI Benchmarks

| Model name | Pretraining | CRC-100K-Raw (9 classes, Public), Balanced acc. | TCGA Uniform Tumor (32 classes, Public), Balanced acc. |
|---|---|---|---|
| UNI [1] | Vision | 0.925 | 0.595 |
| CONCH [2] | Vision-language | 0.941 | 0.556 |
| Phikon [3] | Vision | 0.845 | 0.533 |
| REMEDIS [4] | Vision | 0.908 | 0.541 |
| CTransPath [5] | Vision | 0.836 | 0.463 |
| Quilt-Net [6] | Vision-language | 0.878 | 0.359 |
| PLIP [7] | Vision-language | 0.840 | 0.370 |
| ResNet-50 [8] | ImageNet Transfer | 0.797 | 0.318 |

Acknowledgements

The project was built on top of amazing repositories such as openclip (used for model training), timm (ViT model implementation), and huggingface transformers (tokenization). We thank the authors and developers for their contributions.

License and Terms of Use

© Mahmood Lab. This model and associated code are released under the CC-BY-NC-ND 4.0 license and may only be used for non-commercial, academic research purposes with proper attribution. Any commercial use, sale, or other monetization of the CONCH model and its derivatives, which include models trained on outputs from the CONCH model or datasets created from the CONCH model, is prohibited and requires prior approval. Downloading the model requires prior registration on Hugging Face and agreeing to the terms of use. By downloading this model, you agree not to distribute, publish or reproduce a copy of the model. If another user within your organization wishes to use the CONCH model, they must register as an individual user and agree to comply with the terms of use. Users may not attempt to re-identify the deidentified data used to develop the underlying model. If you are a commercial entity, please contact the corresponding author or Mass General Brigham Innovation Office.

Reference

If you find our work useful in your research or if you use parts of this code, please consider citing our paper:

Lu, M. Y., Chen, B., Williamson, D. F., Chen, R. J., Liang, I., Ding, T., ... & Mahmood, F. (2024). A visual-language foundation model for computational pathology. Nature Medicine.

@article{lu2024avisionlanguage,
  title={A visual-language foundation model for computational pathology},
  author={Lu, Ming Y and Chen, Bowen and Williamson, Drew FK and Chen, Richard J and Liang, Ivy and Ding, Tong and Jaume, Guillaume and Odintsov, Igor and Le, Long Phi and Gerber, Georg and others},
  journal={Nature Medicine},
  pages={863--874},
  volume={30},
  year={2024},
  publisher={Nature Publishing Group}
}

Additionally, if you find MI-Zero useful, please also consider citing the corresponding CVPR 2023 article:

Lu, M.Y., Chen, B., Zhang, A., Williamson, D.F., Chen, R.J., Ding, T., Le, L.P., Chuang, Y.S. and Mahmood, F., 2023. Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 19764-19775).

@InProceedings{Lu_2023_CVPR,
    author    = {Lu, Ming Y. and Chen, Bowen and Zhang, Andrew and Williamson, Drew F. K. and Chen, Richard J. and Ding, Tong and Le, Long Phi and Chuang, Yung-Sung and Mahmood, Faisal},
    title     = {Visual Language Pretrained Multiple Instance Zero-Shot Transfer for Histopathology Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {19764-19775}
}
