A novel stain transfer network for generating immunohistochemical images of endometrial cancer
Immunohistochemistry (IHC) is a commonly used histological examination technique. Compared to Hematoxylin and Eosin (H&E) staining, it reveals protein expression and localization in tissues, which is valuable for cancer treatment and prognosis assessment, such as the detection and diagnosis of endometrial cancer. However, IHC involves multiple staining steps and is time-consuming and expensive. One potential solution is to use deep learning networks to generate virtual IHC images from H&E images, but the fidelity of the IHC images generated by existing methods still needs improvement. In this work, we propose a novel dual-scale feature fusion (DSFF) generative adversarial network named DSFF-GAN, which combines a cycle structure-color similarity loss and a DSFF block to constrain the model's training process and enhance its stain transfer capability. In addition, our method incorporates labeled positive cell regions as prior knowledge to further improve the evaluation metrics. We train and test our model on an endometrial cancer dataset and a publicly available breast cancer IHC dataset and compare it with state-of-the-art methods. Compared to previous methods, our model demonstrates significant improvements in all evaluation metrics on the endometrial cancer dataset. These results show that our method further improves the quality of image generation and has potential value for the future clinical application of virtual IHC images.
The breast cancer dataset can be downloaded from the AdaptiveSupervisedPatchNCE repository. The endometrial cancer dataset can be obtained from the corresponding author upon reasonable request.
python /datasets/big_image/image_sift.py --image1 <smallHE_path> --image2 <smallKi67_path> --reimage1 <bigHE_path> --reimage2 <bigKi67_path>
python /datasets/big_image/image_cut.py --HE_path <bigHE_path> --Ki67_path <bigKi67_path>
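The internals of image_cut.py are not documented here, but its core operation is cutting the registered big images into paired patches. Below is a minimal sketch of non-overlapping patch extraction; the 256-pixel patch size and the policy of discarding partial edge tiles are illustrative assumptions, not necessarily what the script does:

```python
import numpy as np

def cut_patches(image, patch=256):
    """Split an H x W x C array into non-overlapping patch x patch tiles,
    discarding partial tiles at the right and bottom edges."""
    h, w = image.shape[:2]
    tiles = []
    for top in range(0, h - patch + 1, patch):
        for left in range(0, w - patch + 1, patch):
            tiles.append(image[top:top + patch, left:left + patch])
    return tiles
```

Applying the same grid to both registered images keeps the H&E/Ki-67 patch pairs aligned.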
We used ImageJ version 1.52i, a Java-based public image processing tool, to address this issue. Specifically, for the Ki-67-stained patches, we used the H-DAB vector in the Color Deconvolution plugin to separate the histological staining components, which the red, green, and blue channels captured by a color camera cannot easily distinguish. We then applied ImageJ's Threshold function to segment the positive layer, yielding labeled regions of positivity.
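The same H-DAB separation and thresholding can be approximated in Python. The sketch below uses the standard Ruifrok & Johnston stain vectors (the basis behind ImageJ's Color Deconvolution plugin); the threshold value is an illustrative guess, not the one used in our pipeline:

```python
import numpy as np

# H-DAB stain vectors from Ruifrok & Johnston's color deconvolution method
# (the same basis used by the ImageJ Color Deconvolution plugin).
HEMA = np.array([0.650, 0.704, 0.286])  # hematoxylin
DAB = np.array([0.269, 0.568, 0.778])   # DAB (positive signal)

def _unit(v):
    return v / np.linalg.norm(v)

# 3x3 stain matrix; the third row (cross product) completes the basis
# so the matrix is invertible.
STAINS = np.stack([_unit(HEMA), _unit(DAB), _unit(np.cross(HEMA, DAB))])

def separate_hdab(rgb):
    """Unmix a uint8 RGB patch (H x W x 3) into hematoxylin / DAB /
    residual optical-density channels (H x W x 3, float)."""
    od = -np.log10(np.clip(rgb.astype(float), 1.0, None) / 255.0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)
    return conc.reshape(rgb.shape)

def positive_mask(rgb, dab_threshold=0.15):
    """Binary mask of DAB-positive pixels, mimicking the Threshold step
    applied to the DAB layer in ImageJ (threshold value is hypothetical)."""
    return separate_hdab(rgb)[..., 1] > dab_threshold
```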
python /train/main.py --dataroot <modelroot> --path <path_for_saving_the_models> --batchSize <batchSize>
python /train/test.py --model_path <path_for_saving_the_models> --input_dir <HE_path> --output_dir <Ki67_output_path>
We used three metrics to evaluate the performance of the proposed model. The Structural Similarity Index Measure (SSIM) evaluates the similarity between the reference and generated images; higher values indicate greater similarity. Contrast Structure Similarity (CSS) evaluates similarity based on contrast and structure rather than intensity. Peak Signal-to-Noise Ratio (PSNR) measures the impact of noise on an image, with higher values indicating lower distortion.
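For reference, here is a minimal NumPy sketch of these metrics computed from global image statistics. The evaluation script presumably uses the windowed SSIM formulation (e.g. scikit-image's implementation), so the numbers from this sketch will differ from the reported ones:

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less distortion."""
    mse = np.mean((ref.astype(float) - gen.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, gen, max_val=255.0):
    """SSIM over whole-image statistics (no sliding window), with the
    standard stabilizing constants C1 = (0.01 L)^2, C2 = (0.03 L)^2."""
    x, y = ref.astype(float), gen.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def css_global(ref, gen, max_val=255.0):
    """Contrast-structure term of SSIM: drops the luminance factor,
    so it compares contrast and structure rather than intensity."""
    x, y = ref.astype(float), gen.astype(float)
    c2 = (0.03 * max_val) ** 2
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return (2 * cov + c2) / (x.var() + y.var() + c2)
```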
python /construction_sim/A_rA_Similarity.py --real_path <real_Ki67_path> --fake_path <fake_Ki67_path>
python /results/mosaics.py --big_image_path <big_real_Ki67_path> --small_image_path <small_fake_Ki67_path>
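mosaics.py stitches the generated small patches back into a large image for comparison against the real big Ki-67 image. A minimal sketch of that reassembly, assuming equally sized tiles supplied in row-major order (the actual script's tile ordering and file-naming scheme may differ):

```python
import numpy as np

def stitch_patches(tiles, rows, cols):
    """Reassemble a row-major list of equally sized H x W x C tiles into
    one large image (the inverse of cutting a big image into patches)."""
    ph, pw = tiles[0].shape[:2]
    out = np.zeros((rows * ph, cols * pw, tiles[0].shape[2]),
                   dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)
        out[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw] = tile
    return out
```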
We conducted quantitative evaluations of the proposed model on an endometrial cancer dataset collected from a local hospital and a publicly available breast cancer dataset. To demonstrate the superiority of our approach, we compared it with four state-of-the-art pathological image stain transfer methods: CycleGAN, PC-StainGAN, Pyramid Pix2Pix, and Adaptive Supervised PatchNCE. All networks were implemented in the same experimental environment, with identical training strategies, data augmentation techniques, and data preprocessing methods. The evaluation results of these methods on the two datasets are listed in Table 1. Our model outperforms the other models on all evaluation metrics on the endometrial cancer dataset: compared to the second-ranked method, it achieves average improvements of 0.83, 0.82, and 0.17 in SSIM, CSS, and PSNR, respectively. Our model also achieves superior performance on the breast cancer dataset, with average improvements of 0.39 and 0.43 in SSIM and CSS over the second-ranked method, although its PSNR is slightly lower.

Figure: Visual comparison of Ki-67-stained images generated by different methods on the endometrial cancer dataset.

Figure: Visual comparison of Ki-67-stained images generated by different methods on the breast cancer dataset.
Our code is based on CycleGAN.
Yihao Ma ([email protected]). If you have any questions, please contact us directly.