
PistoSeg

Code Repository for AAAI23 Paper "Weakly-Supervised Semantic Segmentation for Histopathology Images Based on Dataset Synthesis and Feature Consistency Constraint"

If you are in mainland China, you can also use the Gitee mirror: https://gitee.com/vison307/PistoSeg.

Installation

Code tested on

  • Ubuntu 18.04
  • A single Nvidia GeForce RTX 3090
  • Python 3.8
  • Pytorch 1.12.1
  • Pytorch Lightning 1.7.1

Please use the following command to install the dependencies:

conda env create -f environment.yaml

Preparing the Data and Weights

  1. Download the WSSS4LUAD dataset and put it in ./data/WSSS4LUAD

  2. Download the BCSS-WSSS dataset and put it in ./data/BCSS-WSSS (thanks to Han et al.)

  3. Download the ImageNet-pretrained ResNet weights ilsvrc-cls_rna-a1_cls1000_ep-0001.params from the SEAM repository and place the file at ./weights/ilsvrc-cls_rna-a1_cls1000_ep-0001.params

  4. Download the CAM model's weights res38d.pth from the OEEM repository and place the file at ./weights/res38d.pth
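After these steps, the later scripts expect the data and weights at the paths above. The following is a minimal sanity check, an illustrative sketch rather than part of the repository, that simply verifies those paths exist:

# check_setup.py -- illustrative sketch: verify the expected data/weight paths exist
import os

expected = [
    "data/WSSS4LUAD",
    "data/BCSS-WSSS",
    "weights/ilsvrc-cls_rna-a1_cls1000_ep-0001.params",
    "weights/res38d.pth",
]

for path in expected:
    print(("OK      " if os.path.exists(path) else "MISSING ") + path)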

CAM Generation

Please refer to the OEEM README in ./OEEM/README.md.

Alternatively, you can use the pre-generated CAMs (in AAAI23/{dataset}/data/CAM/train.zip): unzip them into ./data/WSSS4LUAD/CAM/train and ./data/BCSS-WSSS/CAM/train.
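The exact array layout of the generated CAM files depends on the OEEM pipeline, so the snippet below is only a hedged sketch for inspecting one of the .npy files; the file name is a placeholder, and the per-class (num_classes, H, W) layout is an assumption that may not match the actual format:

# inspect_cam.py -- rough sketch for peeking at one generated CAM file
# The path is a placeholder; the per-class (C, H, W) layout is an assumption.
import numpy as np

cam = np.load("data/WSSS4LUAD/CAM/train/example.npy", allow_pickle=True)
print(type(cam), getattr(cam, "shape", None), getattr(cam, "dtype", None))

if isinstance(cam, np.ndarray) and cam.ndim == 3:
    # An argmax over the class axis gives a rough pseudo-mask for visual checks.
    pseudo_mask = cam.argmax(axis=0)
    print("pseudo-mask shape:", pseudo_mask.shape)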

Train & Test

WSSS4LUAD

  1. Ensure that the CAMs of the training set are placed in ./data/WSSS4LUAD/CAM/train (in .npy format)

  2. Split the validation and test datasets with split_validation.ipynb to produce regular 224x224 patches (a tiling sketch is shown after this list)

  3. Prepare the synthesized dataset and background mask with create_dataset.ipynb

  4. Run bash run.sh
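For step 2, the splitting itself is handled by split_validation.ipynb; purely as an illustration of the idea, tiling an image into non-overlapping 224x224 patches can look like the sketch below (this is not the notebook's actual code, and the example path is a placeholder):

# tile_224.py -- illustrative non-overlapping 224x224 tiling (not the notebook's code)
from PIL import Image

def tile(image_path, patch_size=224):
    img = Image.open(image_path)
    w, h = img.size
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            # PIL crop box is (left, upper, right, lower)
            patches.append(img.crop((left, top, left + patch_size, top + patch_size)))
    return patches

# Example (placeholder path):
# patches = tile("data/WSSS4LUAD/validation/some_image.png")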

BCSS-WSSS

  1. Ensure that the CAMs of the training set are placed in ./data/BCSS-WSSS/CAM/train (in .npy format)

  2. Prepare the synthesized dataset with create_dataset_bcss.ipynb

  3. Run bash run-bcss.sh

Note

Due to a server failure, the original weights behind the results reported in the paper were lost, so the current code and weights have been re-implemented and re-trained. The results differ slightly from the original ones (with a small improvement in mIoU), but the overall performance is similar.

WSSS4LUAD

  • Test mIoU: 0.7530
  • Test fwIoU: 0.7582
  • Test tissue IoU: TUM 0.7991, STR 0.7020, NOM 0.7580

BCSS-WSSS

  • Test mIoU: 0.7075
  • Test fwIoU: 0.7576
  • Test tissue IoU: TUM 0.8144, STR 0.7446, LYM 0.6063, NEC 0.6645
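For reference, mIoU is the unweighted mean of the per-class IoUs, while fwIoU weights each class IoU by its pixel frequency. Below is a minimal sketch of how these metrics are commonly computed from a confusion matrix (not necessarily the exact evaluation code in this repository):

# iou_metrics.py -- common mIoU / fwIoU computation from a confusion matrix
import numpy as np

def iou_metrics(conf):
    # conf[i, j] = number of pixels of ground-truth class i predicted as class j
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-12)
    freq = conf.sum(axis=1) / conf.sum()          # per-class pixel frequency
    return iou, iou.mean(), float((freq * iou).sum())

# Toy 3-class example:
conf = np.array([[50, 2, 3], [4, 40, 1], [2, 2, 30]])
per_class_iou, miou, fwiou = iou_metrics(conf)
print(per_class_iou, miou, fwiou)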

Reproducibility

We tried our best to ensure the reproducibility of the results, but since torch.nn.functional.interpolate is not deterministic, results may differ between runs. To fully reproduce the reported numbers, you can use the following weights (code: yj84): the preliminary segmentation checkpoints (epoch=*.ckpt), the refining network weights (ResNet38-RFM.pth), and the precise segmentation checkpoints (segmentation_log/epoch=*.ckpt), together with the intermediate results (the generated CAMs in data/CAM/train.zip and the refined masks in refine/CAM.zip). Training logs are also provided for reference.
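As a hedged illustration (not the repository's actual training code), the usual way to pin down the remaining sources of randomness in a PyTorch / PyTorch Lightning setup is sketched below; even with these settings, torch.nn.functional.interpolate can remain nondeterministic on GPU:

# determinism_sketch.py -- typical seeding for PyTorch Lightning runs (illustrative only)
import torch
import pytorch_lightning as pl

pl.seed_everything(42, workers=True)   # seeds Python, NumPy, and PyTorch RNGs

# Prefer deterministic kernels; warn_only=True avoids hard errors for ops
# (e.g. some interpolate backward passes) that have no deterministic implementation.
torch.use_deterministic_algorithms(True, warn_only=True)
torch.backends.cudnn.benchmark = False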

Citation

If you find our code helpful, please cite as follows:

@inproceedings{fang2023weakly,
  title={Weakly-supervised semantic segmentation for histopathology images based on dataset synthesis and feature consistency constraint},
  author={Fang, Zijie and Chen, Yang and Wang, Yifeng and Wang, Zhi and Ji, Xiangyang and Zhang, Yongbing},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={37},
  number={1},
  pages={606--613},
  year={2023}
}


pistoseg's Issues

Split dataset

Dear author,
How exactly should split_validation.ipynb be used to split the validation and test datasets?

Weights for OEEM

Hello, I would like to reproduce your work and tried to download the CAM model's weights from the OEEM repository, but I cannot download them from there. Could you upload the weights to Google Drive or somewhere else I can access?

Question about the comparison experiments

Hello, this is very interesting and meaningful work; congratulations on the paper's publication at AAAI 2023! I noticed that the comparison experiments in the paper include C-CAM (CVPR 2022). I also need to compare against it in my own work, but on a different dataset, and I could not find its open-source code online. How did you obtain those experimental results?
