sraashis / deepdyn

PyTorch implementation of the paper https://www.frontiersin.org/articles/10.3389/fcomp.2020.00035/full

License: MIT License

Python 46.59% Jupyter Notebook 53.41%
computer-vision convolutional-neural-networks graph-algorithms image-procesing u-net unet vessel-segmentation machine-learning fundus-image pytorch

deepdyn's Introduction

Implementation of Dynamic Deep Networks for Retinal Vessel Segmentation (https://arxiv.org/abs/1903.07803)

A PyTorch-based framework for medical image processing with convolutional neural networks.

It comes with an example of U-Net segmentation on the DRIVE dataset [1]. The DRIVE dataset is composed of 40 retinal fundus images.

Update Jul 30, 2020: Please check out a better, pip-installable version of the framework with an example Here.

Required dependencies

We need the python3, numpy, pandas, pytorch, torchvision, matplotlib, and pillow packages:

pip install -r deepdyn/assets/requirements.txt

Flow

Project Structure

Dataset check

Original image and its corresponding ground-truth image. The ground truth is a binary image with each vessel pixel (white) set to 255 and each background pixel (black) set to 0. Sample DRIVE image
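As a quick sanity check, an image/ground-truth pair can be inspected with PIL and numpy (a minimal sketch; the file names are illustrative and assume the directory layout used in the configurations below):

import numpy as np
from PIL import Image

img = np.array(Image.open('data/DRIVE/images/21_training.tif'))
truth = np.array(Image.open('data/DRIVE/manual/21_manual1.gif').convert('L'))

print(img.shape)         # (H, W, 3) color fundus image
print(np.unique(truth))  # expected [0, 255]: background and vessel pixels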

Unet

Usage

Example main.py

import testarch.unet as unet
import testarch.unet.runs as r_unet
import testarch.miniunet as mini_unet
import testarch.miniunet.runs as r_miniunet
import torchvision.transforms as tmf


# Transforms applied to every input patch before it is fed to the network.
transforms = tmf.Compose([
    tmf.ToPILImage(),
    tmf.ToTensor()
])

if __name__ == "__main__":
    # Run the U-Net first; the mini U-Net then consumes the probability maps it logs.
    unet.run([r_unet.DRIVE], transforms)
    mini_unet.run([r_miniunet.DRIVE], transforms)

Here the testarch.unet.runs file contains a predefined configuration DRIVE with all the necessary parameters.

import os
import numpy as np

sep = os.sep
DRIVE = {
    'Params': {
        'num_channels': 1,
        'num_classes': 2,
        'batch_size': 4,
        'epochs': 250,
        'learning_rate': 0.001,
        'patch_shape': (388, 388),
        'patch_offset': (150, 150),
        'expand_patch_by': (184, 184),
        'use_gpu': True,
        'distribute': True,
        'shuffle': True,
        'log_frequency': 5,
        'validation_frequency': 1,
        'mode': 'train',
        'parallel_trained': False,
    },
    'Dirs': {
        'image': 'data' + sep + 'DRIVE' + sep + 'images',
        'mask': 'data' + sep + 'DRIVE' + sep + 'mask',
        'truth': 'data' + sep + 'DRIVE' + sep + 'manual',
        'logs': 'logs' + sep + 'DRIVE' + sep + 'UNET',
        'splits_json': 'data' + sep + 'DRIVE' + sep + 'splits'
    },

    'Funcs': {
        'truth_getter': lambda file_name: file_name.split('_')[0] + '_manual1.gif',
        'mask_getter': lambda file_name: file_name.split('_')[0] + '_mask.gif',
        'dparm': lambda x: np.random.choice(np.arange(1, 101, 1), 2)
    }
}
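For instance, the two getters above derive the ground-truth and mask file names from a DRIVE image file name by splitting on the underscore (a quick illustration, not part of the framework):

truth_getter = lambda file_name: file_name.split('_')[0] + '_manual1.gif'
mask_getter = lambda file_name: file_name.split('_')[0] + '_mask.gif'

print(truth_getter('21_training.tif'))  # 21_manual1.gif
print(mask_getter('21_training.tif'))   # 21_mask.gif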

Similarly, the testarch.miniunet.runs file contains a predefined configuration DRIVE with all the necessary parameters. NOTE: Make sure it picks up the probability maps from the logs of the previous U-Net run.

import os
sep = os.sep
DRIVE = {
    'Params': {
        'num_channels': 2,
        'num_classes': 2,
        'batch_size': 4,
        'epochs': 100,
        'learning_rate': 0.001,
        'patch_shape': (100, 100),
        'expand_patch_by': (40, 40),
        'use_gpu': True,
        'distribute': True,
        'shuffle': True,
        'log_frequency': 20,
        'validation_frequency': 1,
        'mode': 'train',
        'parallel_trained': False
    },
    'Dirs': {
        'image': 'data' + sep + 'DRIVE' + sep + 'images',
        'image_unet': 'logs' + sep + 'DRIVE' + sep + 'UNET',
        'mask': 'data' + sep + 'DRIVE' + sep + 'mask',
        'truth': 'data' + sep + 'DRIVE' + sep + 'manual',
        'logs': 'logs' + sep + 'DRIVE' + sep + 'MINI-UNET',
        'splits_json': 'data' + sep + 'DRIVE' + sep + 'splits'
    },

    'Funcs': {
        'truth_getter': lambda file_name: file_name.split('_')[0] + '_manual1.gif',
        'mask_getter': lambda file_name: file_name.split('_')[0] + '_mask.gif'
    }
}
  • num_channels: Input channels to the CNN. We feed only the green channel to the U-Net.
  • num_classes: Output classes of the CNN. We have two: vessel and background.
  • patch_shape, expand_patch_by: The U-Net takes a 388 * 388 patch but also looks at 184 extra pixels along each dimension, making the input 572 * 572. We mirror the image if we run into the image edges while expanding. So a 572 * 572 patch goes in and a 388 * 388 * 2 output comes out.
  • patch_offset: Stride between two consecutive input patches. The resulting overlap gives us more training data.
  • distribute: Uses all GPUs in parallel if set to True. [WARN] torch.cuda.set_device(1) must not be called if this is set to True.
  • shuffle: Shuffle the training data after every epoch.
  • log_frequency: Print a log line with average scores after this many batches. No rocket science :).
  • validation_frequency: Run validation after this many epochs. We also persist the best-performing model.
  • mode: train/test.
  • parallel_trained: Whether a resumed model was trained in parallel or not.
  • logs: Directory for all logs.
  • splits_json: A directory of JSON files, each listing files under the keys 'train', 'test', and 'validation' (a minimal example is sketched after this list). auto_split.py (https://github.com/sraashis/deepdyn/blob/master/utils/auto_split.py) takes a folder with all images and generates these automatically. This is handy when we want to do k-fold cross-validation: we just have to generate k such JSON files and put them in the splits_json folder.
  • truth_getter, mask_getter: Custom functions that map an input image file name to its ground-truth and mask file names, respectively.
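A split file could be generated along these lines (a sketch only; the file lists are abbreviated and the train/validation/test assignment is purely illustrative, whereas real splits would normally come from auto_split.py):

import json
import os

split = {
    'train': ['22_training.tif', '23_training.tif'],  # ...remaining training images
    'validation': ['21_training.tif', '35_training.tif', '37_training.tif', '38_training.tif', '39_training.tif'],
    'test': ['40_training.tif']  # ...images held out for testing
}

os.makedirs('data/DRIVE/splits', exist_ok=True)
with open(os.path.join('data', 'DRIVE', 'splits', 'UNET-DRIVE.json'), 'w') as f:
    json.dump(split, f, indent=2)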

Sample log

workstation$ python main.py 
Total Params: 31042434
### SPLIT FOUND:  data/DRIVE/splits/UNET-DRIVE.json Loaded
Patches: 135
Patches: 9
Patches: 9
Patches: 9
Patches: 9
Patches: 9
Training...
Epochs[1/40] Batch[5/34] loss:0.72354 pre:0.326 rec:0.866 f1:0.473 acc:0.833
Epochs[1/40] Batch[10/34] loss:0.34364 pre:0.584 rec:0.638 f1:0.610 acc:0.912
Epochs[1/40] Batch[15/34] loss:0.22827 pre:0.804 rec:0.565 f1:0.664 acc:0.939
Epochs[1/40] Batch[20/34] loss:0.19549 pre:0.818 rec:0.629 f1:0.711 acc:0.947
Epochs[1/40] Batch[25/34] loss:0.17726 pre:0.713 rec:0.741 f1:0.727 acc:0.954
Epochs[1/40] Batch[30/34] loss:0.16564 pre:0.868 rec:0.691 f1:0.770 acc:0.946
Running validation..
21_training.tif  PRF1A [0.66146, 0.37939, 0.4822, 0.93911]
39_training.tif  PRF1A [0.79561, 0.28355, 0.41809, 0.93219]
37_training.tif  PRF1A [0.78338, 0.47221, 0.58924, 0.94245]
35_training.tif  PRF1A [0.83836, 0.45788, 0.59228, 0.94534]
38_training.tif  PRF1A [0.64682, 0.26709, 0.37807, 0.92416]
Score improved:  0.0 to 0.49741 BEST CHECKPOINT SAVED
Epochs[2/40] Batch[5/34] loss:0.41760 pre:0.983 rec:0.243 f1:0.389 acc:0.916
Epochs[2/40] Batch[10/34] loss:0.27762 pre:0.999 rec:0.025 f1:0.049 acc:0.916
Epochs[2/40] Batch[15/34] loss:0.25742 pre:0.982 rec:0.049 f1:0.093 acc:0.886
Epochs[2/40] Batch[20/34] loss:0.23239 pre:0.774 rec:0.421 f1:0.545 acc:0.928
Epochs[2/40] Batch[25/34] loss:0.23667 pre:0.756 rec:0.506 f1:0.607 acc:0.930
Epochs[2/40] Batch[30/34] loss:0.19529 pre:0.936 rec:0.343 f1:0.502 acc:0.923
Running validation..
21_training.tif  PRF1A [0.95381, 0.45304, 0.6143, 0.95749]
39_training.tif  PRF1A [0.84353, 0.48988, 0.6198, 0.94837]
37_training.tif  PRF1A [0.8621, 0.60001, 0.70757, 0.95665]
35_training.tif  PRF1A [0.86854, 0.64861, 0.74263, 0.96102]
38_training.tif  PRF1A [0.93073, 0.28781, 0.43966, 0.93669]
Score improved:  0.49741 to 0.63598 BEST CHECKPOINT SAVED
...

Results

The network is trained for 40 epochs with 15 training images, 5 validation images, and 20 test images.

Training_Loss Training_Scores
The figures above show the training cross-entropy loss, F1, and accuracy.

Precision-Recall color-Map
The figure above is the precision-recall map for training and validation, respectively, with color indicating the training iteration.

Validation_scores
The figure above shows the validation F1 and accuracy.

Test scores and result
The figure on the left shows the results on the test set after training and validation. The one on the right is the segmentation result on one of the test images.

Thank you! ❤

References

  1. J. Staal, M. Abramoff, M. Niemeijer, M. Viergever, and B. van Ginneken, “Ridge based vessel segmentation in color images of the retina,” IEEE Transactions on Medical Imaging 23, 501–509 (2004)
  2. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in MICCAI (2015)
  3. Dynamic Deep Networks for Retinal Vessel Segmentation, https://arxiv.org/abs/1903.07803

deepdyn's People

Contributors

aturegsu, restrada1, sraashis


deepdyn's Issues

unclear

  • Architecture
  1. Combination of unet and pixelwise
  • Merging final patches
  • Last patch issue
  • Last layer: which of 0 or 1 to take
  • Rotation of image and ground truth for more data, but the same image size is needed

Training

  • Smart initialization
  • Less data

Execution error

err
Hi,
Could you help me execute this code? It stops at epoch 150.

Thanks

unet_trainer evaluate() accuracy log

unet_trainer evaluate() accuracy has to be adjusted based on the new tensor shape, since we now have an image as output and an image as label, as opposed to one output and one label in the previously used simple network.
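For reference, pixel-wise scores over an image-shaped output and label could be computed roughly as follows (a sketch under the assumption of 0/1-valued prediction and label arrays, not the framework's actual ScoreAccumulator):

def pixel_scores(pred, truth):
    # pred, truth: torch tensors or numpy arrays of the same shape,
    # with 1 marking vessel pixels and 0 marking background.
    tp = ((pred == 1) & (truth == 1)).sum().item()
    fp = ((pred == 1) & (truth == 0)).sum().item()
    fn = ((pred == 0) & (truth == 1)).sum().item()
    tn = ((pred == 0) & (truth == 0)).sum().item()
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    accuracy = (tp + tn) / max(tp + fp + fn + tn, 1)
    return precision, recall, f1, accuracy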

Getting error when installing dependencies

@sraashis

After making a new environment with conda and installing Python 3.7 in it, I try to install the required dependencies but get the following error.

Screenshot from 2020-06-05 18-07-20

Am I supposed to install any other packages before running the following command?
pip install -r deepdyn/assets/requirements.txt

weight file

Thank you for the great code.
How can I get the weight file?

Execution problem

image
Can you help me execute this code? I am having a problem because the epochs do not start.

strange output

When I run this code, the following output is given:
C:\Users\HI\Desktop\ature-master>python main.py
Total Params: 31042434
[0, 0, 0, 0]
What is it, and how do I get the correct output?
I am using Windows.

Mask not applied & PicklingError

Hi again,

I tried to run the main.py but the logs do not look good...

Run log:

Total Params: 1864322
Total Params: 31042434
### SPLIT FOUND:  data\STARE\splits\STARE_0.json Loaded
### Mask not applied.  im0077.ppm
### Mask not applied.  im0081.ppm
### Mask not applied.  im0236.ppm
### Mask not applied.  im0255.ppm
### Mask not applied.  im0005.ppm
### Mask not applied.  im0003.ppm
### Mask not applied.  im0235.ppm
### Mask not applied.  im0139.ppm
### Mask not applied.  im0319.ppm
### Mask not applied.  im0001.ppm
### Mask not applied.  im0163.ppm
### Mask not applied.  im0044.ppm
Patches: 144
### Mask not applied.  im0002.ppm
Patches: 12
### Mask not applied.  im0162.ppm
Patches: 12
### Mask not applied.  im0291.ppm
Patches: 12
### Mask not applied.  im0082.ppm
Patches: 12
Training...
### SPLIT FOUND:  data\STARE\splits\STARE_1.json Loaded
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### Mask not applied.  im0324.ppm
### Mask not applied.  im0239.ppm
### Mask not applied.  im0240.ppm
### Mask not applied.  im0004.ppm
### Mask not applied.  im0005.ppm
### Mask not applied.  im0003.ppm
### Mask not applied.  im0235.ppm
### Mask not applied.  im0139.ppm
### Mask not applied.  im0319.ppm
### Mask not applied.  im0001.ppm
### Mask not applied.  im0163.ppm
### Mask not applied.  im0044.ppm
Patches: 144
### Mask not applied.  im0077.ppm
Patches: 12
### Mask not applied.  im0081.ppm
Patches: 12
### Mask not applied.  im0236.ppm
Patches: 12
### Mask not applied.  im0255.ppm
Patches: 12
Training...
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### SPLIT FOUND:  data\STARE\splits\STARE_2.json Loaded
### Mask not applied.  im0324.ppm
### Mask not applied.  im0239.ppm
### Mask not applied.  im0240.ppm
### Mask not applied.  im0004.ppm
### Mask not applied.  im0002.ppm
### Mask not applied.  im0162.ppm
### Mask not applied.  im0291.ppm
### Mask not applied.  im0082.ppm
### Mask not applied.  im0319.ppm
### Mask not applied.  im0001.ppm
### Mask not applied.  im0163.ppm
### Mask not applied.  im0044.ppm
Patches: 144
### Mask not applied.  im0005.ppm
Patches: 12
### Mask not applied.  im0003.ppm
Patches: 12
### Mask not applied.  im0235.ppm
Patches: 12
### Mask not applied.  im0139.ppm
Patches: 12
Training...
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### SPLIT FOUND:  data\STARE\splits\STARE_3.json Loaded
### Mask not applied.  im0324.ppm
### Mask not applied.  im0239.ppm
### Mask not applied.  im0240.ppm
### Mask not applied.  im0004.ppm
### Mask not applied.  im0002.ppm
### Mask not applied.  im0162.ppm
### Mask not applied.  im0291.ppm
### Mask not applied.  im0082.ppm
### Mask not applied.  im0077.ppm
### Mask not applied.  im0081.ppm
### Mask not applied.  im0236.ppm
### Mask not applied.  im0255.ppm
Patches: 144
### Mask not applied.  im0319.ppm
Patches: 12
### Mask not applied.  im0001.ppm
Patches: 12
### Mask not applied.  im0163.ppm
Patches: 12
### Mask not applied.  im0044.ppm
Patches: 12
Training...
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### SPLIT FOUND:  data\STARE\splits\STARE_4.json Loaded
Total Params: 1864322
Total Params: 31042434
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\spawn.py", line 105, in spawn_main
	exitcode = _main(fd)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\spawn.py", line 115, in _main
	self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
### Mask not applied.  im0002.ppm
### Mask not applied.  im0162.ppm
### Mask not applied.  im0291.ppm
### Mask not applied.  im0082.ppm
### Mask not applied.  im0077.ppm
### Mask not applied.  im0081.ppm
### Mask not applied.  im0236.ppm
### Mask not applied.  im0255.ppm
### Mask not applied.  im0005.ppm
### Mask not applied.  im0003.ppm
### Mask not applied.  im0235.ppm
### Mask not applied.  im0139.ppm
Patches: 144
### Mask not applied.  im0324.ppm
Patches: 12
### Mask not applied.  im0239.ppm
Patches: 12
### Mask not applied.  im0240.ppm
Patches: 12
### Mask not applied.  im0004.ppm
Patches: 12
Training...
[0, 0, 0, 0]
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### SPLIT FOUND:  data\STARE\splits\STARE_0.json Loaded
### Mask not applied.  im0077.ppm
Total Params: 1864322
### SPLIT FOUND:  data\STARE\splits\STARE_1.json Loaded
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\__init__.py", line 42, in run
	mode='train')
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\datagen.py", line 88, in get_loader
	gen = cls(conf=conf, images=images, transforms=transforms, shuffle_indices=True, mode=mode)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 29, in __init__
	self._load_indices()
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 35, in _load_indices
	img_obj = self._get_image_obj(img_file)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 60, in _get_image_obj
	self.unet_dir + sep + img_obj.file_name.split('.')[0] + self.input_image_ext, 1)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\utils\img_utils.py", line 186, in get_image_as_array
	img = IMG.open(image_file)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\PIL\Image.py", line 2809, in open
	fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'logs\\STARE\\UNET_1_100_1\\im0077.png'
### Mask not applied.  im0324.ppm
### SPLIT FOUND:  data\STARE\splits\STARE_2.json Loaded
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\__init__.py", line 42, in run
	mode='train')
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\datagen.py", line 88, in get_loader
	gen = cls(conf=conf, images=images, transforms=transforms, shuffle_indices=True, mode=mode)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 29, in __init__
	self._load_indices()
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 35, in _load_indices
	img_obj = self._get_image_obj(img_file)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 60, in _get_image_obj
	self.unet_dir + sep + img_obj.file_name.split('.')[0] + self.input_image_ext, 1)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\utils\img_utils.py", line 186, in get_image_as_array
	img = IMG.open(image_file)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\PIL\Image.py", line 2809, in open
	fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'logs\\STARE\\UNET_1_100_1\\im0324.png'
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\__init__.py", line 42, in run
	mode='train')
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\datagen.py", line 88, in get_loader
	gen = cls(conf=conf, images=images, transforms=transforms, shuffle_indices=True, mode=mode)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 29, in __init__
	self._load_indices()
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 35, in _load_indices
	img_obj = self._get_image_obj(img_file)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 60, in _get_image_obj
	self.unet_dir + sep + img_obj.file_name.split('.')[0] + self.input_image_ext, 1)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\utils\img_utils.py", line 186, in get_image_as_array
	img = IMG.open(image_file)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\PIL\Image.py", line 2809, in open
	fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'logs\\STARE\\UNET_1_100_1\\im0324.png'
### Mask not applied.  im0324.ppm
### SPLIT FOUND:  data\STARE\splits\STARE_3.json Loaded
### Mask not applied.  im0324.ppm
### SPLIT FOUND:  data\STARE\splits\STARE_4.json Loaded
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\__init__.py", line 42, in run
	mode='train')
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\datagen.py", line 88, in get_loader
	gen = cls(conf=conf, images=images, transforms=transforms, shuffle_indices=True, mode=mode)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 29, in __init__
	self._load_indices()
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 35, in _load_indices
	img_obj = self._get_image_obj(img_file)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 60, in _get_image_obj
	self.unet_dir + sep + img_obj.file_name.split('.')[0] + self.input_image_ext, 1)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\utils\img_utils.py", line 186, in get_image_as_array
	img = IMG.open(image_file)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\PIL\Image.py", line 2809, in open
	fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'logs\\STARE\\UNET_1_100_1\\im0324.png'
### Mask not applied.  im0002.ppm
[0, 0, 0, 0]
### SPLIT FOUND:  data\STARE\splits\STARE_0.json Loaded
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\__init__.py", line 42, in run
	mode='train')
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\datagen.py", line 88, in get_loader
	gen = cls(conf=conf, images=images, transforms=transforms, shuffle_indices=True, mode=mode)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 29, in __init__
	self._load_indices()
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 35, in _load_indices
	img_obj = self._get_image_obj(img_file)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\miniunet\miniunet_dataloader.py", line 60, in _get_image_obj
	self.unet_dir + sep + img_obj.file_name.split('.')[0] + self.input_image_ext, 1)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\utils\img_utils.py", line 186, in get_image_as_array
	img = IMG.open(image_file)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\PIL\Image.py", line 2809, in open
	fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'logs\\STARE\\UNET_1_100_1\\im0002.png'
Total Params: 31042434
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\spawn.py", line 105, in spawn_main
	exitcode = _main(fd)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\spawn.py", line 115, in _main
	self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Total Params: 1864322
### Mask not applied.  im0077.ppm
### Mask not applied.  im0081.ppm
### Mask not applied.  im0236.ppm
### Mask not applied.  im0255.ppm
### Mask not applied.  im0005.ppm
### Mask not applied.  im0003.ppm
### Mask not applied.  im0235.ppm
### Mask not applied.  im0139.ppm
### Mask not applied.  im0319.ppm
### Mask not applied.  im0001.ppm
### Mask not applied.  im0163.ppm
### Mask not applied.  im0044.ppm
Patches: 144
### Mask not applied.  im0002.ppm
Patches: 12
### Mask not applied.  im0162.ppm
Patches: 12
### Mask not applied.  im0291.ppm
Patches: 12
### Mask not applied.  im0082.ppm
Patches: 12
Training...
Traceback (most recent call last):
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\testarch\unet\__init__.py", line 52, in run
	epoch_run=trainer.epoch_ce_loss)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 84, in train
	epoch_run(epoch=epoch, data_loader=data_loader, logger=self.train_logger)
  File "C:\Users\Cookie\Documents\GitKraken\deepdyn\torchtrainer\torchtrainer.py", line 221, in epoch_ce_loss
	for i, data in enumerate(kw['data_loader'], 1):
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__
	return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__
	w.start()
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\process.py", line 112, in start
	self._popen = self._Popen(self)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 223, in _Popen
	return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\context.py", line 322, in _Popen
	return Popen(process_obj)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
	reduction.dump(process_obj, to_child)
  File "C:\Users\Cookie\Anaconda3\envs\deepdyn\lib\multiprocessing\reduction.py", line 60, in dump
	ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed
### SPLIT FOUND:  data\STARE\splits\STARE_1.json Loaded

Process finished with exit code -1

I get a lot of warnings regarding:

Mask not applied

That's probably not very good.
Furthermore, I get a lot of pickle errors:

_pickle.PicklingError: Can't pickle <function <lambda> at 0x000002DA8D01B318>: attribute lookup <lambda> on testarch.unet.runs failed

Do you know why this could be happening?

Best,
Karol

ModuleNotFoundError

Traceback (most recent call last):
  File "main.py", line 1, in <module>
    import testarch.unet as net
  File "C:\Users\HI\Desktop\ature-master\testarch\unet\__init__.py", line 14, in <module>
    from utils.measurements import ScoreAccumulator
  File "C:\Users\HI\Desktop\ature-master\utils\measurements.py", line 14, in <module>
    import imgcommons.utils as imgutils
ModuleNotFoundError: No module named 'imgcommons'

Where to see segmentation result images?

I finished running train/test but I am unsure where I can find the segmentation results for training and testing (similar to the ones in the readme). Could you point me to where the result images are? Thanks!

Pls provide weights

Hi,

Could you provide your best pretrained weights, so we can evaluate out of the box?
That would be really nice! :)

Best,
Karol
