
ANCIS-Pytorch

Attentive Neural Cell Instance Segmentation. Article: https://doi.org/10.1016/j.media.2019.05.004

Please cite the article in your publications if it helps your research:

@article{YI2019228,
	title = "Attentive neural cell instance segmentation",
	journal = "Medical Image Analysis",
	volume = "55",
	pages = "228 - 240",
	year = "2019",
	issn = "1361-8415",
	doi = "https://doi.org/10.1016/j.media.2019.05.004",
	url = "http://www.sciencedirect.com/science/article/pii/S1361841518308442",
	author = "Jingru Yi and Pengxiang Wu and Menglin Jiang and Qiaoying Huang and Daniel J. Hoeppner and Dimitris N. Metaxas"
}

Introduction

Neural cell instance segmentation, which aims at joint detection and segmentation of every neural cell in a microscopic image, is essential to many neuroscience applications. The challenge of this task involves cell adhesion, cell distortion, unclear cell contours, low-contrast cell protrusion structures, and background impurities. Consequently, current instance segmentation methods generally fall short of precision. In this paper, we propose an attentive instance segmentation method that accurately predicts the bounding box of each cell as well as its segmentation mask simultaneously. In particular, our method builds on a joint network that combines a single shot multi-box detector (SSD) and a U-net. Furthermore, we employ the attention mechanism in both detection and segmentation modules to focus the model on the useful features. The proposed method is validated on a dataset of neural cell microscopic images. Experimental results demonstrate that our approach can accurately detect and segment neural cell instances at a fast speed, comparing favorably with the state-of-the-art methods.

Dependencies

Libraries: OpenCV-Python, PyTorch > 0.4.0. Tested on Ubuntu 14.04.

Implementation Details

To accelerate the training process, we trained the detection and segmentation modules separately. In particular, the weights of the detection module are frozen when training the segmentation module.
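The freezing described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `dec_model` and `seg_model` are tiny stand-in modules, not the real detection and segmentation networks.

```python
import torch.nn as nn

def freeze(module: nn.Module) -> None:
    """Freeze a module: no gradients, and keep BatchNorm running stats fixed."""
    for p in module.parameters():
        p.requires_grad = False
    module.eval()  # stops BatchNorm running-stat updates during training

# Tiny stand-ins for the detection / segmentation networks.
dec_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
freeze(dec_model)

seg_model = nn.Conv2d(8, 1, 1)
# Only the segmentation parameters are handed to the optimizer.
trainable = [p for p in seg_model.parameters() if p.requires_grad]
```

Calling `eval()` on the frozen module matters as much as `requires_grad = False`: otherwise its BatchNorm statistics would still drift while the segmentation module trains.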

Pretrained Weights

The pretrained weights on the Kaggle dataset can be downloaded here. Note that the weights are trained on 402 images from the original 670-image training set; we use 134 images for validation and 134 for testing.
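The downloaded weights follow the standard PyTorch checkpoint pattern: a serialized `state_dict` that is loaded back into a model of the same architecture. A minimal round-trip sketch, with `nn.Linear` standing in for the repository's actual `ResNetSSD` class and an in-memory buffer standing in for the `.pth` file:

```python
import io
import torch
import torch.nn as nn

# Stand-in model; the repository's detection class is ResNetSSD and its
# checkpoint lives at dec_weights/end_model.pth.
model = nn.Linear(4, 2)

# A checkpoint is the model's state_dict serialized with torch.save;
# loading is the reverse.
buf = io.BytesIO()
torch.save(model.state_dict(), buf)
buf.seek(0)
state = torch.load(buf, map_location="cpu")
model.load_state_dict(state)
model.eval()  # inference mode
```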


ancis-pytorch's Issues

Running error with your code

I tried to run your code on my dataset, formatted like the Kaggle dataset. First I ran
python3 train_dec_kaggle.py --trainDir /home/xxx/ancis-test-data/train --testDir /home/xxx/ancis-test-data/val --batch_size 1 --num_epochs 10
and the process completed fine; I got a file "end_model.pth" under the folder "dec_weights". However, when I use that model file to run
python3 train_seg_kaggle.py --trainDir /home/xxx/ancis-test-data/train --testDir /home/xxx/ancis-test-data/val --batch_size 1 --num_epochs 10 --dec_weights dec_weights/end_model.pth
I get the error below:

Traceback (most recent call last): File "train_seg_kaggle.py", line 190, in <module> train(args) File "train_seg_kaggle.py", line 56, in train dec_model.load_state_dict(resume_dict) File "/home/ylink/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for ResNetSSD: Missing key(s) in state_dict: "bn1.weight", "bn1.bias", "bn1.running_mean", "bn1.running_var", "layer1.0.conv1.weight", "layer1.0.bn1.weight", "layer1.0.bn1.bias", "layer1.0.bn1.running_mean", "layer1.0.bn1.running_var", "layer1.0.conv2.weight", "layer1.0.bn2.weight", "layer1.0.bn2.bias", "layer1.0.bn2.running_mean", "layer1.0.bn2.running_var", "layer1.0.conv3.weight", "layer1.0.bn3.weight", "layer1.0.bn3.bias", "layer1.0.bn3.running_mean", "layer1.0.bn3.running_var", "layer1.0.downsample.0.weight", "layer1.0.downsample.1.weight", "layer1.0.downsample.1.bias", "layer1.0.downsample.1.running_mean", "layer1.0.downsample.1.running_var", "layer1.1.conv1.weight", "layer1.1.bn1.weight", "layer1.1.bn1.bias", "layer1.1.bn1.running_mean", "layer1.1.bn1.running_var", "layer1.1.conv2.weight", "layer1.1.bn2.weight", "layer1.1.bn2.bias", "layer1.1.bn2.running_mean", "layer1.1.bn2.running_var", "layer1.1.conv3.weight", "layer1.1.bn3.weight", "layer1.1.bn3.bias", "layer1.1.bn3.running_mean", "layer1.1.bn3.running_var", "layer1.2.conv1.weight", "layer1.2.bn1.weight", "layer1.2.bn1.bias", "layer1.2.bn1.running_mean", "layer1.2.bn1.running_var", "layer1.2.conv2.weight", "layer1.2.bn2.weight", "layer1.2.bn2.bias", "layer1.2.bn2.running_mean", "layer1.2.bn2.running_var", "layer1.2.conv3.weight", "layer1.2.bn3.weight", "layer1.2.bn3.bias", "layer1.2.bn3.running_mean", "layer1.2.bn3.running_var", "layer2.0.conv1.weight", "layer2.0.bn1.weight", "layer2.0.bn1.bias", "layer2.0.bn1.running_mean", "layer2.0.bn1.running_var", "layer2.0.conv2.weight", "layer2.0.bn2.weight", "layer2.0.bn2.bias", 
"layer2.0.bn2.running_mean", "layer2.0.bn2.running_var", "layer2.0.conv3.weight", "layer2.0.bn3.weight", "layer2.0.bn3.bias", "layer2.0.bn3.running_mean", "layer2.0.bn3.running_var", "layer2.0.downsample.0.weight", "layer2.0.downsample.1.weight", "layer2.0.downsample.1.bias", "layer2.0.downsample.1.running_mean", "layer2.0.downsample.1.running_var", "layer2.1.conv1.weight", "layer2.1.bn1.weight", "layer2.1.bn1.bias", "layer2.1.bn1.running_mean", "layer2.1.bn1.running_var", "layer2.1.conv2.weight", "layer2.1.bn2.weight", "layer2.1.bn2.bias", "layer2.1.bn2.running_mean", "layer2.1.bn2.running_var", "layer2.1.conv3.weight", "layer2.1.bn3.weight", "layer2.1.bn3.bias", "layer2.1.bn3.running_mean", "layer2.1.bn3.running_var", "layer2.2.conv1.weight", "layer2.2.bn1.weight", "layer2.2.bn1.bias", "layer2.2.bn1.running_mean", "layer2.2.bn1.running_var", "layer2.2.conv2.weight", "layer2.2.bn2.weight", "layer2.2.bn2.bias", "layer2.2.bn2.running_mean", "layer2.2.bn2.running_var", "layer2.2.conv3.weight", "layer2.2.bn3.weight", "layer2.2.bn3.bias", "layer2.2.bn3.running_mean", "layer2.2.bn3.running_var", "layer2.3.conv1.weight", "layer2.3.bn1.weight", "layer2.3.bn1.bias", "layer2.3.bn1.running_mean", "layer2.3.bn1.running_var", "layer2.3.conv2.weight", "layer2.3.bn2.weight", "layer2.3.bn2.bias", "layer2.3.bn2.running_mean", "layer2.3.bn2.running_var", "layer2.3.conv3.weight", "layer2.3.bn3.weight", "layer2.3.bn3.bias", "layer2.3.bn3.running_mean", "layer2.3.bn3.running_var", "layer3.0.conv1.weight", "layer3.0.bn1.weight", "layer3.0.bn1.bias", "layer3.0.bn1.running_mean", "layer3.0.bn1.running_var", "layer3.0.conv2.weight", "layer3.0.bn2.weight", "layer3.0.bn2.bias", "layer3.0.bn2.running_mean", "layer3.0.bn2.running_var", "layer3.0.conv3.weight", "layer3.0.bn3.weight", "layer3.0.bn3.bias", "layer3.0.bn3.running_mean", "layer3.0.bn3.running_var", "layer3.0.downsample.0.weight", "layer3.0.downsample.1.weight", "layer3.0.downsample.1.bias", "layer3.0.downsample.1.running_mean", 
"layer3.0.downsample.1.running_var", "layer3.1.conv1.weight", "layer3.1.bn1.weight", "layer3.1.bn1.bias", "layer3.1.bn1.running_mean", "layer3.1.bn1.running_var", "layer3.1.conv2.weight", "layer3.1.bn2.weight", "layer3.1.bn2.bias", "layer3.1.bn2.running_mean", "layer3.1.bn2.running_var", "layer3.1.conv3.weight", "layer3.1.bn3.weight", "layer3.1.bn3.bias", "layer3.1.bn3.running_mean", "layer3.1.bn3.running_var", "layer3.2.conv1.weight", "layer3.2.bn1.weight", "layer3.2.bn1.bias", "layer3.2.bn1.running_mean", "layer3.2.bn1.running_var", "layer3.2.conv2.weight", "layer3.2.bn2.weight", "layer3.2.bn2.bias", "layer3.2.bn2.running_mean", "layer3.2.bn2.running_var", "layer3.2.conv3.weight", "layer3.2.bn3.weight", "layer3.2.bn3.bias", "layer3.2.bn3.running_mean", "layer3.2.bn3.running_var", "layer3.3.conv1.weight", "layer3.3.bn1.weight", "layer3.3.bn1.bias", "layer3.3.bn1.running_mean", "layer3.3.bn1.running_var", "layer3.3.conv2.weight", "layer3.3.bn2.weight", "layer3.3.bn2.bias", "layer3.3.bn2.running_mean", "layer3.3.bn2.running_var", "layer3.3.conv3.weight", "layer3.3.bn3.weight", "layer3.3.bn3.bias", "layer3.3.bn3.running_mean", "layer3.3.bn3.running_var", "layer3.4.conv1.weight", "layer3.4.bn1.weight", "layer3.4.bn1.bias", "layer3.4.bn1.running_mean", "layer3.4.bn1.running_var", "layer3.4.conv2.weight", "layer3.4.bn2.weight", "layer3.4.bn2.bias", "layer3.4.bn2.running_mean", "layer3.4.bn2.running_var", "layer3.4.conv3.weight", "layer3.4.bn3.weight", "layer3.4.bn3.bias", "layer3.4.bn3.running_mean", "layer3.4.bn3.running_var", "layer3.5.conv1.weight", "layer3.5.bn1.weight", "layer3.5.bn1.bias", "layer3.5.bn1.running_mean", "layer3.5.bn1.running_var", "layer3.5.conv2.weight", "layer3.5.bn2.weight", "layer3.5.bn2.bias", "layer3.5.bn2.running_mean", "layer3.5.bn2.running_var", "layer3.5.conv3.weight", "layer3.5.bn3.weight", "layer3.5.bn3.bias", "layer3.5.bn3.running_mean", "layer3.5.bn3.running_var", "new_layer1.0.weight", "new_layer1.0.bias", "new_layer1.1.weight", 
"new_layer1.1.bias", "new_layer1.1.running_mean", "new_layer1.1.running_var", "new_layer1.3.weight", "new_layer1.3.bias", "new_layer1.4.weight", "new_layer1.4.bias", "new_layer1.4.running_mean", "new_layer1.4.running_var", "new_layer2.0.weight", "new_layer2.0.bias", "new_layer2.1.weight", "new_layer2.1.bias", "new_layer2.1.running_mean", "new_layer2.1.running_var", "new_layer2.3.weight", "new_layer2.3.bias", "new_layer2.4.weight", "new_layer2.4.bias", "new_layer2.4.running_mean", "new_layer2.4.running_var", "conf_c3.weight", "conf_c3.bias", "conf_c4.weight", "conf_c4.bias", "conf_c5.weight", "conf_c5.bias", "conf_c6.weight", "conf_c6.bias", "locs_c3.weight", "locs_c3.bias", "locs_c4.weight", "locs_c4.bias", "locs_c5.weight", "locs_c5.bias", "locs_c6.weight", "locs_c6.bias", "fusion_c3.weight", "fusion_c3.bias", "fusion_c4.weight", "fusion_c4.bias", "fusion_c5.weight", "fusion_c5.bias", "fusion_end.0.weight", "fusion_end.0.bias", "fusion_end.0.running_mean", "fusion_end.0.running_var", "fusion_end.1.weight", "fusion_end.1.bias", "att_c3.conv1.weight", "att_c3.conv2.weight", "att_c3.conv3.weight", "att_c4.conv1.weight", "att_c4.conv2.weight", "att_c4.conv3.weight", "att_c5.conv1.weight", "att_c5.conv2.weight", "att_c5.conv3.weight", "att_c6.conv1.weight", "att_c6.conv2.weight", "att_c6.conv3.weight". 
Unexpected key(s) in state_dict: "eight", "ght", "s", "ning_mean", "ning_var", "_batches_tracked", "0.conv1.weight", "0.bn1.weight", "0.bn1.bias", "0.bn1.running_mean", "0.bn1.running_var", "0.bn1.num_batches_tracked", "0.conv2.weight", "0.bn2.weight", "0.bn2.bias", "0.bn2.running_mean", "0.bn2.running_var", "0.bn2.num_batches_tracked", "0.conv3.weight", "0.bn3.weight", "0.bn3.bias", "0.bn3.running_mean", "0.bn3.running_var", "0.bn3.num_batches_tracked", "0.downsample.0.weight", "0.downsample.1.weight", "0.downsample.1.bias", "0.downsample.1.running_mean", "0.downsample.1.running_var", "0.downsample.1.num_batches_tracked", "1.conv1.weight", "1.bn1.weight", "1.bn1.bias", "1.bn1.running_mean", "1.bn1.running_var", "1.bn1.num_batches_tracked", "1.conv2.weight", "1.bn2.weight", "1.bn2.bias", "1.bn2.running_mean", "1.bn2.running_var", "1.bn2.num_batches_tracked", "1.conv3.weight", "1.bn3.weight", "1.bn3.bias", "1.bn3.running_mean", "1.bn3.running_var", "1.bn3.num_batches_tracked", "2.conv1.weight", "2.bn1.weight", "2.bn1.bias", "2.bn1.running_mean", "2.bn1.running_var", "2.bn1.num_batches_tracked", "2.conv2.weight", "2.bn2.weight", "2.bn2.bias", "2.bn2.running_mean", "2.bn2.running_var", "2.bn2.num_batches_tracked", "2.conv3.weight", "2.bn3.weight", "2.bn3.bias", "2.bn3.running_mean", "2.bn3.running_var", "2.bn3.num_batches_tracked", "3.conv1.weight", "3.bn1.weight", "3.bn1.bias", "3.bn1.running_mean", "3.bn1.running_var", "3.bn1.num_batches_tracked", "3.conv2.weight", "3.bn2.weight", "3.bn2.bias", "3.bn2.running_mean", "3.bn2.running_var", "3.bn2.num_batches_tracked", "3.conv3.weight", "3.bn3.weight", "3.bn3.bias", "3.bn3.running_mean", "3.bn3.running_var", "3.bn3.num_batches_tracked", "4.conv1.weight", "4.bn1.weight", "4.bn1.bias", "4.bn1.running_mean", "4.bn1.running_var", "4.bn1.num_batches_tracked", "4.conv2.weight", "4.bn2.weight", "4.bn2.bias", "4.bn2.running_mean", "4.bn2.running_var", "4.bn2.num_batches_tracked", "4.conv3.weight", "4.bn3.weight", "4.bn3.bias", 
"4.bn3.running_mean", "4.bn3.running_var", "4.bn3.num_batches_tracked", "5.conv1.weight", "5.bn1.weight", "5.bn1.bias", "5.bn1.running_mean", "5.bn1.running_var", "5.bn1.num_batches_tracked", "5.conv2.weight", "5.bn2.weight", "5.bn2.bias", "5.bn2.running_mean", "5.bn2.running_var", "5.bn2.num_batches_tracked", "5.conv3.weight", "5.bn3.weight", "5.bn3.bias", "5.bn3.running_mean", "5.bn3.running_var", "5.bn3.num_batches_tracked", "er1.0.weight", "er1.0.bias", "er1.1.weight", "er1.1.bias", "er1.1.running_mean", "er1.1.running_var", "er1.1.num_batches_tracked", "er1.3.weight", "er1.3.bias", "er1.4.weight", "er1.4.bias", "er1.4.running_mean", "er1.4.running_var", "er1.4.num_batches_tracked", "er2.0.weight", "er2.0.bias", "er2.1.weight", "er2.1.bias", "er2.1.running_mean", "er2.1.running_var", "er2.1.num_batches_tracked", "er2.3.weight", "er2.3.bias", "er2.4.weight", "er2.4.bias", "er2.4.running_mean", "er2.4.running_var", "er2.4.num_batches_tracked", ".weight", ".bias", "c3.weight", "c3.bias", "c4.weight", "c4.bias", "c5.weight", "c5.bias", "end.0.weight", "end.0.bias", "end.0.running_mean", "end.0.running_var", "end.0.num_batches_tracked", "end.1.weight", "end.1.bias", "conv2.weight", "conv3.weight". size mismatch for conv1.weight: copying a param with shape torch.Size([32, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 3, 7, 7]).
When I run test_dec_kaggle.py with the command
python3 test_dec_kaggle.py --testDir /home/xxx/ancis-test-data/test --resume dec_weights/end_model.pth
I get this error:
Resuming training weights from dec_weights/end_model.pth ... Traceback (most recent call last): File "test_dec_kaggle.py", line 95, in <module> test(args) File "test_dec_kaggle.py", line 40, in test model.load_state_dict(model_dict) File "/home/ylink/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for ResNetSSD: Unexpected key(s) in state_dict: "eight", "ght", "s", "ning_mean", "ning_var", "_batches_tracked", "0.conv1.weight", "0.bn1.weight", "0.bn1.bias", "0.bn1.running_mean", "0.bn1.running_var", "0.bn1.num_batches_tracked", "0.conv2.weight", "0.bn2.weight", "0.bn2.bias", "0.bn2.running_mean", "0.bn2.running_var", "0.bn2.num_batches_tracked", "0.conv3.weight", "0.bn3.weight", "0.bn3.bias", "0.bn3.running_mean", "0.bn3.running_var", "0.bn3.num_batches_tracked", "0.downsample.0.weight", "0.downsample.1.weight", "0.downsample.1.bias", "0.downsample.1.running_mean", "0.downsample.1.running_var", "0.downsample.1.num_batches_tracked", "1.conv1.weight", "1.bn1.weight", "1.bn1.bias", "1.bn1.running_mean", "1.bn1.running_var", "1.bn1.num_batches_tracked", "1.conv2.weight", "1.bn2.weight", "1.bn2.bias", "1.bn2.running_mean", "1.bn2.running_var", "1.bn2.num_batches_tracked", "1.conv3.weight", "1.bn3.weight", "1.bn3.bias", "1.bn3.running_mean", "1.bn3.running_var", "1.bn3.num_batches_tracked", "2.conv1.weight", "2.bn1.weight", "2.bn1.bias", "2.bn1.running_mean", "2.bn1.running_var", "2.bn1.num_batches_tracked", "2.conv2.weight", "2.bn2.weight", "2.bn2.bias", "2.bn2.running_mean", "2.bn2.running_var", "2.bn2.num_batches_tracked", "2.conv3.weight", "2.bn3.weight", "2.bn3.bias", "2.bn3.running_mean", "2.bn3.running_var", "2.bn3.num_batches_tracked", "3.conv1.weight", "3.bn1.weight", "3.bn1.bias", "3.bn1.running_mean", "3.bn1.running_var", "3.bn1.num_batches_tracked", "3.conv2.weight", "3.bn2.weight", "3.bn2.bias", "3.bn2.running_mean", 
"3.bn2.running_var", "3.bn2.num_batches_tracked", "3.conv3.weight", "3.bn3.weight", "3.bn3.bias", "3.bn3.running_mean", "3.bn3.running_var", "3.bn3.num_batches_tracked", "4.conv1.weight", "4.bn1.weight", "4.bn1.bias", "4.bn1.running_mean", "4.bn1.running_var", "4.bn1.num_batches_tracked", "4.conv2.weight", "4.bn2.weight", "4.bn2.bias", "4.bn2.running_mean", "4.bn2.running_var", "4.bn2.num_batches_tracked", "4.conv3.weight", "4.bn3.weight", "4.bn3.bias", "4.bn3.running_mean", "4.bn3.running_var", "4.bn3.num_batches_tracked", "5.conv1.weight", "5.bn1.weight", "5.bn1.bias", "5.bn1.running_mean", "5.bn1.running_var", "5.bn1.num_batches_tracked", "5.conv2.weight", "5.bn2.weight", "5.bn2.bias", "5.bn2.running_mean", "5.bn2.running_var", "5.bn2.num_batches_tracked", "5.conv3.weight", "5.bn3.weight", "5.bn3.bias", "5.bn3.running_mean", "5.bn3.running_var", "5.bn3.num_batches_tracked", "er1.0.weight", "er1.0.bias", "er1.1.weight", "er1.1.bias", "er1.1.running_mean", "er1.1.running_var", "er1.1.num_batches_tracked", "er1.3.weight", "er1.3.bias", "er1.4.weight", "er1.4.bias", "er1.4.running_mean", "er1.4.running_var", "er1.4.num_batches_tracked", "er2.0.weight", "er2.0.bias", "er2.1.weight", "er2.1.bias", "er2.1.running_mean", "er2.1.running_var", "er2.1.num_batches_tracked", "er2.3.weight", "er2.3.bias", "er2.4.weight", "er2.4.bias", "er2.4.running_mean", "er2.4.running_var", "er2.4.num_batches_tracked", ".weight", ".bias", "c3.weight", "c3.bias", "c4.weight", "c4.bias", "c5.weight", "c5.bias", "end.0.weight", "end.0.bias", "end.0.running_mean", "end.0.running_var", "end.0.num_batches_tracked", "end.1.weight", "end.1.bias", "conv2.weight", "conv3.weight". size mismatch for conv1.weight: copying a param with shape torch.Size([32, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 3, 7, 7]).
When I use that model file to run
python3 eval_dec_kaggle.py --testDir /home/xxx/ancis-test-data/test --resume dec_weights/end_model.pth
I get this error:
Traceback (most recent call last): File "eval_dec_kaggle.py", line 117, in <module> evaluation(args) File "eval_dec_kaggle.py", line 41, in evaluation model.load_state_dict(model_dict) File "/home/ylink/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 777, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for ResNetSSD: Unexpected key(s) in state_dict: "eight", "ght", "s", "ning_mean", "ning_var", "_batches_tracked", "0.conv1.weight", "0.bn1.weight", "0.bn1.bias", "0.bn1.running_mean", "0.bn1.running_var", "0.bn1.num_batches_tracked", "0.conv2.weight", "0.bn2.weight", "0.bn2.bias", "0.bn2.running_mean", "0.bn2.running_var", "0.bn2.num_batches_tracked", "0.conv3.weight", "0.bn3.weight", "0.bn3.bias", "0.bn3.running_mean", "0.bn3.running_var", "0.bn3.num_batches_tracked", "0.downsample.0.weight", "0.downsample.1.weight", "0.downsample.1.bias", "0.downsample.1.running_mean", "0.downsample.1.running_var", "0.downsample.1.num_batches_tracked", "1.conv1.weight", "1.bn1.weight", "1.bn1.bias", "1.bn1.running_mean", "1.bn1.running_var", "1.bn1.num_batches_tracked", "1.conv2.weight", "1.bn2.weight", "1.bn2.bias", "1.bn2.running_mean", "1.bn2.running_var", "1.bn2.num_batches_tracked", "1.conv3.weight", "1.bn3.weight", "1.bn3.bias", "1.bn3.running_mean", "1.bn3.running_var", "1.bn3.num_batches_tracked", "2.conv1.weight", "2.bn1.weight", "2.bn1.bias", "2.bn1.running_mean", "2.bn1.running_var", "2.bn1.num_batches_tracked", "2.conv2.weight", "2.bn2.weight", "2.bn2.bias", "2.bn2.running_mean", "2.bn2.running_var", "2.bn2.num_batches_tracked", "2.conv3.weight", "2.bn3.weight", "2.bn3.bias", "2.bn3.running_mean", "2.bn3.running_var", "2.bn3.num_batches_tracked", "3.conv1.weight", "3.bn1.weight", "3.bn1.bias", "3.bn1.running_mean", "3.bn1.running_var", "3.bn1.num_batches_tracked", "3.conv2.weight", "3.bn2.weight", "3.bn2.bias", "3.bn2.running_mean", "3.bn2.running_var", "3.bn2.num_batches_tracked", 
"3.conv3.weight", "3.bn3.weight", "3.bn3.bias", "3.bn3.running_mean", "3.bn3.running_var", "3.bn3.num_batches_tracked", "4.conv1.weight", "4.bn1.weight", "4.bn1.bias", "4.bn1.running_mean", "4.bn1.running_var", "4.bn1.num_batches_tracked", "4.conv2.weight", "4.bn2.weight", "4.bn2.bias", "4.bn2.running_mean", "4.bn2.running_var", "4.bn2.num_batches_tracked", "4.conv3.weight", "4.bn3.weight", "4.bn3.bias", "4.bn3.running_mean", "4.bn3.running_var", "4.bn3.num_batches_tracked", "5.conv1.weight", "5.bn1.weight", "5.bn1.bias", "5.bn1.running_mean", "5.bn1.running_var", "5.bn1.num_batches_tracked", "5.conv2.weight", "5.bn2.weight", "5.bn2.bias", "5.bn2.running_mean", "5.bn2.running_var", "5.bn2.num_batches_tracked", "5.conv3.weight", "5.bn3.weight", "5.bn3.bias", "5.bn3.running_mean", "5.bn3.running_var", "5.bn3.num_batches_tracked", "er1.0.weight", "er1.0.bias", "er1.1.weight", "er1.1.bias", "er1.1.running_mean", "er1.1.running_var", "er1.1.num_batches_tracked", "er1.3.weight", "er1.3.bias", "er1.4.weight", "er1.4.bias", "er1.4.running_mean", "er1.4.running_var", "er1.4.num_batches_tracked", "er2.0.weight", "er2.0.bias", "er2.1.weight", "er2.1.bias", "er2.1.running_mean", "er2.1.running_var", "er2.1.num_batches_tracked", "er2.3.weight", "er2.3.bias", "er2.4.weight", "er2.4.bias", "er2.4.running_mean", "er2.4.running_var", "er2.4.num_batches_tracked", ".weight", ".bias", "c3.weight", "c3.bias", "c4.weight", "c4.bias", "c5.weight", "c5.bias", "end.0.weight", "end.0.bias", "end.0.running_mean", "end.0.running_var", "end.0.num_batches_tracked", "end.1.weight", "end.1.bias", "conv2.weight", "conv3.weight". size mismatch for conv1.weight: copying a param with shape torch.Size([32, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 3, 7, 7]).
Could you help check why this error occurs?
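A note on the error above: the unexpected keys ("eight", "ght", "s", "ning_mean", "er1.0.weight", ...) are exactly the real key names ("conv1.weight", "bn1.weight", "bn1.bias", "bn1.running_mean", "new_layer1.0.weight", ...) with their first 7 characters cut off. That pattern typically appears when loading code unconditionally slices off a "module." prefix (added by nn.DataParallel) from keys that never had one. This is a diagnosis of the symptom, not a confirmed reading of this repository's code; a defensive version of the strip would be:

```python
def strip_module_prefix(state_dict):
    """Remove a leading 'module.' (added by nn.DataParallel) only when present.

    Blindly slicing every key with k[7:] truncates plain keys,
    e.g. 'conv1.weight' -> 'eight', 'bn1.running_mean' -> 'ning_mean'.
    """
    prefix = "module."
    return {k[len(prefix):] if k.startswith(prefix) else k: v
            for k, v in state_dict.items()}
```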

Can't understand the hard negative mining in the loss; could you explain?

# hard negative mining
pos_mask = pos_mask.squeeze(2)                    # torch.Size([2, 24448])
p_conf_batch = p_conf.view(-1, self.num_classes)  # torch.Size([48896, 2])
temp = self.log_sum_exp(p_conf_batch) - p_conf_batch.gather(dim=1, index=t_conf.view(-1, 1))
temp = temp.view(batch_size, -1)
temp[pos_mask] = 0.
_, temp_idx = temp.sort(1, descending=True)
_, idx_rank = temp_idx.sort(1)
num_neg = torch.clamp(self.neg_pos_ratio * num_pos, max=pos_mask.size(1) - 1).squeeze(2)
neg_mask = idx_rank < num_neg.expand_as(idx_rank)

When I try to replace the SSD module with RetinaNet, the way the losses are calculated, especially the classification loss, gives me trouble. Could you explain how this part of the code does negative mining in the loss? Have you tried the same with RetinaNet, and do you have any advice?

Thank you for sharing your code!
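For reference, the two consecutive sorts in the snippet are the standard double-argsort rank trick: sorting the losses (descending) gives an ordering; sorting the resulting indices gives each anchor's rank in that ordering, and negatives ranked below num_neg are kept. A minimal plain-Python illustration with made-up loss values:

```python
# Per-anchor classification losses (positives already zeroed out).
losses = [0.2, 0.9, 0.1, 0.5]

# First sort: indices ordered by descending loss.
order = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)

# Second sort (here done explicitly): rank of each index in that ordering.
rank = [0] * len(losses)
for r, i in enumerate(order):
    rank[i] = r

# Keep the num_neg hardest negatives, i.e. those with the smallest rank.
num_neg = 2
neg_mask = [r < num_neg for r in rank]
```

Here `order == [1, 3, 0, 2]`, `rank == [2, 0, 3, 1]`, so `neg_mask` selects indices 1 and 3, the two largest losses, matching what `idx_rank < num_neg` does per-row on tensors.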

How to split train, val and test?

You said in the paper that the Kaggle dataset is divided into 402 images for training, 134 for testing, and 134 for validation. How did you divide it? Could you give me the list?
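Absent the authors' actual file list, a reproducible 402/134/134 split of the 670 training images can be sketched as follows; the IDs and the seed are placeholders, so this will not match the paper's split:

```python
import random

# Placeholder IDs standing in for the 670 Kaggle training image names.
image_ids = [f"img_{i:03d}" for i in range(670)]

rng = random.Random(0)  # fixed (arbitrary) seed so the split is reproducible
rng.shuffle(image_ids)

train = image_ids[:402]
val = image_ids[402:536]
test = image_ids[536:]
```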
